[ { "msg_contents": "\nacctng=> vacuum radlog;\nNOTICE: BlowawayRelationBuffers(radlog, 3): block 786 is referenced\n(private 0, last 0, global 53)\nFATAL 1: VACUUM (vc_rpfheap): BlowawayRelationBuffers returned -2\nacctng=>\n\nJust got this on one of my tables...indices are all dropped before doing\nthe vacuum...\n\n\n\n", "msg_date": "Mon, 6 Apr 1998 10:10:36 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Ack...major(?) bug just found in v6.3.1..." }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> acctng=> vacuum radlog;\n> NOTICE: BlowawayRelationBuffers(radlog, 3): block 786 is referenced\n> (private 0, last 0, global 53)\n ^^^^^^^^^\nI assume that you got some FATAL before vacuum.\n\nWe have problems with elog(FATAL): backend just exits (with normal code)\nand postmaster doesn't re-initialize shmem etc though backend could\nhave some spinlocks and pinned buffers. This leaves system in unpredictable\nstate! \n\nIMO, in elog(FATAL) backend should abort() (just like in ASSERT).\n\n> FATAL 1: VACUUM (vc_rpfheap): BlowawayRelationBuffers returned -2\n> acctng=>\n> \n> Just got this on one of my tables...indices are all dropped before doing\n> the vacuum...\n\nComments ?\n\nVadim\n", "msg_date": "Tue, 07 Apr 1998 09:39:02 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Ack...major(?) bug just found in v6.3.1..." }, { "msg_contents": "> The Hermit Hacker wrote:\n> > \n> > acctng=> vacuum radlog;\n> > NOTICE: BlowawayRelationBuffers(radlog, 3): block 786 is referenced\n> > (private 0, last 0, global 53)\n> ^^^^^^^^^\n> I assume that you got some FATAL before vacuum.\n> \n> We have problems with elog(FATAL): backend just exits (with normal code)\n> and postmaster doesn't re-initialize shmem etc though backend could\n> have some spinlocks and pinned buffers. This leaves system in unpredictable\n> state! \n> \n> IMO, in elog(FATAL) backend should abort() (just like in ASSERT).\n\nThe other way to do this, and it might be a good idea is to review the code\nfor elogs while holding a spinlock and make sure to release the lock first!\n\nThis problem will get worse if we start allowing query cancelation from\nthe client, and when the spinlock backoff code goes in.\n\nIn the long run you have to make all the signal handlers safe. Safe means\nthey set a flag and the code periodically polls for it to catch whatever\nthe condition is. Obviously this doesn't work for SEGV etc, but IO and the\ntimers yes.\n\n-dg\n \nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Mon, 6 Apr 1998 19:53:13 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Ack...major(?) bug just found in v6.3.1..." } ]
[ { "msg_contents": "\nThe following was the result after doing a:\n\ncreate table <mirror of bad table>\ninsert into <newtable> from <oldtable>\nvacuum\n\n<1-832 removed here>\nNOTICE: Rel radlog: Uninitialized page 831 - fixing\nNOTICE: Rel radlog: Uninitialized page 832 - fixing\nNOTICE: Rel radlog: Uninitialized page 833 - fixing\nNOTICE: Rel radlog: Uninitialized page 834 - fixing\nNOTICE: Rel radlog: Uninitialized page 835 - fixing\nNOTICE: Rel radlog: Uninitialized page 836 - fixing\nVACUUM\na\n\n", "msg_date": "Mon, 6 Apr 1998 10:16:31 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "More on last reported problem..." } ]
[ { "msg_contents": "Bruce Momjian\n> \n> Are you still having problems? Can you check this again? Looks like it\n> is saying you are connecting via pgsql.\n> \n\nYes on pgsql. I had the same problems with perlscripts, php/fi,\neverything I tried.\n\nMercifully, this problem went away with the March 24 version. It\nseems to have fixed the authid problem.\n\nNow I have other problems... can't get perl modules to compile with\n6.3.1 on DEC UNIX, bizzarro date/time problems on DEC UNIX,\n\n______ on dec unix ----------\nedstrom=> create table t(x time);\nCREATE\nedstrom=> insert into t values('12:12:12');\nINSERT 27858 1\nedstrom=> insert into t values('12:12:33');\nERROR: Second must be limited to values 0 through < 60 in '12:12:33'\nedstrom=> \n\n__________\n\nIs that weird or what?\n\nI can't get it built under machten at all...\n\nIt works fine on Linux though!\n\nsigh!\n\n> > \n> > ============================================================================\n> > POSTGRESQL BUG REPORT TEMPLATE\n> > ============================================================================\n> > \n> > \n> > Your name\t\t: john edstrom\n> > Your email address\t: [email protected]\n> > \n> > Category\t\t: runtime: back-end\n> > Severity\t\t: serious\n> > \n> > Summary: ident authority map problem\n> > \n....\n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n> \n\n\tje\n\n-- \n John Edstrom | edstrom @ slugo.hmsc.orst.edu\n\n http://www.hmsc.orst.edu/~edstrom\n \"Lurker\" at BioMOO (bioinfo.weizmann.ac.il:8888)\n\n Hatfield Marine Science Center\n 2030 S. Marine Science Drive\n Newport, Oregon 97365-5296\n wk: (541) 867 0197\n fx: (541) 867 0138\n\n", "msg_date": "Mon, 6 Apr 1998 09:50:24 -0700 (PDT)", "msg_from": "John Edstrom <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Port Bug Report: ident authority map problem" } ]
[ { "msg_contents": "The postgresql-?.?.?/contrib/linux/postgres.init is meant to start your\npostmaster at boot time and stop it at halt/reboot. Excelent.\nBut it is made for postgres account running tcsh. I know nothing about tchs\nand my postgres account defaults to bash. So (thanks to Steve \"Stevers!\"\nCoile) I changed it to bash:\n-----------------------------------------------------\nif [ ${USE_SYSLOG} = \"yes\" ]; then\n\n su - ${PGACCOUNT} -c \"(${POSTMASTER} ${PGOPTS} 2>&1 | logger -p\n${FACILITY}.notice) &\" > /dev/null 2>&1 &\n\nelse\n\n su - ${PGACCOUNT} -c \"${POSTMASTER} ${PGOPTS} 2>>&1 ${PGLOGFILE} &\" >\n/dev/null 2>&1 &\n\nfi\n\n-----------------------------------------------------\ncool, but there was another problem. The script wouldn't stop the postmaster\nat halt/reboot while it worked just fine if manually invoked. (RedHat 5.0\nsysV init)\n\nMeanwhile I leafed some shell programming book and things cleared up a bit.\n\nIn the /etc/rc.d/rc file (the one to start/stop services), the 'stop'\nbranch, there is a section meant to check if subsystems are up.\n\n-----------------------------------------------------\n # Check if the subsystem is already up.\n subsys=${i#/etc/rc.d/rc$runlevel.d/K??}\n [ ! -f /var/lock/subsys/$subsys ] && \\\n [ ! -f /var/lock/subsys/${subsys}.init ] && continue\n-----------------------------------------------------\nThat's it, if there's no file named exactly after the symlink (without K??)\nin /var/lock/subsys, then it decides the subsystem is not up and gracefully\nexits.\n\nIf you look in /etc/rc.d/init.d/postgres.init (copy of\npostgresql-?.?.?/contrib/linux/postgres.init), you'll find:\n\n-----------------------------------------------------\n# touch /var/lock/subsys/${POSTMASTER}\n-----------------------------------------------------\nwhich is equally commented and wrong for that kind of init. It should be:\n\n-----------------------------------------------------\n# use the name of the symlink without [KS]??\n\ntouch /var/lock/subsys/postgres\n-----------------------------------------------------\nI use:\n\nK05postgres -> /etc/rc.d/init.d/postgres.init on runlevel 1\n\nK05postgres -> /etc/rc.d/init.d/postgres.init on runlevel 6\n\nS98postgres -> /etc/rc.d/init.d/postgres.init on runlevel 3\n\n\n\nClaudiu", "msg_date": "Tue, 7 Apr 1998 11:26:23 +0300", "msg_from": "\"Claudiu Balciza\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgres init script things solved" }, { "msg_contents": "Applied.\n\n> \n> This is a multi-part message in MIME format.\n> \n> ------=_NextPart_000_0087_01BD6217.FE9E50C0\n> Content-Type: text/plain;\n> \tcharset=\"iso-8859-1\"\n> Content-Transfer-Encoding: 7bit\n> \n> The postgresql-?.?.?/contrib/linux/postgres.init is meant to start your\n> postmaster at boot time and stop it at halt/reboot. Excelent.\n> But it is made for postgres account running tcsh. I know nothing about tchs\n> and my postgres account defaults to bash. So (thanks to Steve \"Stevers!\"\n> Coile) I changed it to bash:\n> -----------------------------------------------------\n> if [ ${USE_SYSLOG} = \"yes\" ]; then\n> \n> su - ${PGACCOUNT} -c \"(${POSTMASTER} ${PGOPTS} 2>&1 | logger -p\n> ${FACILITY}.notice) &\" > /dev/null 2>&1 &\n> \n> else\n> \n> su - ${PGACCOUNT} -c \"${POSTMASTER} ${PGOPTS} 2>>&1 ${PGLOGFILE} &\" >\n> /dev/null 2>&1 &\n> \n> fi\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 23:05:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres init script things solved" }, { "msg_contents": "> > The postgresql-?.?.?/contrib/linux/postgres.init is meant to start your\n> > postmaster at boot time and stop it at halt/reboot. Excelent.\n> > But it is made for postgres account running tcsh. I know nothing about tchs\n> > and my postgres account defaults to bash. So (thanks to Steve \"Stevers!\"\n> > Coile) I changed it to bash:\n\nOK, but _I_ don't run bash. So someone else is now maintaining this\nfile? Why didn't we keep both forms in the file, with one commented out?\nWhat are we trying to accomplish here??\n\n - Tom\n", "msg_date": "Mon, 27 Apr 1998 14:54:43 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres init script things solved" }, { "msg_contents": "> \n> > > The postgresql-?.?.?/contrib/linux/postgres.init is meant to start your\n> > > postmaster at boot time and stop it at halt/reboot. Excelent.\n> > > But it is made for postgres account running tcsh. I know nothing about tchs\n> > > and my postgres account defaults to bash. So (thanks to Steve \"Stevers!\"\n> > > Coile) I changed it to bash:\n> \n> OK, but _I_ don't run bash. So someone else is now maintaining this\n> file? Why didn't we keep both forms in the file, with one commented out?\n> What are we trying to accomplish here??\n\nbash/sh is the standard, expecially because the top of the file has\n#!/bin/sh.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 27 Apr 1998 10:55:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres init script things solved" }, { "msg_contents": "On Mon, 27 Apr 1998, Thomas G. Lockhart wrote:\n\n> > > The postgresql-?.?.?/contrib/linux/postgres.init is meant to start your\n> > > postmaster at boot time and stop it at halt/reboot. Excelent.\n> > > But it is made for postgres account running tcsh. I know nothing about tchs\n> > > and my postgres account defaults to bash. So (thanks to Steve \"Stevers!\"\n> > > Coile) I changed it to bash:\n> \n> OK, but _I_ don't run bash. So someone else is now maintaining this\n> file? Why didn't we keep both forms in the file, with one commented out?\n> What are we trying to accomplish here??\n\n\tWhy not do what most init scripts do and use the standard /bin/sh?\n\n\n", "msg_date": "Mon, 27 Apr 1998 10:55:41 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres init script things solved" }, { "msg_contents": "> > > > The postgresql-?.?.?/contrib/linux/postgres.init is meant to start your\n> > > > postmaster at boot time and stop it at halt/reboot. Excelent.\n> > > > But it is made for postgres account running tcsh. I know nothing about tchs\n> > > > and my postgres account defaults to bash. So (thanks to Steve \"Stevers!\"\n> > > > Coile) I changed it to bash:\n> >\n> > OK, but _I_ don't run bash. So someone else is now maintaining this\n> > file? Why didn't we keep both forms in the file, with one commented out?\n> > What are we trying to accomplish here??\n> \n> bash/sh is the standard, expecially because the top of the file has\n> #!/bin/sh.\n\nSorry, the top of the init script and the postgres shell aren't related.\nThe init script is run at startup out of the root account. The line\nwhich was changed is run under the postgres account, which can have a\ndifferent shell from the root shell.\n\nimho it is not a step forward to break some code to help others. How\nabout asking the person submitting the patch to document what they did\ndifferently? The original code at least had a line saying what had to be\nchanged; the patch doesn't fix the comment or suggest how to support an\nalternate shell.\n\nOr, have two files in contrib, postgres.init.tcsh and postgres.init. Or\nat least something which is a step forward rather than sideways...\n\n - Tom\n", "msg_date": "Tue, 28 Apr 1998 02:02:05 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres init script things solved" }, { "msg_contents": "> about asking the person submitting the patch to document what they did\n> differently? The original code at least had a line saying what had to be\n> changed; the patch doesn't fix the comment or suggest how to support an\n> alternate shell.\n> \n> Or, have two files in contrib, postgres.init.tcsh and postgres.init. Or\n> at least something which is a step forward rather than sideways...\n\nOk, now have two files.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 27 Apr 1998 23:39:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postgres init script things solved" } ]
[ { "msg_contents": "On 7 Apr 1998, Bruce Stephens wrote:\n\n> AC_CHECK_LIB(tk8.0, main, TK_LIB=tk)\n> \n> (line 618 or so)\n> \n> to\n> \n> AC_CHECK_LIB(tk8.0, main, TK_LIB=tk,, $TCL_LIB $X_PRE_LIBS $X_LIBS $X11_LIBS)\n> \n> After changing configure.in, run \"autoconf\" to reconstruct configure.\n\n\tDone and committed...please check out the newest cvsup copy and\nlet us know whether this helps... \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 7 Apr 1998 23:02:12 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] warning: tcl support disabled" }, { "msg_contents": " Anyway, the problem is that configure is trying to link a main program\n with -ltk8.0, which will fail *even if libtk8.0 is present*. The\n reason is that libTk requires both Tcl and the X libraries in order to\n link successfully.\n\nI guess this may only be a problem with shared libraries, because this\ndidn't occur for me when I had the static versions. Sorry about\nthat. A more complete integration of the solution with my earlier\npatch follows. Note that this patch should be applied afer the\nearlier one, then autoconf run to reconstruct the configure script.\n\nHope this works for all.\n\nCheers,\nBrook\n\n===========================================================================\n--- configure.in.orig\tTue Apr 7 20:55:44 1998\n+++ configure.in\tTue Apr 7 22:05:44 1998\n@@ -639,6 +639,17 @@\n \n dnl Check for Tk archive\n if test \"$USE_TCL\" = \"true\"; then\n+\n+\tice_save_LIBS=\"$LIBS\"\n+\tice_save_CFLAGS=\"$CFLAGS\"\n+\tice_save_CPPFLAGS=\"$CPPFLAGS\"\n+\tice_save_LDFLAGS=\"$LDFLAGS\"\n+\n+\tLIBS=\"$TCL_LIB $X_PRE_LIBS $X11_LIBS $X_EXTRA_LIBS $LIBS\"\n+\tCFLAGS=\"$CFLAGS $X_CFLAGS\"\n+\tCPPFLAGS=\"$CPPFLAGS $X_CFLAGS\"\n+\tLDFLAGS=\"$LDFLAGS $X_LIBS\"\n+\n \tTK_LIB=\n \ttk_libs=\"tk8.0 tk80 tk4.2 tk42 tk\"\n \tfor tk_lib in $tk_libs; do\n@@ -653,6 +664,11 @@\n \t TK_LIB=-l$TK_LIB\n \tfi\n \tAC_SUBST(TK_LIB)\n+\n+\tLIBS=\"$ice_save_LIBS\"\n+\tCFLAGS=\"$ice_save_CFLAGS\"\n+\tCPPFLAGS=\"$ice_save_CPPFLAGS\"\n+\tLDFLAGS=\"$ice_save_LDFLAGS\"\n fi\n \n AC_OUTPUT(GNUmakefile Makefile.global backend/port/Makefile bin/pg_version/Makefile bin/psql/Makefile bin/pg_dump/Makefile backend/utils/Gen_fmgrtab.sh interfaces/libpq/Makefile interfaces/libpgtcl/Makefile interfaces/ecpg/lib/Makefile ) \n", "msg_date": "Tue, 7 Apr 1998 22:13:38 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] warning: tcl support disabled" }, { "msg_contents": "On Tue, 7 Apr 1998, Brook Milligan wrote:\n\n> Hope this works for all.\n> \n> ===========================================================================\n> --- configure.in.orig\tTue Apr 7 20:55:44 1998\n> +++ configure.in\tTue Apr 7 22:05:44 1998\n> @@ -639,6 +639,17 @@\n\nIs this a patch for 6.3.1?\n\n--------------------------\nPatching file configure.in using Plan A...\nHunk #1 failed at 639.\nHunk #2 failed at 664.\n2 out of 2 hunks failed--saving rejects to configure.in.rej\n\nconfigure.in has only 628 lines.\n\nDwight\n\n", "msg_date": "Wed, 8 Apr 1998 00:58:42 -0700 (PDT)", "msg_from": "Dwight Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] warning: tcl support disabled" }, { "msg_contents": " On Tue, 7 Apr 1998, Brook Milligan wrote:\n\n > ===========================================================================\n > --- configure.in.orig\tTue Apr 7 20:55:44 1998\n > +++ configure.in\tTue Apr 7 22:05:44 1998\n > @@ -639,6 +639,17 @@\n\n Is this a patch for 6.3.1?\n\nNo.\n\n --------------------------\n Patching file configure.in using Plan A...\n Hunk #1 failed at 639.\n Hunk #2 failed at 664.\n 2 out of 2 hunks failed--saving rejects to configure.in.rej\n\n configure.in has only 628 lines.\n\nThis was a patch to be applied to 6.3.1 AFTER applying my earlier\npatch to configure.in.\n\nCheers,\nBrook\n", "msg_date": "Wed, 8 Apr 1998 15:45:13 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] warning: tcl support disabled" }, { "msg_contents": "\nCan you provide a patch against the current source tree? This whole thing\nhas become one big mess, and I no longer know what to back out and what\nnot to back out...\n\n\n\nOn Wed, 8 Apr 1998, Brook Milligan wrote:\n\n> On Tue, 7 Apr 1998, Brook Milligan wrote:\n> \n> > ===========================================================================\n> > --- configure.in.orig\tTue Apr 7 20:55:44 1998\n> > +++ configure.in\tTue Apr 7 22:05:44 1998\n> > @@ -639,6 +639,17 @@\n> \n> Is this a patch for 6.3.1?\n> \n> No.\n> \n> --------------------------\n> Patching file configure.in using Plan A...\n> Hunk #1 failed at 639.\n> Hunk #2 failed at 664.\n> 2 out of 2 hunks failed--saving rejects to configure.in.rej\n> \n> configure.in has only 628 lines.\n> \n> This was a patch to be applied to 6.3.1 AFTER applying my earlier\n> patch to configure.in.\n> \n> Cheers,\n> Brook\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 8 Apr 1998 19:23:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] warning: tcl support disabled" } ]
[ { "msg_contents": "subscribe\n\n\n", "msg_date": "Wed, 8 Apr 1998 11:59:34 +0900", "msg_from": "\"Toshiaki Okuda\" <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "subscribe\n\n", "msg_date": "Wed, 8 Apr 1998 12:00:01 +0900 (JST)", "msg_from": "Toshiaki Okuda <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Hi,\n\nIn include/catalog/pg_type.h I've noticed the following code.\n\n<Code>\nCATALOG(pg_type) BOOTSTRAP\n{\n\tNameData\ttypname;\n\tOid\t\t\ttypowner;\n\tint2\t\ttyplen;\n\n\t/*\n\t * typlen is the number of bytes we use to represent a value of this\n\t * type, e.g. 4 for an int4. But for a variable length type, typlen\n\t * is -1.\n\n\t...\n</Code>\n\nThe pg_type catalog is then populated with lines like the following.\n\n<Code>\nDATA(insert OID = 71 (\tpg_type\t\t PGUID 1 1 t b t \\054 1247 0 foo bar foo bar c _null_));\nDATA(insert OID = 75 (\tpg_attribute PGUID 1 1 t b t \\054 1249 0 foo bar foo bar c _null_));\nDATA(insert OID = 81 (\tpg_proc\t\t PGUID 1 1 t b t \\054 1255 0 foo bar foo bar c _null_));\nDATA(insert OID = 83 (\tpg_class\t PGUID 1 1 t b t \\054 1259 0 foo bar foo bar c _null_));\n\n</Code>\n\nNotice that the type length for types like pg_class have the value of one (1)?\nI would have expected them to the same length as an oid.\nAm I just seeing things?\n\nThanks with regards from Maurice\n\n", "msg_date": "Wed, 8 Apr 1998 10:56:50 +0200", "msg_from": "Maurice Gittens <[email protected]>", "msg_from_op": true, "msg_subject": "pg_type populated incorrectly in some cases?" }, { "msg_contents": "> \n> Hi,\n> \n> In include/catalog/pg_type.h I've noticed the following code.\n> \n> <Code>\n> CATALOG(pg_type) BOOTSTRAP\n> {\n> \tNameData\ttypname;\n> \tOid\t\t\ttypowner;\n> \tint2\t\ttyplen;\n> \n> \t/*\n> \t * typlen is the number of bytes we use to represent a value of this\n> \t * type, e.g. 4 for an int4. But for a variable length type, typlen\n> \t * is -1.\n> \n> \t...\n> </Code>\n> \n> The pg_type catalog is then populated with lines like the following.\n> \n> <Code>\n> DATA(insert OID = 71 (\tpg_type\t\t PGUID 1 1 t b t \\054 1247 0 foo bar foo bar c _null_));\n> DATA(insert OID = 75 (\tpg_attribute PGUID 1 1 t b t \\054 1249 0 foo bar foo bar c _null_));\n> DATA(insert OID = 81 (\tpg_proc\t\t PGUID 1 1 t b t \\054 1255 0 foo bar foo bar c _null_));\n> DATA(insert OID = 83 (\tpg_class\t PGUID 1 1 t b t \\054 1259 0 foo bar foo bar c _null_));\n> \n> </Code>\n> \n> Notice that the type length for types like pg_class have the value of one (1)?\n> I would have expected them to the same length as an oid.\n> Am I just seeing things?\n> \n> Thanks with regards from Maurice\n> \n> \n> \n\n\nI believe these should be -1, and I have set them to that. typprtlen is\nnot used for any purpose currently, but it pads the other int2 field, so\nit is not taking any space, and someday may be useful.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 23:17:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_type populated incorrectly in some cases?" } ]
[ { "msg_contents": "> Send the bug report to here and Stefan (he is on the TODO list). \n> Let's see if he can fix it.\n\nHi Stefan. I ran across some funny behavior with the HAVING clause:\n\n-- try a having clause in the wrong order (OK, my mistake :)\npostgres=> select x.x, count(y.i) from t x, t y\n group by x.x having x.x = 'four';\nPQexec() -- Request was sent to backend, but backend closed\n the channel before responding. This probably means the backend\n terminated abnormally before or while processing the request.\n\n<start over>\n-- works better when it is a good query...\npostgres=> select x.x, count(y.i) from t x, t y\n group by x.x having count(y.i) = 40;\nx |count\n----+-----\nfour| 40\n(1 row)\n\nTable is defined below...\n\n - Tom\n\n\npostgres=> create table t (x text, i int);\n\n<populate the table; one entry for 'one', two for 'two', etc>\n\npostgres=> select x, i, count(i) from t group by x, i;\nx |i|count\n-----+-+-----\nfour |4| 4\none |1| 1\nthree|3| 3\ntwo |2| 2\n(4 rows)\n", "msg_date": "Wed, 08 Apr 1998 14:18:10 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "HAVING clause" } ]
[ { "msg_contents": "Hi hackers,\n\nI have an SQL-question and a related core dump :-)\n\n> create table test\n> (\n> col1 text,\n> col2 text,\n> col3 text\n> );\n> CREATE\n> insert into test values ('one', 'two', 'three');\n> INSERT 96299 1\n> select col1, count(*) from test group by col1;\n> col1|count\n> ----+-----\n> one | 1\n> (1 row)\n\nNow I am going to do something illegal:\n\n> select col1, col3, count(*) from test group by col1;\n> ERROR: parser: illegal use of aggregates or non-group column in target list\n\nObviously, I did not use the aggregate correctly, but look at the last\nbit of this error message. If I understand this correctly, all the columns\nin the target list must also be stated in the grouping list. In a way,\nthis makes sense, because the extra columns in the target list\nwould be undefined: these columns would originate from a random row (tuple)\nper group.\n\nMy question: is the following query legal?\n\n> select col1, col3 from test group by col1;\n> col1|col3 \n> ----+-----\n> one |three\n> (1 row)\n\nShouldn't Postgres complain about 'col3'? It is not in the grouping list.\n\nWhat actually brought me to that question is a core dump in a (faulty)\nquery which, after isolating the problem, looks like this:\n\n> select col1, col3 from test where 1 = 1 group by col1;\n> FATAL: unrecognized data from the backend. It probably dumped core.\n> FATAL: unrecognized data from the backend. It probably dumped core.\n\nIf I delete the '1 = 1' or replace 'col3' by 'col2' the query produces\nnormal results. I'm running the snapshot of April 6 on Linux kernel 2.0.33.\n\nCheers,\nRonald\n\n", "msg_date": "Wed, 8 Apr 1998 16:39:53 +0200 (MET DST)", "msg_from": "Ronald Baljeu <[email protected]>", "msg_from_op": true, "msg_subject": "Is this legal???" } ]
[ { "msg_contents": "-----Original Message-----\nFrom: Maurice Gittens <[email protected]>\nTo: [email protected] <[email protected]>\nDate: zondag 5 april 1998 21:47\nSubject: [HACKERS] On improving OO support in posgresql and relaxing oid\nbottleneck at the same time\n\n\n>Hi,\n>\n>I'm currently under the impression that the following change in the\n>postgresql system would benefict the overall performance and quality\n>of the system.\n>\n> Tuples for a class and all it's derived classes are stored in one file.\n>\n>Advantages:\n\n\n -- cut --\n>\n>\n>Disadvantages\n>- sequential heapscans for tables _with_ derived classes will be less\n>efficient\n> in general, because now some tuples may have to be skipped since they\n>may\n> belong to the wrong class. This is easily solved using indices.\n>\n>- slight space overhead for tuple when not using inheritance.\n> The space is used to tag each tuple with the most derived class it\n> belongs to.\n>\n\nOne extra disadvantage of this is that multiple inheritance is only\neasily supported if bases classes being inherited from have a common\ntop most base class. In which all tuples are stored. Otherwise\nwe'll storagefile independant oid's will become necesary again.\n\nSo loosely speaking it still allows for multiple inheritance but only within\na common hierarchy.\n\nWith regards from Maurice.\n\n\n", "msg_date": "Wed, 8 Apr 1998 16:59:21 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] On improving OO support in posgresql and relaxing oid\n\tbottleneck at the same time" }, { "msg_contents": "> -----Original Message-----\n> From: Maurice Gittens <[email protected]>\n> To: [email protected] <[email protected]>\n> Date: zondag 5 april 1998 21:47\n> Subject: [HACKERS] On improving OO support in posgresql and relaxing oid\n> bottleneck at the same time\n> \n> top most base class. In which all tuples are stored. Otherwise\n> we'll storagefile independant oid's will become necesary again.\n> \n> So loosely speaking it still allows for multiple inheritance but only within\n> a common hierarchy.\n\nJust for everyones information. In Illustra, an oid is 64 bits. The low\norder 32 bits are (approximately), the row identifier within a table. The\nhigh order 32bits the table identifier (which then works out to be the\nsame as the oid of the row in the tables table for the table in question).\n\nOids are unique for the life of the system. Limits are 4G tables, 4G rows\nper table.\n\nI for some reason have never bothered to remember, but I think inheritance\nis done via separate tables.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Thu, 9 Apr 1998 00:41:38 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] On improving OO support in posgresql and relaxing oid\n\tbottleneck at the same time" } ]
[ { "msg_contents": "\nI'm getting the above \"error\" *alot* in v6.3.x ... I've dropped and\nrebuilt indices, which appears to fix it for a bit, and then it happens\nagain...\n\nIdeas? Can we at least add which index is corrupted to the error message?\n\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 8 Apr 1998 16:31:32 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "FATAL 1: btree: BTP_CHAIN flag was expected" } ]
[ { "msg_contents": "(We were discussing support for complicated data types last October)\n\nOK, I think I understand most of what the parser does to handle data\ntypes and to handle type resolution. I've started rewriting the code\ninvolved with functions, operators, and targets, and have completed a\ntrial implementation. I was wondering where some things which must\nhappen were actually done, and it turns out that they aren't! For\nexample:\n\npostgres=> create table c1 (c char(4));\npostgres=> create table c2 (c char(4));\npostgres=> insert into c1 values ('abc', 'abc');\npostgres=> insert into c1 values ('def', 'def');\npostgres=> insert into c2(c) select c || 'ghi' from c1;\nINSERT 0 2\npostgres=> select * from c2;\nc |t\n-------+-\nabc ghi|\ndef ghi|\n(2 rows)\n\nNote that our char(4) column happily ends up with strings 7 characters\nlong! This is with pristine v6.3.1 code.\n\nOK, so to do things right, for, say, char() and for numeric(), we need a\nway to provide target properties to routines even though the\ncharacteristics of a particular value might be stored with the value.\nThe current system usually stores string or array properties with the\nvalue; but even so we also need to provide any target properties to\ncorrectly handle function and operator output into target columns. In\nthe example above, each input string of char(4) is concatenated with a\nstring constant, and stored back into a char(4) column. The target\ncolumn info must appear somewhere so a conversion or storage routine can\nfix up the values.\n\n> > > What about passing to functions references to some structures ?\n> > > struct ... {\n> > > TypeSpec type_spec;\n> > > ...some other things maybe...\n> > > Datum data; /* this is real data */\n> > > }\n> > > type_spec could be precision/scale for NUMBERs and DECIMALs,\n> > > max len for (VAR)CHARs or other type specification value (pointer > > > to) for (other) user-defined types, so one could define new type \n> > > with pointing of function(s) to handle type_spec for this type in\n> > > CREATE TABLE:\n> > > A_Column An_User_Type[(type-spec)]\n> > > Just like\n> > > A NUMBER(5,2)but more general (object-oriented).\n\nOK, so I'm thinking that we need to do something like this, but there\nare problems in doing it with explicit structures in the backend:\n1) the backend might need to know too much about each type, damaging the\nextensibility\n2) somehow the backend would need to be able to support dump/reload\noperations (so there needs to be a string representation of the type).\n3) we need this part to be extensible to new types also.\n\nI'm not happy with the structure above, but I'm not sure why. It just\nseems pretty complicated, and doesn't really address issues 1-3 above. \n\nWell, finally a light came on: the characteristics of these complicated\ntypes should be a type also! For example, the numeric type would have a\nsupport definition type which contains two fields, the precision and\nscale, and which as part of the definition would know how to print\nitself out, read itself in, etc. Then, the backend would just need to be\nable to match up the type with the support type, and the adt code would\nneed to be able to access both.\n\nI haven't gotten farther than this yet, but it seems like any solution\nhas _got_ to take advantage of some of the existing type mechanisms so\nthat we have access to input/output routines for the type support info.\nSQL doesn't naturally lend itself to full OO extensibility, but perhaps\nwe can extend the backend to handle syntax like \n\n typename(characteristic, characteristic,...)\n\nwhere the characteristics have properties stored in the type/attribute\nsystem and the backend knows how to meld the type with the support type\ninfo (e.g. typeOutput = print(\"%s(%s)\", typename, printTypeSupport) ).\n\nDoes this ring a bell with anyone? Vadim?\n\n - Tom\n\n> > Yes . For a few data types (bpchar for varchar support) the system already passes\n> > more than just a single field structure to the type-handler code. It passes the\n> > pointer to the structure, and also a couple of other parameters including the\n> > maximum length. We could/should generalize this so that a descriptive structure\n> > can be passed in for every type-handler (at least for -in() and -out() functions,\n> > perhaps for all calls?) which is specialized for each data type and which is\n> ^^^^^^^^^\n> Imho, for all. number_pl() should know about precision/scale of\n> both args...\n> \n> > defined _by_ the data type code itself. This basically means that data types can\n> > provide more \"methods\" to help with the type handling.\n> >\n> > The default behavior could be that NULL is passed for these extra arguments and\n> > type-handlers could choose to ignore it (so perhaps existing handlers could work\n> > without change).\n> \n> My suggestion is passing data in descriptive structure in all cases.\n> This will require user to re-write all user-defined functions...\n> \n> >\n> > For example, to implement NUMERIC(5,2) we would need to pass the precision and\n> > scale numbers to the type handlers (numeric_in(), etc) after saving them when\n> > defining the table/column. We also need a way to either save the original text\n> > definition or to reconstruct it so that dump/reload can work.\n> ^^^^^^^^^^^\n> Type_spec handling functions could accept two args: one is\n> type_spec itself (in external/internal formats) and second - in/out\n> flag: in - to convert (5,2) into internal form, out - from\n> internal form to external representation.\n> I prefer this way.\n> \n> We could store type_spec in pg_attribute. This makes\n> pg_attribute tuple len variable (if we'll allow type_spec\n> has > 4 bytes len) and so many parts of code must be changed.\n> For the moment, we could allow only 4 bytes for type_spec -\n> it's ok for NUMBERs, DECIMALs, (VAR/BP)CHARs...\n> \n> Vadim\n", "msg_date": "Thu, 09 Apr 1998 05:02:24 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Data types" }, { "msg_contents": "> I haven't gotten farther than this yet, but it seems like any solution\n> has _got_ to take advantage of some of the existing type mechanisms so\n> that we have access to input/output routines for the type support info.\n> SQL doesn't naturally lend itself to full OO extensibility, but perhaps\n> we can extend the backend to handle syntax like \n> \n> typename(characteristic, characteristic,...)\n> \n> where the characteristics have properties stored in the type/attribute\n> system and the backend knows how to meld the type with the support type\n> info (e.g. typeOutput = print(\"%s(%s)\", typename, printTypeSupport) ).\n> \n> Does this ring a bell with anyone? Vadim?\n\nHow does atttypmod fit/not fit the need here? It is passed to all\ninput/output functions.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 9 Apr 1998 02:35:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Data types" }, { "msg_contents": "> > SQL doesn't naturally lend itself to full OO extensibility, but\n> > we can extend the backend to handle syntax like\n> >\n> > typename(characteristic, characteristic,...)\n> >\n> > where the characteristics have properties stored in the \n> > type/attribute system and the backend knows how to meld the type \n> > with the support type info (e.g.\n> > typeOutput = print(\"%s(%s)\", typename, printTypeSupport) ).\n> How does atttypmod fit/not fit the need here? It is passed to all\n> input/output functions.\n\nAt the moment, atttypmod is defined to be a two byte integer. 
I believe\nit is used to flag specific behaviors regarding type handling, and I'm\nnot sure that, for example, it gives enough info and flexibility to keep\ntrack of both precision and scale for the numeric() type as well as do\nit's other jobs.\n\natttypmod is passed to input/output routines, but I think we need a\ncallable routine to convert internal representations also. That is, the\ninput/output routines convert from and to C strings, but we also need a\nway to \"convert\" a type to itself (e.g. char(x) to char(4)), checking\natttypmod and/or other type-specific information while doing so. It also\nneeds a convention of some sort, or built-in to tables, so that this can\nbe set up extensibly and on-the-fly by the parser code.\n\nMaybe if there were a routine defined in pg_proc which took a type and\natttypmod as arguments and output that same type the parser could look\nfor that and wrap it in a function call when converting types to\ntargets. Maybe that would be enough? It would be similar to the \"cast\"\nconvention we've adopted. I need to understand atttypmod's usage and\ncapabilities better to know for sure; these are just my impressions\nright now.\n\natttypmod's presence in the code is certainly a good marker of where\nwe'd need to look to make changes (using atttypmod or something else).\n\n??\n\n - Tom\n", "msg_date": "Thu, 09 Apr 1998 13:25:15 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Data types" }, { "msg_contents": "> \n> > > SQL doesn't naturally lend itself to full OO extensibility, but\n> > > we can extend the backend to handle syntax like\n> > >\n> > > typename(characteristic, characteristic,...)\n> > >\n> > > where the characteristics have properties stored in the \n> > > type/attribute system and the backend knows how to meld the type \n> > > with the support type info (e.g.\n> > > typeOutput = print(\"%s(%s)\", typename, printTypeSupport) ).\n> > How does atttypmod fit/not fit the need here? It is passed to all\n> > input/output functions.\n> \n> At the moment, atttypmod is defined to be a two byte integer. I believe\n> it is used to flag specific behaviors regarding type handling, and I'm\n> not sure that, for example, it gives enough info and flexibility to keep\n> track of both precision and scale for the numeric() type as well as do\n> it's other jobs.\n\nI thought about this. Unless you are going to make such a thing for\neach type, you are going to have to over-load the storage for each type,\nso you would have something like:\n\n\ttypedef numeric_typmod {\n\t\tchar len;\n\t\tchar prec;\n\t};\n\nand cast the atttypmod int2 value into that structure, and read the\nfield as two one-byte fields.\n\n> atttypmod is passed to input/output routines, but I think we need a\n> callable routine to convert internal representations also. That is, the\n> input/output routines convert from and to C strings, but we also need a\n> way to \"convert\" a type to itself (e.g. char(x) to char(4)), checking\n> atttypmod and/or other type-specific information while doing so. It also\n> needs a convention of some sort, or built-in to tables, so that this can\n> be set up extensibly and on-the-fly by the parser code.\n\nThis should be possible. Now that atttypmod is passed around the\nbackend through resdom, it should be available in almost all context. 
\nThe trick is to pass it to the type-specific conversion functions. \nShould certainly be possible. The initial implemenation of atttypmod\njust passed it to input functions, and it was not passed through the\nbackend. Now, it does, so we no longer do funny things with TupleDesc\nin the executor for char() and varchar(). They now get their atttypmod\nvalues set from Resdom, and those are passed to the output functions.\n\n> Maybe if there were a routine defined in pg_proc which took a type and\n> atttypmod as arguments and output that same type the parser could look\n> for that and wrap it in a function call when converting types to\n> targets. Maybe that would be enough? It would be similar to the \"cast\"\n> convention we've adopted. I need to understand atttypmod's usage and\n> capabilities better to know for sure; these are just my impressions\n> right now.\n\nSounds like a plan.\n\n> atttypmod's presence in the code is certainly a good marker of where\n> we'd need to look to make changes (using atttypmod or something else).\n\nLet me know if you find any limitations. Initially, it was limited only\nto input, but now, it will be more useful.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 9 Apr 1998 10:39:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Data types" } ]
[ { "msg_contents": ">> \n>\n>Does it make sense to have a 'row' context which is released just before\n>starting with a new tuple ? The total number or free is the same but they\n>are distributed over the query and unused memory should not accumulate.\n>I have seen backends growing to 40-60MB with queries which scan a very\n>large number of rows.\n>\n\n\nI think this would be appropiate.\n\nWith regards from Maurice.\n\n", "msg_date": "Thu, 9 Apr 1998 13:00:09 +0200", "msg_from": "\"Maurice Gittens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" }, { "msg_contents": "> >Does it make sense to have a 'row' context which is released just \n> >before starting with a new tuple ? The total number or free is the \n> >same but they are distributed over the query and unused memory should \n> >not accumulate.\n> >I have seen backends growing to 40-60MB with queries which scan a \n> >very large number of rows.\n> I think this would be appropiate.\n\nIt seems that the CPU overhead on all queries would increase trying to\ndeallocate/reuse memory during the query. There are lots of places in\nthe backend where memory is palloc'd and then left lying around after\nuse; I had assumed it was sort-of-intentional to avoid having extra\ncleanup overhead during a query.\n\n - Tom\n", "msg_date": "Thu, 09 Apr 1998 13:00:24 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" }, { "msg_contents": "Thomas G. Lockhart wrote:\n> \n> > >Does it make sense to have a 'row' context which is released just\n> > >before starting with a new tuple ? 
The total number or free is the\n> > >same but they are distributed over the query and unused memory should\n> > >not accumulate.\n> > >I have seen backends growing to 40-60MB with queries which scan a\n> > >very large number of rows.\n> > I think this would be appropiate.\n> \n> It seems that the CPU overhead on all queries would increase trying to\n> deallocate/reuse memory during the query.  There are lots of places in\n> the backend where memory is palloc'd and then left lying around after\n> use; I had assumed it was sort-of-intentional to avoid having extra\n> cleanup overhead during a query.\n\nThis problem (introduced in 6.3) is already fixed by Bruce - will be\nin 6.3.2\n\nVadim\n", "msg_date": "Thu, 09 Apr 1998 22:55:05 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" }, { "msg_contents": "Thomas G. Lockhart replies to Maurice:\n> > >Does it make sense to have a 'row' context which is released just \n> > >before starting with a new tuple ? The total number or free is the \n> > >same but they are distributed over the query and unused memory should \n> > >not accumulate.\n> > >I have seen backends growing to 40-60MB with queries which scan a \n> > >very large number of rows.\n> > I think this would be appropiate.\n> \n> It seems that the CPU overhead on all queries would increase trying to\n> deallocate/reuse memory during the query.  There are lots of places in\n> the backend where memory is palloc'd and then left lying around after\n> use; I had assumed it was sort-of-intentional to avoid having extra\n> cleanup overhead during a query.\n\nThis is exactly right. Destroying a memory context in the current\nimplementation is a very high overhead operation. Doing it once per row\nwould be a performance disaster.\n\n-dg\n\nDavid Gould            [email protected]           510.628.3783 or 510.305.9468 \nInformix Software  (No, really)         300 Lakeside Drive  Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Thu, 9 Apr 1998 11:34:00 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Everything leaks; How it mm suppose to work?" } ]
[ { "msg_contents": "It used to be that configure asked for extra include and lib directories\nusing reasonable defaults. Now I get failures because it can't find\nreadline unless I manually add these in. Is this supposed to be picked\nup automatically from the template file? The template file has SRCH_INC\nand SRCH_LIB but it doesn't seem to be picked up by configure.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 9 Apr 1998 08:04:30 -0400 (EDT)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "NetBSD configuration" }, { "msg_contents": "On Thu, 9 Apr 1998, D'Arcy J.M. Cain wrote:\n\n> It used to be that configure asked for extra include and lib directories\n> using reasonable defaults. Now I get failures because it can't find\n> readline unless I manually add these in. Is this supposed to be picked\n> up automatically from the template file? The template file has SRCH_INC\n> and SRCH_LIB but it doesn't seem to be picked up by configure.\n\n\tWe recently switched it over to requiring you to use\n'--with-includes=' and '--with-libs'\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 9 Apr 1998 22:15:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NetBSD configuration" }, { "msg_contents": "> \n> On Thu, 9 Apr 1998, D'Arcy J.M. Cain wrote:\n> \n> > It used to be that configure asked for extra include and lib directories\n> > using reasonable defaults. Now I get failures because it can't find\n> > readline unless I manually add these in. Is this supposed to be picked\n> > up automatically from the template file? 
The template file has SRCH_INC\n> > and SRCH_LIB but it doesn't seem to be picked up by configure.\n> \n> \tWe recently switched it over to requiring you to use\n> '--with-includes=' and '--with-libs'\n\nWoh, I found a problem with readline not being found too.  The fix was\nto use --with-libraries, not --with-libs.  It doesn't error out with\n--with-libs (nor with --with-asdf either), but it does nothing. \n--with-libraries does the trick.\n\n-- \nBruce Momjian                          |  830 Blythe Avenue\[email protected]              |  Drexel Hill, Pennsylvania 19026\n  +  If your life is a hard drive,     |  (610) 353-9879(w)\n  +  Christ can be your backup.        |  (610) 853-3000(h)\n", "msg_date": "Thu, 9 Apr 1998 22:09:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NetBSD configuration" }, { "msg_contents": "On Thu, 9 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > On Thu, 9 Apr 1998, D'Arcy J.M. Cain wrote:\n> > \n> > > It used to be that configure asked for extra include and lib directories\n> > > using reasonable defaults.  Now I get failures because it can't find\n> > > readline unless I manually add these in.  Is this supposed to be picked\n> > > up automatically from the template file?  The template file has SRCH_INC\n> > > and SRCH_LIB but it doesn't seem to be picked up by configure.\n> > \n> > \tWe recently switched it over to requiring you to use\n> > '--with-includes=' and '--with-libs'\n> \n> Woh, I found a problem with readline not being found too.  The fix was\n> to use --with-libraries, not --with-libs.  It doesn't error out with\n> --with-libs (nor with --with-asdf either), but it does nothing. \n> --with-libraries does the trick.\n\n\tFixed... --with-libs == --with-libraries ... so you can use either\none...\n\nMarc G. Fournier                                 \nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 9 Apr 1998 23:59:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NetBSD configuration" } ]
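To summarize the working invocation from this thread as a hedged example (the /usr/pkg paths are NetBSD-style guesses; substitute wherever readline actually lives on your system):

```shell
# --with-libs was silently ignored before Marc's fix; --with-libraries
# (now an alias for --with-libs) is the form that worked.
./configure \
    --with-includes=/usr/pkg/include \
    --with-libraries=/usr/pkg/lib
```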
[ { "msg_contents": "Would it be possible to put the v6.3.x release notes into the v6.3.2\nsgml sources? I can go ahead and regenerate the html output (trivial to\ndo) for the release. Also, I can help with marking up the sources.\n\nThis would give us some practice on including release notes in the\nonline docs for each release :)\n\n - Tom\n", "msg_date": "Thu, 09 Apr 1998 14:59:09 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Release notes" }, { "msg_contents": "> \n> Would it be possible to put the v6.3.x release notes into the v6.3.2\n> sgml sources? I can go ahead and regenerate the html output (trivial to\n> do) for the release. Also, I can help with marking up the sources.\n> \n> This would give us some practice on including release notes in the\n> online docs for each release :)\n> \n> - Tom\n> \n\nThe release notes that are at the top of the HISTORY files, or\n/migration. Both are useful. They are just lists of items from the CVS\nlogs.\n\nDo you want them in another format?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 9 Apr 1998 11:04:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Release notes" }, { "msg_contents": "> > Would it be possible to put the v6.3.x release notes into the v6.3.2\n> > sgml sources?\n> > This would give us some practice on including release notes in the\n> > online docs for each release :)\n> The release notes that are at the top of the HISTORY files, or\n> /migration. Both are useful. They are just lists of items from the \n> CVS logs.\n> Do you want them in another format?\n\nWell, yes, eventually...\n\nWhere I'm hoping to go with this is to have the release notes in sgml,\nso that they become a part of the main docs. 
They are appropriate for\nthat for several reasons:\n\n1) they document the evolution of capabilities\n2) they highlight new capabilities\n3) releases are snapshots which match these notes exactly\n4) lots of people upgrade from old versions, jumping forward several\nreleases.\n5) at least a few items deserve longer descriptions and explanations in\nthe release notes. These could be expanded from the \"one-liner\" items,\njust like you did for the last release on a few of them.\n\nAnyway, I think it would be a great addition to the \"formal docs\". Also,\nwe should (soon) be able to generate html docs on the fly on\npostgresql.org, so it would be feasible to use this as a working\ndocument between releases also.\n\n - Tom\n", "msg_date": "Fri, 10 Apr 1998 01:30:50 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Release notes" }, { "msg_contents": "> Well, yes, eventually...\n> \n> Where I'm hoping to go with this is to have the release notes in sgml,\n> so that they become a part of the main docs. They are appropriate for\n> that for several reasons:\n> \n> 1) they document the evolution of capabilities\n> 2) they highlight new capabilities\n> 3) releases are snapshots which match these notes exactly\n> 4) lots of people upgrade from old versions, jumping forward several\n> releases.\n> 5) at least a few items deserve longer descriptions and explanations in\n> the release notes. These could be expanded from the \"one-liner\" items,\n> just like you did for the last release on a few of them.\n> \n> Anyway, I think it would be a great addition to the \"formal docs\". Also,\n> we should (soon) be able to generate html docs on the fly on\n> postgresql.org, so it would be feasible to use this as a working\n> document between releases also.\n\nOK, here is HTML generated by txt2html from the actual HISTORY file. 
\nHow is it?\n\n\n---------------------------------------------------------------------------\n\n<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 3.2//EN\">\n<HTML>\n<HEAD>\n<META NAME=\"generator\" CONTENT=\"txt2html v1.24\">\n</HEAD>\n<BODY>\n<A NAME=\"section-1\"><H1>PostgreSQL 6.3.2 Tue Apr 7 16:53:16 EDT 1998</H1></A>\n\n<P>\nA dump/restore is NOT required for those running 6.3.1. A \n'make distclean', 'make', and 'make install' is all that is required.\nThis last step should be performed while the postmaster is not running.\nYou should re-link any custom applications that use PostgreSQL libraries.\n\n<A NAME=\"section-1.1\"><H2>Changes</H2></A>\nconfigure detection improvements for tcl/tk(Brook Milligan, Alvin)\nManual page improvements(Bruce)<BR>\nBETWEEN and LIKE fix(Thomas)<BR>\nfix for psql \\connect used by pg_dump(Oliver Elphick)\nCleanup of postodbc source code indentation\npgaccess, version 0.86<BR>\nHAVING clause now supported in SELECT(Stefan)\nqsort removed, now uses libc version, cleanups(Jeroen)\nfix for buffer over-runs detected(Maurice Gittens)\nfix for buffer overrun in libpgtcl(Randy Kunkee)\nfix for UNION with DISTINCT or ORDER BY(Bruce)\ngettimeofday configure check(Doug Winterburn)\nFix \"indexes not used\" bug(Vadim)<BR>\ndocs additions(Thomas)<BR>\nFix for backend memory leak(Bruce)<BR>\nlibreadline cleanup(Erwan MAS)<BR>\nRemove DISTDIR(Bruce)<BR>\nMakefile dependency cleanup(Jeroen van Vianen)\nASSERT fixes(Bruce)\n\n\n\n<A NAME=\"section-2\"><H1>PostgreSQL 6.3.1 Mon Mar 23 10:21:52 EST 1998</H1></A>\n\n<P>\nA dump/restore is NOT required for those running 6.3. 
A \n'make distclean', 'make', and 'make install' is all that is required.\nThis last step should be performed while the postmaster is not running.\nYou should re-link any custom applications that use PostgreSQL libraries.\n\n<A NAME=\"section-2.1\"><H2>Changes</H2></A>\necpg cleanup/fixes, now version 1.1(Michael Meskes)\npg_user cleanup(Bruce)<BR>\nlarge object fix for pg_dump and tclsh([email protected])\nLIKE fix for multiple adjacent underscores\nLIKE/BETWEEN fix for having function call as target(Thomas)\nfix for redefining builtin functions(Thomas)\nultrix4 cleanup<BR>\nupgrade to pg_access 0.83<BR>\nupdated CLUSTER manual page<BR>\nmulti-byte character set support, see doc/README.mb(Tatsuo)\nconfigure --with-pgport fix<BR>\npg_ident fix<BR>\nbig-endian fix for backend communications(Kataoka)\nSUBSTR() and substring() fix(Jan)<BR>\nseveral jdbc fixes(Peter)<BR>\nlibpgtcl improvements, see libptcl/README(Randy Kunkee)\nFix for \"Datasize = 0\" error(Vadim)<BR>\nPrevent \\do from wrapping(Bruce)<BR>\nRemove duplicate Russian character set entries\nSunos4 cleanup<BR>\nAllow optional TABLE keyword in LOCK and SELECT INTO(Thomas)\nCREATE SEQUENCE options to allow a negative integer(Thomas)\nAdd \"PASSWORD\" as an allowed column identifier(Thomas)\nAdd checks for UNION target fields(Bruce)\nFix Alpha port(Dwayne Bailey)<BR>\nFix for text arrays containing quotes(Doug Gibson)\nSolaris compile fix(Albert Chin-A-Young)\nBetter identify tcl and tk libs and includes(Bruce)\n\n\n\n<A NAME=\"section-3\"><H1>PostgreSQL 6.3 Sun Mar 1 14:57:30 EST 1998</H1></A>\n\n<P>\nA dump/restore is required for those wishing to migrate data from\nprevious releases of PostgreSQL.\n\n<UL>\n <LI> The migration/6.2.1_to_6.3 file contains a detailed description \n <LI> of the feature changes in this release, and is recommended reading.\n\n</UL>\n<A NAME=\"section-3.1\"><H2>Bug Fixes</H2></A>\nFix binary cursors broken by MOVE implementation(Vadim)\nFix for tcl library crash(Jan)<BR>\nFix for 
array handling, from Gerhard Hintermayer\nFix acl error, and remove duplicate pqtrace(Bruce)\nFix psql \\e for empty file(Bruce)<BR>\nFix for textcat on varchar() fields(Bruce)\nFix for DBT Sendproc (Zeugswetter Andres)\nFix vacuum analyze syntax problem(Bruce)\nFix for international identifiers(Tatsuo)\nFix aggregates on inherited tables(Bruce)\nFix substr() for out-of-bounds data<BR>\nFix for select 1=1 or 2=2, select 1=1 and 2=2, and select sum(2+2)(Bruce)\nFix notty output to show status result. -q option still turns it off(Bruce)\nFix for count(*), aggs with views and multiple tables and sum(3)(Bruce)\nFix cluster(Bruce)<BR>\nFix for PQtrace start/stop several times(Bruce)\nFix a variety of locking problems like newer lock waiters getting\n<PRE>\n lock before older waiters, and having readlock people not share\n locks if a writer is waiting for a lock, and waiting writers not\n getting priority over waiting readers(Bruce)\n</PRE>\n<P>\nFix crashes in psql when executing queries from external files(James)\nFix problem with multiple order by columns, with the first one having\n<P>\n NULL values(Jeroen)<BR>\nUse correct hash table support functions for float8 and int4(Thomas)\nRe-enable JOIN= option in CREATE OPERATOR statement (Thomas)\nChange precedence for boolean operators to match expected behavior(Thomas)\nGenerate elog(ERROR) on over-large integer(Bruce)\nAllow multiple-argument functions in constraint clauses(Thomas)\nCheck boolean input literals for 'true','false','yes','no','1','0'\n<P>\n and throw elog(ERROR) if unrecognized(Thomas)\nMajor large objects fix<BR>\nFix for GROUP BY showing duplicates(Vadim)\nFix for index scans in MergeJion(Vadim)\n\n<A NAME=\"section-3.2\"><H2>Enhancements</H2></A>\nSubselects with EXISTS, IN, ALL, ANY keywords (Vadim, Bruce, Thomas)\nNew User Manual(Thomas, others)<BR>\nSpeedup by inlining some frequently-called functions\nReal deadlock detection, no more timeouts(Bruce)\nAdd SQL92 \"constants\" CURRENT_DATE, 
CURRENT_TIME, CURRENT_TIMESTAMP, \n<P>\n CURRENT_USER(Thomas)<BR>\nModify constraint syntax to be SQL92-compliant(Thomas)\nImplement SQL92 PRIMARY KEY and UNIQUE clauses using indices(Thomas)\nRecognize SQL92 syntax for FOREIGN KEY. Throw elog notice(Thomas)\nAllow NOT NULL UNIQUE constraint clause (each allowed separately before)(Thomas)\nAllow Postgres-style casting (\"::\") of non-constants(Thomas)\nAdd support for SQL3 TRUE and FALSE boolean constants(Thomas)\nSupport SQL92 syntax for IS TRUE/IS FALSE/IS NOT TRUE/IS NOT FALSE(Thomas)\nAllow shorter strings for boolean literals (e.g. \"t\", \"tr\", \"tru\")(Thomas)\nAllow SQL92 delimited identifiers(Thomas)\nImplement SQL92 binary and hexadecimal string decoding (b'10' and x'1F')(Thomas)\nSupport SQL92 syntax for type coercion of literal strings\n<P>\n (e.g. \"DATETIME 'now'\")(Thomas)\nAdd conversions for int2, int4, and OID types to and from text(Thomas)\nUse shared lock when building indices(Vadim)\nFree memory allocated for an user query inside transaction block after\n<P>\n this query is done, was turned off in &lt;= 6.2.1(Vadim)\nNew SQL statement CREATE PROCEDURAL LANGUAGE(Jan)\nNew PostgreSQL Procedural Language (PL) backend interface(Jan)\nRename pg_dump -H option to -h(Bruce)<BR>\nAdd Java support for passwords, European dates(Peter)\nUse indices for LIKE and ~, !~ operations(Bruce)\nAdd hash functions for datetime and timespan(Thomas)\nTime Travel removed(Vadim, Bruce)<BR>\nAdd paging for \\d and \\z, and fix \\i(Bruce)\nAdd Unix domain socket support to backend and to frontend library(Goran)\nImplement CREATE DATABASE/WITH LOCATION and initlocation utility(Thomas)\nAllow more SQL92 and/or Postgres reserved words as column identifiers(Thomas)\nAugment support for SQL92 SET TIME ZONE...(Thomas)\nSET/SHOW/RESET TIME ZONE uses TZ backend environment variable(Thomas)\nImplement SET keyword = DEFAULT and SET TIME ZONE DEFAULT(Thomas)\nEnable SET TIME ZONE using TZ environment variable(Thomas)\nAdd 
PGDATESTYLE environment variable to frontend and backend initialization(Thomas)\nAdd PGTZ, PGCOSTHEAP, PGCOSTINDEX, PGRPLANS, PGGEQO\n<P>\n frontend library initialization environment variables(Thomas)\nRegression tests time zone automatically set with \"setenv PGTZ PST8PDT\"(Thomas)\nAdd pg_description table for info on tables, columns, operators, types, and\n<P>\n aggregates(Bruce)<BR>\nIncrease 16 char limit on system table/index names to 32 characters(Bruce)\nRename system indices(Bruce)<BR>\nAdd 'GERMAN' option to SET DATESTYLE(Thomas)\nDefine an \"ISO-style\" timespan output format with \"hh:mm:ss\" fields(Thomas)\nAllow fractional values for delta times (e.g. '2.5 days')(Thomas)\nValidate numeric input more carefully for delta times(Thomas)\nImplement day of year as possible input to date_part()(Thomas)\nDefine timespan_finite() and text_timespan() functions(Thomas)\nRemove archive stuff(Bruce)<BR>\nAllow for a pg_password authentication database that is separate from\n<P>\n the system password file(Todd)<BR>\nDump ACLs, GRANT, REVOKE permissions(Matt)\nDefine text, varchar, and bpchar string length functions(Thomas)\nFix Query handling for inheritance, and cost computations(Bruce)\nImplement CREATE TABLE/AS SELECT (alternative to SELECT/INTO)(Thomas)\nAllow NOT, IS NULL, IS NOT NULL in constraints(Thomas)\nImplement UNIONs for SELECT(Bruce)<BR>\nAdd UNION, GROUP, DISTINCT to INSERT(Bruce)\nvarchar() stores only necessary bytes on disk(Bruce)\nFix for BLOBs(Peter)<BR>\nMega-Patch for JDBC...see README_6.3 for list of changes(Peter)\nRemove unused \"option\" from PQconnectdb()\nNew LOCK command and lock manual page describing deadlocks(Bruce)\nAdd new psql \\da, \\dd, \\df, \\do, \\dS, and \\dT commands(Bruce)\nEnhance psql \\z to show sequences(Bruce)\nShow NOT NULL and DEFAULT in psql \\d table(Bruce)\nNew psql .psqlrc file startup(Andrew)<BR>\nModify sample startup script in contrib/linux to show syslog(Thomas)\nNew types for IP and MAC addresses in 
contrib/ip_and_mac(TomH)\nUnix system time conversions with date/time types in contrib/unixdate(Thomas)\nUpdate of contrib stuff(Massimo)<BR>\nAdd Unix socket support to DBD::Pg(Goran)\nNew python interface (PyGreSQL 2.0)(D'Arcy)\nNew frontend/backend protocol has a version number, network byte order(Phil)\nSecurity features in pg_hba.conf enhanced and documented, many cleanups(Phil)\nCHAR() now faster access than VARCHAR() or TEXT\necpg embedded SQL preprocessor<BR>\nReduce system column overhead(Vadmin)<BR>\nRemove pg_time table(Vadim)<BR>\nAdd pg_type attribute to identify types that need length (bpchar, varchar)\nAdd report of offending line when COPY command fails\nAllow VIEW permissions to be set separately from the underlying tables. \n<P>\n For security, use GRANT/REVOKE on views as appropriate(Jan)\nTables now have no default GRANT SELECT TO PUBLIC. You must\n<P>\n explicitly grant such permissions.\nClean up tutorial examples(Darren)\n\n<A NAME=\"section-3.3\"><H2>Source Tree Changes</H2></A>\nAdd new html development tools, and flow chart in /tools/backend\nFix for SCO compiles<BR>\nStratus computer port \"Gillies, Robert\" &lt;[email protected]&gt;\nAdded support for shlib for BSD44_derived &amp; i386_solaris\nMake configure more automated(Brook)<BR>\nAdd script to check regression test results\nBreak parser functions into smaller files, group together(Bruce)\nRename heap_create to heap_create_and_catalog, rename heap_creatr\n<P>\n to heap_create()(Bruce)<BR>\nSparc/Linux patch for locking(TomS)<BR>\nRemove PORTNAME and reorganize port-specific stuff(Marc)\nAdd optimizer README file(Bruce)<BR>\nRemove some recursion in optimizer and clean up some code there(Bruce)\nFix for NetBSD locking(Henry)<BR>\nFix for libptcl make(Tatsuo)<BR>\nAIX patch(Darren)<BR>\nChange IS TRUE, IS FALSE, ... 
to expressions using \"=\" rather than\n<P>\n function calls to istrue() or isfalse() to allow optimization(Thomas)\nVarious fixes NetBSD/Sparc related(TomH)\nAlpha linux locking(Travis,Ryan)<BR>\nChange elog(WARN) to elog(ERROR)(Bruce)\nFAQ for FreeBSD(Marc)<BR>\nBring in the PostODBC source tree as part of our standard distribution(Marc)\nA minor patch for HP/UX 10 vs 9(Stan)<BR>\nNew pg_attribute.atttypmod for type-specific info like varchar length(Bruce)\nUnixware patches(Billy)<BR>\nNew i386 'lock' for spin lock asm(Billy)\nSupport for multiplexed backends is removed\nStart an OpenBSD port<BR>\nStart an AUX port<BR>\nStart a Cygnus port<BR>\nAdd string functions to regression suite(Thomas)\nExpand a few function names formerly truncated to 16 characters(Thomas)\nRemove un-needed malloc() calls and replace with palloc()(Bruce)\n\n\n\n<P>\nPostgreSQL 6.2.1 Fri Oct 17 00:01:27 EDT 1997\n<HR>\n\n<P>\nThis release does NOT require a dump/restore for those running 6.2, but\nthere is an SQL query in /migration/6.2_to_6.2.1 that should be run. 
See\nthat file for more information.\n\n<A NAME=\"section-3.4\"><H2>Changes in this release</H2></A>\nAllow TIME and TYPE column names(Thomas)\nAllow larger range of true/false as boolean values(Thomas)\nSupport output of \"now\" and \"current\"(Thomas)\nHandle DEFAULT with INSERT of NULL properly(Vadim)\nFix for relation reference counts problem in buffer manager(Vadim)\nAllow strings to span lines, like ANSI(Thomas)\nFix for backward cursor with ORDER BY(Vadim)\nFix avg(cash) computation(Thomas)<BR>\nFix for specifying a column twice in ORDER/GROUP BY(Vadim)\nDocumented new libpq function to return affected rows, PQcmdTuples(Bruce)\nTrigger function for inserting user names for INSERT/UPDATE(Brook Milligan)\n\n\n\n<A NAME=\"section-4\"><H1>PostgreSQL 6.2 Thu Oct 02 12:53:46 EDT 1997</H1></A>\n\n<P>\nA dump/restore is required for those wishing to migrate data from\nprevious releases of PostgreSQL.\n\n<A NAME=\"section-4.1\"><H2>Bug Fixes</H2></A>\nFix problems with pg_dump for inheritance, sequences, archive tables(Bruce)\nFix compile errors on overflow due to shifts, unsigned, and bad prototypes\n<P>\n from Solaris(Diab Jerius)<BR>\nFix bugs in geometric line arithmetic (bad intersection calculations)(Thomas)\nCheck for geometric intersections at endpoints to avoid rounding ugliness(Thomas)\nCatch non-functional delete attempts(Vadim)\nChange time function names to be more consistent(Michael Reifenberg)\nCheck for zero divides(Michael Reifenberg)\nFix very old bug which made tuples changed/inserted by a command\n<PRE>\n visible to the command itself (so we had multiple update of \n updated tuples, etc)(Vadim)\n</PRE>\n<P>\nFix for SELECT null, 'fail' FROM pg_am (Patrick)\nSELECT NULL as EMPTY_FIELD now allowed(Patrick)\nRemove un-needed signal stuff from contrib/pginterface\nFix OR (where x &lt;&gt; 1 or x isnull didn't return tuples with x NULL) (Vadim)\nFix time_cmp function (Vadim)<BR>\nFix handling of functions with non-attribute first argument in \n<P>\n 
WHERE clauses (Vadim)<BR>\nFix GROUP BY when order of entries is different from order\n<P>\n in target list (Vadim)<BR>\nFix pg_dump for aggregates without sfunc1 (Vadim)\n\n<A NAME=\"section-4.2\"><H2>Enhancements</H2></A>\nDefault genetic optimizer GEQO parameter is now 8(Bruce)\nAllow use parameters in target list having aggregates in functions(Vadim)\nAdded JDBC driver as an interface(Adrian &amp; Peter)\npg_password utility<BR>\nReturn number of tuples inserted/affected by INSERT/UPDATE/DELETE etc.(Vadim)\nTriggers implemented with CREATE TRIGGER (SQL3)(Vadim)\nSPI (Server Programming Interface) allows execution of queries inside \n<P>\n C-functions (Vadim)<BR>\nNOT NULL implemented (SQL92)(Robson Paniago de Miranda)\nInclude reserved words for string handling, outer joins, and unions(Thomas)\nImplement extended comments (\"/* ... */\") using exclusive states(Thomas)\nAdd \"//\" single-line comments(Bruce)<BR>\nRemove some restrictions on characters in operator names(Thomas)\nDEFAULT and CONSTRAINT for tables implemented (SQL92)(Vadim &amp; Thomas)\nAdd text concatenation operator and function (SQL92)(Thomas)\nSupport WITH TIME ZONE syntax (SQL92)(Thomas)\nSupport INTERVAL &lt;unit&gt; TO &lt;unit&gt; syntax (SQL92)(Thomas)\nDefine types DOUBLE PRECISION, INTERVAL, CHARACTER,\n<P>\n and CHARACTER VARYING (SQL92)(Thomas)\nDefine type FLOAT(p) and rudimentary DECIMAL(p,s), NUMERIC(p,s) (SQL92)(Thomas)\nDefine EXTRACT(), POSITION(), SUBSTRING(), and TRIM() (SQL92)(Thomas)\nDefine CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP (SQL92)(Thomas)\nAdd syntax and warnings for UNION, HAVING, INNER and OUTER JOIN (SQL92)(Thomas)\nAdd more reserved words, mostly for SQL92 compliance(Thomas)\nAllow hh:mm:ss time entry for timespan/reltime types(Thomas)\nAdd center() routines for lseg, path, polygon(Thomas)\nAdd distance() routines for circle-polygon, polygon-polygon(Thomas)\nCheck explicitly for points and polygons contained within polygons\n<P>\n using an axis-crossing 
algorithm(Thomas)\nAdd routine to convert circle-box(Thomas)\nMerge conflicting operators for different geometric data types(Thomas)\nReplace distance operator \"&lt;===&gt;\" with \"&lt;-&gt;\"(Thomas)\nReplace \"above\" operator \"!^\" with \"&gt;^\" and \"below\" operator \"!|\" with \"&lt;^\"(Thomas)\nAdd routines for text trimming on both ends, substring, and string position(Thomas)\nAdded conversion routines circle(box) and poly(circle)(Thomas)\nAllow internal sorts to be stored in memory rather than in files(Bruce &amp; Vadim)\nAllow functions and operators on internally-identical types to succeed(Bruce)\nSpeed up backend startup after profiling analysis(Bruce)\nInline frequently called functions for performance(Bruce)\nReduce open() calls(Bruce)<BR>\npsql: Add PAGER for \\h and \\?,\\C fix<BR>\nFix for psql pager when no tty(Bruce)<BR>\nNew entab utility(Bruce)<BR>\nGeneral trigger functions for referential integrity (Vadim)\nGeneral trigger functions for time travel (Vadim)\nGeneral trigger functions for AUTOINCREMENT/IDENTITY feature (Vadim)\nMOVE implementation (Vadim)\n\n<A NAME=\"section-4.3\"><H2>Source Tree Changes</H2></A>\nHPUX 10 patches (Vladimir Turin)<BR>\nAdded SCO support, (Daniel Harris)<BR>\nmkLinux patches (Tatsuo Ishii)<BR>\nChange geometric box terminology from \"length\" to \"width\"(Thomas)\nDeprecate temporary unstored slope fields in geometric code(Thomas)\nRemove restart instructions from INSTALL(Bruce)\nLook in /usr/ucb first for install(Bruce)\nFix c++ copy example code(Thomas)<BR>\nAdd -o to psql manual page(Bruce)<BR>\nPrevent relname unallocated string length from being copied into database(Bruce)\nCleanup for NAMEDATALEN use(Bruce)<BR>\nFix pg_proc names over 15 chars in output(Bruce)\nAdd strNcpy() function(Bruce)<BR>\nremove some (void) casts that are unnecessary(Bruce)\nnew interfaces directory(Marc)<BR>\nReplace fopen() calls with calls to fd.c functions(Bruce)\nMake functions static where possible(Bruce)\nenclose unused 
functions in #ifdef NOT_USED(Bruce)\nRemove call to difftime() in timestamp support to fix SunOS(Bruce &amp; Thomas)\nChanges for Digital Unix<BR>\nPortability fix for pg_dumpall(Bruce)<BR>\nRename pg_attribute.attnvals to attdisbursion(Bruce)\n\"intro/unix\" manual page now \"pgintro\"(Bruce)\n\"built-in\" manual page now \"pgbuiltin\"(Bruce)\n\"drop\" manual page now \"drop_table\"(Bruce)\nAdd \"create_trigger\", \"drop_trigger\" manual pages(Thomas)\nAdd constraints regression test(Vadim &amp; Thomas)\nAdd comments syntax regression test(Thomas)\nAdd PGINDENT and support program(Bruce)\nMassive commit to run PGINDENT on all *.c and *.h files(Bruce)\nFiles moved to /src/tools directory(Bruce)\nSPI and Trigger programming guides (Vadim &amp; D'Arcy)\n\n\n\n<A NAME=\"section-5\"><H1>PostgreSQL 6.1.1 Mon Jul 22 18:04:49 EDT 1997</H1></A>\n\n<P>\nThis release does NOT require a dump/restore for those running 6.1.\n\n<A NAME=\"section-5.1\"><H2>Changes in this release</H2></A>\nfix for SET with options (Thomas)<BR>\nallow pg_dump/pg_dumpall to preserve ownership of all tables/objects(Bruce)\nnew psql \\connect option allows changing usernames without changing databases\nfix for initdb --debug option(Yoshihiko Ichikawa)\nlextest cleanup(Bruce)<BR>\nhash fixes(Vadim)<BR>\nfix date/time month boundary arithmetic(Thomas)\nfix timezone daylight handling for some ports(Thomas, Bruce, Tatsuo)\ntimestamp overhauled to use standard functions(Thomas)\nother code cleanup in date/time routines(Thomas)\npsql's \\d now case-insensitive(Bruce)<BR>\npsql's backslash commands can now have trailing semicolon(Bruce)\nfix memory leak in psql when using \\g(Bruce)\nmajor fix for endian handling of communication to server(Thomas, Tatsuo)\nFix for Solaris assembler and include files(Yoshihiko Ichikawa)\nallow underscores in usernames(Bruce)<BR>\npg_dumpall now returns proper status, portability fix(Bruce)\n\n\n\n<A NAME=\"section-6\"><H1>PostgreSQL 6.1 Sun Jun 8 14:41:13 EDT 
1997</H1></A>\n\n<P>\nA dump/restore is required for those wishing to migrate data from\nprevious releases of PostgreSQL.\n\n<A NAME=\"section-6.1\"><H2>Bug Fixes</H2></A>\npacket length checking in library routines\nlock manager priority patch<BR>\ncheck for under/over flow of float8(Bruce)\nmulti-table join fix(Vadim)<BR>\nSIGPIPE crash fix(Darren)<BR>\nlarge object fixes(Sven)<BR>\nallow btree indexes to handle NULLs(Vadim)\ntimezone fixes(D'Arcy)<BR>\nselect SUM(x) can return NULL on no rows(Thomas)\ninternal optimizer, executor bug fixes(Vadim)\nfix problem where inner loop in &lt; or &lt;= has no rows(Vadim)\nprevent re-commuting join index clauses(Vadim)\nfix join clauses for multiple tables(Vadim)\nfix hash, hashjoin for arrays(Vadim)<BR>\nfix btree for abstime type(Vadim)<BR>\nlarge object fixes(Raymond)<BR>\nfix buffer leak in hash indices (Vadim)\nfix rtree for use in inner scan (Vadim)\nfix gist for use in inner scan, cleanups (Vadim, Andrea)\navoid unnecessary local buffers allocation (Vadim, Massimo)\nfix local buffers leak in transaction aborts (Vadim)\nfix file manager memory leaks, cleanups (Vadim, Massimo)\nfix storage manager memory leaks (Vadim)\nfix btree duplicates handling (Vadim)<BR>\nfix deleted tuples re-incarnation caused by vacuum (Vadim)\nfix SELECT varchar()/char() INTO TABLE made zero-length fields(Bruce)\nmany psql, pg_dump, and libpq memory leaks fixed using Purify (Igor)\n\n<A NAME=\"section-6.2\"><H2>Enhancements</H2></A>\nattribute optimization statistics(Bruce)\nmuch faster new btree bulk load code(Paul)\nBTREE UNIQUE added to bulk load code(Vadim) \nnew lock debug code(Massimo)<BR>\nmassive changes to libpq++(Leo)<BR>\nnew GEQO optimizer speeds table multi-table optimization(Martin)\nnew WARN message for non-unique insert into unique key(Marc)\nupdate x=-3, no spaces, now valid(Bruce)\nremove case-sensitive identifier handling(Bruce,Thomas,Dan)\ndebug backend now pretty-prints tree(Darren)\nnew Oracle character 
functions(Edmund)<BR>\nnew plaintext password functions(Dan)<BR>\nno such class or insufficient privilege changed to distinct messages(Dan)\nnew ANSI timestamp function(Dan)<BR>\nnew ANSI Time and Date types (Thomas)<BR>\nmove large chunks of data in backend(Martin)\nmulti-column btree indexes(Vadim)<BR>\nnew SET var TO value command(Martin)<BR>\nupdate transaction status on reads(Dan)\nnew locale settings for character types(Oleg)\nnew SEQUENCE serial number generator(Vadim)\nGROUP BY function now possible(Vadim)<BR>\nre-organize regression test(Thomas,Marc)\nnew optimizer operation weights(Vadim)<BR>\nnew psql \\z grant/permit option(Marc)<BR>\nnew MONEY data type(D'Arcy,Thomas)<BR>\ntcp socket communication speed improved(Vadim)\nnew VACUUM option for attribute statistics, and for certain columns (Vadim)\nmany geometric type improvements(Thomas,Keith)\nadditional regression tests(Thomas)<BR>\nnew datestyle variable(Thomas,Vadim,Martin)\nmore comparison operators for sorting types(Thomas)\nnew conversion functions(Thomas)<BR>\nnew more compact btree format(Vadim)<BR>\nallow pg_dumpall to preserve database ownership(Bruce)\nnew SET GEQO=# and R_PLANS variable(Vadim)\nold (!GEQO) optimizer can use right-sided plans (Vadim)\ntypechecking improvement in SQL parser(Bruce)\nnew SET, SHOW, RESET commands(Thomas,Vadim)\nnew \\connect database USER option<BR>\nnew destroydb -i option (Igor)<BR>\nnew \\dt and \\di psql commands (Darren)<BR>\nSELECT \"\\n\" now escapes newline (A. 
Duursma)\nnew geometry conversion functions from old format (Thomas)\n\n<A NAME=\"section-6.3\"><H2>Source tree changes</H2></A>\nnew configuration script(Marc)<BR>\nreadline configuration option added(Marc)\nOS-specific configuration options removed(Marc)\nnew OS-specific template files(Marc)<BR>\nno more need to edit Makefile.global(Marc)\nre-arrange include files(Marc)<BR>\nnextstep patches (Gregor Hoffleit)<BR>\nremoved WIN32-specific code(Bruce)<BR>\nremoved postmaster -e option, now only postgres -e option (Bruce)\nmerge duplicate library code in front/backends(Martin)\nnow works with eBones, international Kerberos(Jun)\nmore shared library support<BR>\nc++ include file cleanup(Bruce)<BR>\nwarn about buggy flex(Bruce)<BR>\nDG-UX, Ultrix, Irix, AIX portability fixes\n\n\n\n<A NAME=\"section-7\"><H1>PostgreSQL 6.0 Wed Jan 29 00:19:54 EST 1997</H1></A>\n\n<P>\nA dump/restore is required for those wishing to migrate data from\nprevious releases of PostgreSQL.\n\n<A NAME=\"section-7.1\"><H2>Bug Fixes</H2></A>\nALTER TABLE bug - running postgres process needs to re-read table definition\nAllow vacuum to be run on one table or entire database(Bruce)\nArray fixes<BR>\nFix array over-runs of memory writes(Kurt)\nFix elusive btree range/non-range bug(Dan)\nFix for hash indexes on some types like time and date\nFix for pg_log size explosion<BR>\nFix permissions on lo_export()(Bruce)<BR>\nFix uninitialized reads of memory(Kurt)<BR>\nFixed ALTER TABLE ... 
char(3) bug(Bruce)\nFixed a few small memory leaks<BR>\nFixed EXPLAIN handling of options and changed full_path option name\nFixed output of group acl permissions<BR>\nMemory leaks (hunt and destroy with tools like Purify)(Kurt)\nMinor improvements to rules system<BR>\nNOTIFY fixes<BR>\nNew asserts for run-checking<BR>\nOverhauled parser/analyze code to properly report errors and increase speed\nPg_dump -d now handles NULL's properly(Bruce)\nPrevent SELECT NULL from crashing server (Bruce)\nProperly report errors when INSERT ... SELECT columns did not match\nProperly report errors when insert column names were not correct\nPsql \\g filename now works(Bruce)<BR>\nPsql fixed problem with multiple statements on one line with multiple outputs\nRemoved duplicate system oid's<BR>\nSELECT * INTO TABLE . GROUP/ORDER BY gives unlink error if table exists(Bruce)\nSeveral fixes for queries that crashed the backend\nStarting quote in insert string errors(Bruce)\nSubmitting an empty query now returns empty status, not just \" \" query(Bruce)\n\n<A NAME=\"section-7.2\"><H2>Enhancements</H2></A>\nAdd EXPLAIN manual page(Bruce)<BR>\nAdd UNIQUE index capability(Dan)<BR>\nAdd hostname/user level access control rather than just hostname and user\nAdd synonym of != for &lt;&gt;(Bruce)<BR>\nAllow \"select oid,* from table\"<BR>\nAllow BY,ORDER BY to specify columns by number, or by non-alias table.column(Bruce)\nAllow COPY from the frontend(Bryan)<BR>\nAllow GROUP BY to use alias column name(Bruce)\nAllow actual compression, not just reuse on the same page(Vadim)\nAllow installation-configuration option to auto-add all local users(Bryan)\nAllow libpq to distinguish between text value '' and null(Bruce)\nAllow non-postgres users with createdb privs to destroydb's\nAllow restriction on who can create C functions(Bryan)\nAllow restriction on who can do backend COPY(Bryan)\nCan shrink tables, pg_time and pg_log(Vadim &amp; Erich)\nChange debug level 2 to print queries only, changed debug 
heading layout(Bruce)\nChange default decimal constant representation from float4 to float8(Bruce)\nEuropean date format now set when postmaster is started\nExecute lowercase function names if not found with exact case\nFixes for aggregate/GROUP processing, allow 'select sum(func(x),sum(x+y) from z'\nGist now included in the distribution(Marc)\nIdent authentication of local users(Bryan)\nImplement BETWEEN qualifier(Bruce)<BR>\nImplement IN qualifier(Bruce)<BR>\nLibpq has PQgetisnull()(Bruce)<BR>\nLibpq++ improvements<BR>\nNew options to initdb(Bryan)<BR>\nPg_dump allow dump of oid's(Bruce)<BR>\nPg_dump create indexes after tables are loaded for speed(Bruce)\nPg_dumpall dumps all databases, and the user table\nPginterface additions for NULL values(Bruce)\nPrevent postmaster from being run as root\nPsql \\h and \\? is now readable(Bruce)<BR>\nPsql allow backslashed, semicolons anywhere on the line(Bruce)\nPsql changed command prompt for lines in query or in quotes(Bruce)\nPsql char(3) now displays as (bp)char in \\d output(Bruce)\nPsql return code now more accurate(Bryan?)\nPsql updated help syntax(Bruce)<BR>\nRe-visit and fix vacuum(Vadim)<BR>\nReduce size of regression diffs, remove timezone name difference(Bruce)\nRemove compile-time parameters to enable binary distributions(Bryan)\nReverse meaning of HBA masks(Bryan)<BR>\nSecure Authentication of local users(Bryan)\nSpeed up vacuum(Vadim)<BR>\nVacuum now has VERBOSE option(Bruce)\n\n<A NAME=\"section-7.3\"><H2>Source tree changes</H2></A>\nAll functions now have prototypes that are compared against the calls\nAllow asserts to be disabled easily from Makefile.global(Bruce)\nChange oid constants used in code to #define names\nDecoupled sparc and solaris defines(Kurt)\nGcc -Wall compiles cleanly with warnings only from unfixable constructs\nMajor include file reorganization/reduction(Marc)\nMake now stops on compile failure(Bryan)\nMakefile restructuring(Bryan, Marc)<BR>\nMerge bsdi_2_1 to bsdi(Bruce)<BR>\nMonitor 
program removed<BR>\nName change from Postgres95 to PostgreSQL\nNew config.h file(Marc, Bryan)<BR>\nPG_VERSION now set to 6.0 and used by postmaster\nPortability additions, including Ultrix, DG/UX, AIX, and Solaris\nReduced the number of #define's, centralized #define's\nRemove duplicate OIDS in system tables(Dan)\nRemove duplicate system catalog info or report mismatches(Dan)\nRemoved many os-specific #define's<BR>\nRestructured object file generation/location(Bryan, Marc)\nRestructured port-specific file locations(Bryan, Marc)\nUnused/uninitialized variables corrected\n\n\n\n<P>\nPostgreSQL 1.09 ???\n<HR>\n\n<P>\nSorry, we stopped keeping track of changes from 1.02 to 1.09. Some of\nthe changes listed in 6.0 were actually included in the 1.02.1 to 1.09\nreleases.\n\n\n\n<A NAME=\"section-8\"><H1>Postgres95 1.02 Thu Aug 1 18:00:00 EDT 1996</H1></A>\n\n<P>\nSource code maintenance and development\n<UL>\n <LI> worldwide team of volunteers\n <LI> the source tree now in CVS at ftp.ki.net\n <LI> developers mailing list - [email protected]\n\n</UL>\n<P>\nEnhancements\n<UL>\n <LI> psql (and underlying libpq library) now has many more options for\n formatting output, including HTML\n <LI> pg_dump now outputs the schema and/or the data, with many fixes to\n enhance completeness.\n <LI> psql used in place of monitor in administration shell scripts.\n monitor to be deprecated in next release.\n <LI> date/time functions enhanced\n <LI> NULL insert/update/comparison fixed/enhanced\n <LI> TCL/TK lib and shell fixed to work with both tcl7.4/tk4.0 and tcl7.5/tk4.1\n\n</UL>\n<P>\nBug Fixes (almost too numerous to mention)\n<UL>\n <LI> indexes\n <LI> storage management\n <LI> check for NULL pointer before dereferencing\n <LI> Makefile fixes\n\n</UL>\n<P>\nNew Ports\n<UL>\n <LI> added SolarisX86 port\n <LI> added BSDI 2.1 port\n <LI> added DGUX port\n\n</UL>\n<P>\nContributors (apologies to any missed)\n<UL>\n <LI> Kurt J. 
Lidl &lt;[email protected]&gt; \n<P>\n (missed in first run, but no less important)\n <LI> Erich Stamberger &lt;[email protected]&gt;\n <LI> Jason Wright &lt;[email protected]&gt;\n <LI> Cees de Groot &lt;[email protected]&gt;\n <LI> [email protected]\n <LI> [email protected] (Michael Siebenborn (6929))\n <LI> Brian E. Gallew &lt;[email protected]&gt;\n <LI> Vadim B. Mikheev &lt;[email protected]&gt;\n <LI> Adam Sussman &lt;[email protected]&gt;\n <LI> Chris Dunlop &lt;[email protected]&gt;\n <LI> Marc G. Fournier &lt;[email protected]&gt;\n <LI> Dan McGuirk &lt;[email protected]&gt;\n <LI> Dr_George_D_Detlefsen &lt;[email protected]&gt;\n <LI> Erich Stamberger &lt;[email protected]&gt;\n <LI> Massimo Dal Zotto &lt;[email protected]&gt;\n <LI> Randy Kunkee &lt;[email protected]&gt;\n <LI> Rick Weldon &lt;[email protected]&gt;\n <LI> Thomas van Reimersdahl &lt;[email protected]&gt;\n <LI> david bennett &lt;[email protected]&gt;\n <LI> [email protected]\n <LI> Julian Assange &lt;[email protected]&gt;\n <LI> Bruce Momjian &lt;[email protected]&gt;\n <LI> Paul \"Shag\" Walmsley &lt;[email protected]&gt;\n <LI> \"Alistair G. Crooks\" &lt;[email protected]&gt;\n\n\n\n</UL>\n<A NAME=\"section-9\"><H1>Postgres95 1.01 Fri Feb 23 18:20:36 PST 1996</H1></A>\nIncompatibilities:\n<UL>\n <LI> 1.01 is backwards compatible with 1.0 database provided the user\n follow the steps outlined in the MIGRATION_from_1.0_to_1.01 file.\n If those steps are not taken, 1.01 is not compatible with 1.0 database.\n\n</UL>\n<P>\nEnhancements:\n<UL>\n <LI> added PQdisplayTuples() to libpq and changed monitor and psql to use it\n <LI> added NeXT port (requires SysVIPC implementation)\n <LI> added CAST .. AS ... 
syntax\n <LI> added ASC and DESC keywords\n <LI> added 'internal' as a possible language for CREATE FUNCTION\n internal functions are C functions which have been statically linked\n into the postgres backend.\n <LI> a new type \"name\" has been added for system identifiers (table names,\n attribute names, etc.) This replaces the old char16 type. The length\n of name is set by the NAMEDATALEN #define in src/Makefile.global\n <LI> a readable reference manual that describes the query language.\n <LI> added host-based access control. A configuration file ($PGDATA/pg_hba)\n is used to hold the configuration data. If host-based access control\n is not desired, comment out HBA=1 in src/Makefile.global.\n <LI> changed regex handling to be uniform use of Henry Spencer's regex code\n regardless of platform. The regex code is included in the distribution\n <LI> added functions and operators for case-insensitive regular expressions. \n The operators are ~* and !~*.\n <LI> pg_dump uses COPY instead of SELECT loop for better performance\n\n</UL>\n<P>\nBug fixes:\n<UL>\n <LI> fixed an optimizer bug that was causing core dumps when \n function calls were used in comparisons in the WHERE clause\n <LI> changed all uses of getuid to geteuid so that effective uids are used\n <LI> psql now returns non-zero status on errors when using -c\n <LI> applied public patches 1-14\n\n\n\n</UL>\n<A NAME=\"section-10\"><H1>Postgres95 1.0 Tue Sep 5 11:24:11 PDT 1995</H1></A>\n\n<P>\nCopyright change:\n<UL>\n <LI> The copyright of Postgres 1.0 has been loosened to be freely modifiable\n and modifiable for any purpose. Please read the COPYRIGHT file.\n Thanks to Professor Michael Stonebraker for making this possible.\n\n</UL>\n<P>\nIncompatibilities:\n<UL>\n <LI> date formats have to be MM-DD-YYYY (or DD-MM-YYYY if you're using\n EUROPEAN STYLE). 
This follows SQL-92 specs.\n <LI> \"delimiters\" is now a keyword\n\n</UL>\n<P>\nEnhancements:\n<UL>\n <LI> sql LIKE syntax has been added\n <LI> copy command now takes an optional USING DELIMITER specification.\n delimiters can be any single-character string. \n <LI> IRIX 5.3 port has been added.\n Thanks to Paul Walmsley ([email protected]) and others.\n <LI> updated pg_dump to work with new libpq\n <LI> \\d has been added psql \n Thanks to Keith Parks ([email protected])\n <LI> regexp performance for architectures that use POSIX regex has been\n improved due to caching of precompiled patterns.\n Thanks to Alistair Crooks ([email protected]) \n <LI> a new version of libpq++\n Thanks to William Wanders ([email protected])\n\n</UL>\n<P>\nBug fixes:\n<UL>\n <LI> arbitrary userids can be specified in the createuser script\n <LI> \\c to connect to other databases in psql now works.\n <LI> bad pg_proc entry for float4inc() is fixed\n <LI> users with usecreatedb field set can now create databases without\n having to be usesuper\n <LI> remove access control entries when the entry no longer has any\n permissions\n <LI> fixed non-portable datetimes implementation\n <LI> added kerberos flags to the src/backend/Makefile\n <LI> libpq now works with kerberos\n <LI> typographic errors in the user manual have been corrected.\n <LI> btrees with multiple index never worked, now we tell you they don't\n work when you try to use them\n\n\n\n</UL>\n<A NAME=\"section-11\"><H1>Postgres95 Beta 0.03 Fri Jul 21 14:49:31 PDT 1995</H1></A>\nIncompatible changes:\n<UL>\n <LI> BETA-0.3 IS INCOMPATIBLE WITH DATABASES CREATED WITH PREVIOUS VERSIONS\n (due to system catalog changes and indexing structure changes).\n <LI> double-quote (\") is deprecated as a quoting character for string literals;\n you need to convert them to single quotes (').\n <LI> name of aggregates (eg. int4sum) are renamed in accordance with the\n SQL standard (eg. 
sum).\n <LI> CHANGE ACL syntax is replaced by GRANT/REVOKE syntax.\n <LI> float literals (eg. 3.14) are now of type float4 (instead of float8 in\n previous releases); you might have to do typecasting if you depend on it\n being of type float8. If you neglect to do the typecasting and you assign\n a float literal to a field of type float8, you may get incorrect values\n stored!\n <LI> LIBPQ has been totally revamped so that frontend applications\n can connect to multiple backends\n <LI> the usesysid field in pg_user has been changed from int2 to int4 to\n allow wider range of Unix user ids.\n <LI> the netbsd/freebsd/bsd o/s ports have been consolidated into a\n single BSD44_derived port. (thanks to Alistair Crooks)\n\n</UL>\n<P>\nSQL standard-compliance (the following details changes that makes postgres95\nmore compliant to the SQL-92 standard):\n<UL>\n <LI> the following SQL types are now built-in: smallint, int(eger), float, real,\n char(N), varchar(N), date and time.\n\n</UL>\n<P>\n The following are aliases to existing postgres types:\n<PRE>\n smallint -&gt; int2\n integer, int -&gt; int4\n float, real -&gt; float4\n</PRE>\n<P>\n char(N) and varchar(N) are implemented as truncated text types. In\n addition, char(N) does blank-padding. \n<UL>\n <LI> single-quote (') is used for quoting string literals; '' (in addition to\n \\') is supported as means of inserting a single quote in a string\n <LI> SQL standard aggregate names (MAX, MIN, AVG, SUM, COUNT) are used\n (Also, aggregates can now be overloaded, i.e. you can define your\n own MAX aggregate to take in a user-defined type.)\n <LI> CHANGE ACL removed. GRANT/REVOKE syntax added. \n <UL>\n <LI> Privileges can be given to a group using the \"GROUP\" keyword.\n<P>\n For example:\n<P>\n GRANT SELECT ON foobar TO GROUP my_group;\n The keyword 'PUBLIC' is also supported to mean all users. \n\n </UL>\n</UL>\n<PRE>\n Privileges can only be granted or revoked to one user or group\n at a time. 
\n\n \"WITH GRANT OPTION\" is not supported. Only class owners can change\n access control\n - The default access control is to to grant users readonly access.\n You must explicitly grant insert/update access to users. To change\n this, modify the line in \n src/backend/utils/acl.h \n that defines ACL_WORLD_DEFAULT \n</PRE>\n\n<P>\nBug fixes:\n<UL>\n <LI> the bug where aggregates of empty tables were not run has been fixed. Now,\n aggregates run on empty tables will return the initial conditions of the\n aggregates. Thus, COUNT of an empty table will now properly return 0.\n MAX/MIN of an empty table will return a tuple of value NULL. \n <LI> allow the use of \\; inside the monitor\n <LI> the LISTEN/NOTIFY asynchronous notification mechanism now work\n <LI> NOTIFY in rule action bodies now work\n <LI> hash indices work, and access methods in general should perform better.\n creation of large btree indices should be much faster. (thanks to Paul\n Aoki)\n\n</UL>\n<P>\nOther changes and enhancements:\n<UL>\n <LI> addition of an EXPLAIN statement used for explaining the query execution\n plan (eg. \"EXPLAIN SELECT * FROM EMP\" prints out the execution plan for\n the query).\n <LI> WARN and NOTICE messages no longer have timestamps on them. To turn on\n timestamps of error messages, uncomment the line in\n src/backend/utils/elog.h:\n<P>\n /* define ELOG_TIMESTAMPS */ \n <LI> On an access control violation, the message\n<P>\n \"Either no such class or insufficient privilege\"\n will be given. This is the same message that is returned when\n a class is not found. This dissuades non-privileged users from\n guessing the existence of privileged classes.\n <LI> some additional system catalog changes have been made that are not\n visible to the user.\n\n</UL>\n<P>\nlibpgtcl changes:\n<UL>\n <LI> The -oid option has been added to the \"pg_result\" tcl command.\n pg_result -oid returns oid of the last tuple inserted. 
If the\n last command was not an INSERT, then pg_result -oid returns \"\".\n <LI> the large object interface is available as pg_lo* tcl commands:\n pg_lo_open, pg_lo_close, pg_lo_creat, etc.\n\n</UL>\n<P>\nPortability enhancements and New Ports:\n<UL>\n <LI> flex/lex problems have been cleared up. Now, you should be able to use\n flex instead of lex on any platforms. We no longer make assumptions of\n what lexer you use based on the platform you use. \n <LI> The Linux-ELF port is now supported. Various configurations have been \n tested: The following configuration is known to work:\n<P>\n kernel 1.2.10, gcc 2.6.3, libc 4.7.2, flex 2.5.2, bison 1.24\n with everything in ELF format,\n\n</UL>\n<P>\nNew utilities:\n<UL>\n <LI> ipcclean added to the distribution\n ipcclean usually does not need to be run, but if your backend crashes\n and leaves shared memory segments hanging around, ipcclean will\n clean them up for you.\n\n</UL>\n<P>\nNew documentation:\n<UL>\n <LI> the user manual has been revised and libpq documentation added.\n\n\n\n</UL>\n<A NAME=\"section-12\"><H1>Postgres95 Beta 0.02 Thu May 25 16:54:46 PDT 1995</H1></A>\nIncompatible changes:\n<UL>\n <LI> The SQL statement for creating a database is 'CREATE DATABASE' instead\n of 'CREATEDB'. Similarly, dropping a database is 'DROP DATABASE' instead\n of 'DESTROYDB'. However, the names of the executables 'createdb' and \n 'destroydb' remain the same.\n \n</UL>\n<P>\nNew tools:\n<UL>\n <LI> pgperl - a Perl (4.036) interface to Postgres95\n <LI> pg_dump - a utility for dumping out a postgres database into a\n<P>\n script file containing query commands. The script files are in an ASCII\n format and can be used to reconstruct the database, even on other\n machines and other architectures. 
(Also good for converting\n a Postgres 4.2 database to Postgres95 database.)\n\n</UL>\n<P>\nThe following ports have been incorporated into postgres95-beta-0.02:\n<UL>\n <LI> the NetBSD port by Alistair Crooks\n <LI> the AIX port by Mike Tung\n <LI> the Windows NT port by Jon Forrest (more stuff but not done yet)\n <LI> the Linux ELF port by Brian Gallew\n\n</UL>\n<P>\nThe following bugs have been fixed in postgres95-beta-0.02:\n<UL>\n <LI> new lines not escaped in COPY OUT and problem with COPY OUT when first\n attribute is a '.' \n <LI> cannot type return to use the default user id in createuser\n <LI> SELECT DISTINCT on big tables crashes\n <LI> Linux installation problems\n <LI> monitor doesn't allow use of 'localhost' as PGHOST\n <LI> psql core dumps when doing \\c or \\l\n <LI> the \"pgtclsh\" target missing from src/bin/pgtclsh/Makefile\n <LI> libpgtcl has a hard-wired default port number\n <LI> SELECT DISTINCT INTO TABLE hangs\n <LI> CREATE TYPE doesn't accept 'variable' as the internallength\n <LI> wrong result using more than 1 aggregate in a SELECT\n\n\n\n</UL>\n<A NAME=\"section-13\"><H1>Postgres95 Beta 0.01 Mon May 1 19:03:10 PDT 1995</H1></A>\nInitial release.\n\n</BODY>\n</HTML>\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 9 Apr 1998 22:40:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Release notes" } ]
[ { "msg_contents": "> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> COUNT(*) doesn't work with HAVING\n> \n> \n> Please describe a way to repeat the problem. Please try to provide a\n> concise reproducible example, if at all possible: \n> \n> ----------------------------------------------------------------------\n> \n> SELECT PNO\n> FROM SP\n> GROUP BY PNO\n> HAVING COUNT(PNO) > 1;\n> \n> pno\n> -----\n> P1\n> P2\n> P4\n> P5\n> (4 rows)\n> \n> \n> SELECT PNO\n> FROM SP\n> GROUP BY PNO\n> HAVING COUNT(*) > 1;\n> \n> PQexec() -- Request was sent to backend, but backend closed the channel before responding.\n> This probably means the backend terminated abnormally before or while processing the request.\n\nAppreciate your report. Hopefully we can fix it by the 6.3.2 final\nrelease. If not, we will have to remove the feature until 6.4.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n\n From maillist Mon Apr 13 11:03:12 1998\nReceived: (from maillist@localhost)\n\tby candle.pha.pa.us (8.8.5/8.8.5) id LAA17347\n\tfor maillist; Mon, 13 Apr 1998 11:03:11 -0400 (EDT)\nFrom: Bruce Momjian <maillist>\nMessage-Id: <[email protected]>\nSubject: Re: [HACKERS] error on HAVING clause\nTo: [email protected] (Jose' Soares Da Silva)\nDate: Thu, 9 Apr 1998 11:40:45 -0400 (EDT)\nCc: [email protected], [email protected]\nIn-Reply-To: <[email protected]>\n\tfrom \"Jose' Soares Da Silva\" at Apr 9, 98 05:11:25 pm\nX-Mailer: ELM [version 2.4 PL25]\nMIME-Version: 1.0\nContent-Type: text/plain; charset=US-ASCII\nContent-Transfer-Encoding: 7bit\nSender: maillist\nStatus: OR\n\n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> COUNT(*) doesn't work with HAVING\n> \n> \n> Please describe a way to repeat the problem. 
Please try to provide a\n> concise reproducible example, if at all possible: \n> \n> ----------------------------------------------------------------------\n> \n> SELECT PNO\n> FROM SP\n> GROUP BY PNO\n> HAVING COUNT(PNO) > 1;\n> \n> pno\n> -----\n> P1\n> P2\n> P4\n> P5\n> (4 rows)\n> \n> \n> SELECT PNO\n> FROM SP\n> GROUP BY PNO\n> HAVING COUNT(*) > 1;\n> \n> PQexec() -- Request was sent to backend, but backend closed the channel before responding.\n> This probably means the backend terminated abnormally before or while processing the request.\n\nAppreciate your report. Hopefully we can fix it by the 6.3.2 final\nrelease. If not, we will have to remove the feature until 6.4.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n\n", "msg_date": "Thu, 9 Apr 1998 11:40:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] error on HAVING clause" }, { "msg_contents": "To report any other bug, fill out the form below and e-mail it to\[email protected].\n\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\nYour name\t\t:\tJose' Soares\nYour email address\t:\[email protected] \n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Intel Pentium\n\n Operating System (example: Linux 2.0.26 ELF) \t: Linux 2.0.31 Elf\n\n PostgreSQL version (example: PostgreSQL-6.1) : PostgreSQL-snapshot april 6, 1998\n\n Compiler used (example: gcc 2.7.2)\t\t: gcc 2.7.2.1\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nCOUNT(*) doesn't work with HAVING\n\n\nPlease describe a way to repeat the problem. 
Please try to provide a\nconcise reproducible example, if at all possible: \n\n----------------------------------------------------------------------\n\nSELECT PNO\nFROM SP\nGROUP BY PNO\nHAVING COUNT(PNO) > 1;\n\npno\n-----\nP1\nP2\nP4\nP5\n(4 rows)\n\n\nSELECT PNO\nFROM SP\nGROUP BY PNO\nHAVING COUNT(*) > 1;\n\nPQexec() -- Request was sent to backend, but backend closed the channel before responding.\n This probably means the backend terminated abnormally before or while processing the request.\n \n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n\n??\n\n", "msg_date": "Thu, 9 Apr 1998 17:11:25 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "error on HAVING clause" }, { "msg_contents": "> \n> On Thu, 9 Apr 1998, Bruce Momjian wrote:\n> \n> Don't worry about time Bruce. I'm not in a hurry.\n> HAVING is an important feature. Finally SELECT statement is complete.\n> I would like to show you another thing about HAVING.\n\nHonestly, I don't know who is right. You are not using an aggregate in\nthe HAVING, so I have no idea how it is supposed to be handled.\n\n> \n> prova=> select sno,qty from sp group by sno,qty having qty = 300;\n> sno |qty\n> -----+---\n> S1 |100\n> S1 |200\n> S1 |300\n> S1 |400\n> S2 |300\n> S2 |400\n> S3 |200\n> S4 |200\n> S4 |300\n> S4 |400\n> (10 rows)\n> \n> prova=> select oid,sno,qty from sp group by sno,qty having qty = 300;\n> oid|sno |qty\n> ------+-----+---\n> 147004|S1 |100\n> 147001|S1 |200\n> 147000|S1 |300\n> 147002|S1 |400\n> 147006|S2 |300\n> 147007|S2 |400\n> 147008|S3 |200\n> 147009|S4 |200\n> 147010|S4 |300\n> 147011|S4 |400\n> (10 rows)\n> \n> Solid give me another result. 
Who are rigth ?\n> \n> SOLID SQL Editor (teletype) v.02.20.0007\n> select sno,qty from sp group by sno,qty having qty = 300;\n> SNO QTY\n> --- ---\n> S1 300.\n> S2 300.\n> S4 300.\n> 3 rows fetched.\n> \n> Maybe this one is illegal, but it give me a strange output:\n> \n> prova=> select oid,sno,qty from sp having qty = 300;\n> | | <---------where is the title ????\n> ------+-----+---\n> 147000|S1 |300\n> 147001|S1 |200\n> 147002|S1 |400\n> 147003|S1 |200\n> 147004|S1 |100\n> 147005|S1 |100\n> 147006|S2 |300\n> 147007|S2 |400\n> 147008|S3 |200\n> 147009|S4 |200\n> 147010|S4 |300\n> 147011|S4 |400\n> (12 rows)\n> Jose'\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 10 Apr 1998 09:29:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] error on HAVING clause" }, { "msg_contents": "Question is, if we can't get it fixed completely, does it work well\nenough for us to keep it in 6.3.2?\n\n> \n> On Thu, 9 Apr 1998, Bruce Momjian wrote:\n> \n> Don't worry about time Bruce. I'm not in a hurry.\n> HAVING is an important feature. Finally SELECT statement is complete.\n> I would like to show you another thing about HAVING.\n> \n> prova=> select sno,qty from sp group by sno,qty having qty = 300;\n> sno |qty\n> -----+---\n> S1 |100\n> S1 |200\n> S1 |300\n> S1 |400\n> S2 |300\n> S2 |400\n> S3 |200\n> S4 |200\n> S4 |300\n> S4 |400\n> (10 rows)\n> \n> prova=> select oid,sno,qty from sp group by sno,qty having qty = 300;\n> oid|sno |qty\n> ------+-----+---\n> 147004|S1 |100\n> 147001|S1 |200\n> 147000|S1 |300\n> 147002|S1 |400\n> 147006|S2 |300\n> 147007|S2 |400\n> 147008|S3 |200\n> 147009|S4 |200\n> 147010|S4 |300\n> 147011|S4 |400\n> (10 rows)\n> \n> Solid give me another result. 
Who are rigth ?\n> \n> SOLID SQL Editor (teletype) v.02.20.0007\n> select sno,qty from sp group by sno,qty having qty = 300;\n> SNO QTY\n> --- ---\n> S1 300.\n> S2 300.\n> S4 300.\n> 3 rows fetched.\n> \n> Maybe this one is illegal, but it give me a strange output:\n> \n> prova=> select oid,sno,qty from sp having qty = 300;\n> | | <---------where is the title ????\n> ------+-----+---\n> 147000|S1 |300\n> 147001|S1 |200\n> 147002|S1 |400\n> 147003|S1 |200\n> 147004|S1 |100\n> 147005|S1 |100\n> 147006|S2 |300\n> 147007|S2 |400\n> 147008|S3 |200\n> 147009|S4 |200\n> 147010|S4 |300\n> 147011|S4 |400\n> (12 rows)\n> Jose'\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 10 Apr 1998 09:33:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] error on HAVING clause" }, { "msg_contents": "On Thu, 9 Apr 1998, Bruce Momjian wrote:\n\nDon't worry about time Bruce. I'm not in a hurry.\nHAVING is an important feature. Finally SELECT statement is complete.\nI would like to show you another thing about HAVING.\n\nprova=> select sno,qty from sp group by sno,qty having qty = 300;\nsno |qty\n-----+---\nS1 |100\nS1 |200\nS1 |300\nS1 |400\nS2 |300\nS2 |400\nS3 |200\nS4 |200\nS4 |300\nS4 |400\n(10 rows)\n\nprova=> select oid,sno,qty from sp group by sno,qty having qty = 300;\n oid|sno |qty\n------+-----+---\n147004|S1 |100\n147001|S1 |200\n147000|S1 |300\n147002|S1 |400\n147006|S2 |300\n147007|S2 |400\n147008|S3 |200\n147009|S4 |200\n147010|S4 |300\n147011|S4 |400\n(10 rows)\n\nSolid give me another result. 
Who are rigth ?\n\nSOLID SQL Editor (teletype) v.02.20.0007\nselect sno,qty from sp group by sno,qty having qty = 300;\nSNO QTY\n--- ---\nS1 300.\nS2 300.\nS4 300.\n3 rows fetched.\n \nMaybe this one is illegal, but it give me a strange output:\n\nprova=> select oid,sno,qty from sp having qty = 300;\n | | <---------where is the title ????\n------+-----+---\n147000|S1 |300\n147001|S1 |200\n147002|S1 |400\n147003|S1 |200\n147004|S1 |100\n147005|S1 |100\n147006|S2 |300\n147007|S2 |400\n147008|S3 |200\n147009|S4 |200\n147010|S4 |300\n147011|S4 |400\n(12 rows)\n Jose'\n\n", "msg_date": "Fri, 10 Apr 1998 14:55:10 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] error on HAVING clause" } ]
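[Editorial note on the HAVING thread above: the result Jose' gets from SOLID is the standard-conforming one. In SQL, HAVING filters whole groups after GROUP BY has formed them, and since qty is a grouping column it is constant within each group. A minimal sketch against the same sp table used in the thread:

```sql
-- GROUP BY forms one group per (sno, qty) pair; HAVING then filters groups.
-- qty is a grouping column, so "HAVING qty = 300" keeps only groups whose
-- qty is 300.
SELECT sno, qty
FROM sp
GROUP BY sno, qty
HAVING qty = 300;
-- Per the standard (and SOLID's output): S1|300, S2|300, S4|300
```

The other two queries in the thread select oid without grouping by it or putting it inside an aggregate, which standard SQL rejects outright; that is also why their output looks strange.]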
[ { "msg_contents": "Greetings,\n\nWe are putting the finishing touches on some enhancements to the ODBC\ndriver. One feature, in particular, uses large objects to implement\nOLE data types. We are rather please with the way it is working. Via\nMS Access, we have been able to INSERT and SELECT objects, such as VISIO\ndrawings, Word Documents, and WAV sound clips. However, we've run\ninto two problems.\n\nThe first is, that when we update the OID which points to the large\nobject, the large object is orphaned. I realize that at the time of the\nupdate, we could select the old OID and subsequently drop the large\nobject. The problem is that general purpose tools such as MS Access do\nnot provide an clean framework for invoking such a query.\nSpecifically, UPDATE statements would have to be torn apart to build\nsuch a SELECT statement. In the short term I can build a separate\ndaemon to track down the orphans. I hope VACUUM will eventually handle\nthese.\n\nThe second, and more difficult, problem is that there is no large object\ndata type. When we gather table info in the driver we have no idea that\nan OID may actually be a large object. What we need is a large object\ndata type. Furthermore, the data type must have a stable OID so the we\ncan recognize it when we gather table info. We have tested the driver\nby creating our own date type. However, with the existing function\nscoping of our driver, it is extremely difficult to dynamically locate a\nuser defined large object data type. So for testing we have compiled\nin our \"lo\" data type OID.\n\nWhat I would like to know is, can a large object data type be added as\nan internal data type? The various \"lo_\" functions should eventually\nbe overloaded (or modified) to be able to use this data type. But it\nis not necessary at this time. I believe this addition is a very low\nrisk change, and I would very much like to get to have it in the 6.3.2\nrelease for distribution. 
May I submit the patch, or would someone\nkindly hack it in for us?\n\nGreat work!", "msg_date": "Thu, 09 Apr 1998 18:54:24 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": true, "msg_subject": "New pg_type for large object" }, { "msg_contents": "> What I would like to know is, can a large object data type be added as\n> an internal data type? The various \"lo_\" functions should \n> eventually be overloaded (or modified) to be able to use this data \n> type. But it is not necessary at this time. I believe this addition \n> is a very low risk change, and I would very much like to get to have \n> it in the 6.3.2 release for distribution. May I submit the patch, or \n> would someone kindly hack it in for us?\n\nI'm not certain exactly what you want (didn't read very closely and it\ndoesn't fall in an area I've worked with) but it is not likely to be in\nv6.3.2 since we're already in the freeze period. However, I would\nsuggest revisiting the subject just after the release, perhaps roping in\nothers who have worked with large objects (Peter Mount comes to mind).\n\nThere will be a ~2 month period for working on new capabilities, and\nthis might fit into that.\n\n - Tom\n", "msg_date": "Fri, 10 Apr 1998 01:18:24 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New pg_type for large object" }, { "msg_contents": "> \n> > What I would like to know is, can a large object data type be added as\n> > an internal data type? The various \"lo_\" functions should \n> > eventually be overloaded (or modified) to be able to use this data \n> > type. But it is not necessary at this time. I believe this addition \n> > is a very low risk change, and I would very much like to get to have \n> > it in the 6.3.2 release for distribution. 
May I submit the patch, or \n> > would someone kindly hack it in for us?\n> \n> I'm not certain exactly what you want (didn't read very closely and it\n> doesn't fall in an area I've worked with) but it is not likely to be in\n> v6.3.2 since we're already in the freeze period. However, I would\n> suggest revisiting the subject just after the release, perhaps roping in\n> others who have worked with large objects (Peter Mount comes to mind).\n> \n> There will be a ~2 month period for working on new capabilities, and\n> this might fit into that.\n\nYes, agreed. And it is a good topic to discuss.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 9 Apr 1998 22:10:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New pg_type for large object" }, { "msg_contents": "On Thu, 9 Apr 1998, David Hartwig wrote:\n\n> Greetings,\n> \n> We are putting the finishing touches on some enhancements to the ODBC\n> driver. One feature, in particular, uses large objects to implement\n> OLE data types. We are rather please with the way it is working. Via\n> MS Access, we have been able to INSERT and SELECT objects, such as VISIO\n> drawings, Word Documents, and WAV sound clips. However, we've run\n> into two problems.\n> \n> The first is, that when we update the OID which points to the large\n> object, the large object is orphaned. I realize that at the time of the\n> update, we could select the old OID and subsequently drop the large\n> object. The problem is that general purpose tools such as MS Access do\n> not provide an clean framework for invoking such a query.\n> Specifically, UPDATE statements would have to be torn apart to build\n> such a SELECT statement. In the short term I can build a separate\n> daemon to track down the orphans. 
I hope VACUUM will eventually handle\n> these.\n> \n> The second, and more difficult, problem is that there is no large object\n> data type. When we gather table info in the driver we have no idea that\n> an OID may actually be a large object. What we need is a large object\n> data type. Furthermore, the data type must have a stable OID so the we\n> can recognize it when we gather table info. We have tested the driver\n> by creating our own date type. However, with the existing function\n> scoping of our driver, it is extremely difficult to dynamically locate a\n> user defined large object data type. So for testing we have compiled\n> in our \"lo\" data type OID.\n> \n> What I would like to know is, can a large object data type be added as\n> an internal data type? The various \"lo_\" functions should eventually\n> be overloaded (or modified) to be able to use this data type. But it\n> is not necessary at this time. I believe this addition is a very low\n> risk change, and I would very much like to get to have it in the 6.3.2\n> release for distribution. May I submit the patch, or would someone\n> kindly hack it in for us?\n\nI've actually started to look at this for JDBC, as it too has the orphan\nproblem. I went down two routes. 
One using triggers, but that had the\nproblem that triggers are not inherited, so I started to look at rules.\n\nHowever, as usual, my pay job had to take precidence, so I was about to\nstart looking at it today.\n\nI'd like to see your solution to this.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Fri, 10 Apr 1998 11:06:34 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] New pg_type for large object" }, { "msg_contents": ">>>>> \"daveh\" == David Hartwig <[email protected]> writes:\n\n > This is a multi-part message in MIME format.\n > --------------493C6ADCB141A4B0F7C01648 Content-Type: text/plain;\n > charset=us-ascii Content-Transfer-Encoding: 7bit\n\n > Greetings,\n\n > We are putting the finishing touches on some enhancements to the\n > ODBC driver. One feature, in particular, uses large objects to\n > implement OLE data types. We are rather please with the way it\n > is working. Via MS Access, we have been able to INSERT and\n > SELECT objects, such as VISIO drawings, Word Documents, and WAV\n > sound clips. However, we've run into two problems.\n\n > The first is, that when we update the OID which points to the\n > large object, the large object is orphaned. I realize that at\n > the time of the update, we could select the old OID and\n > subsequently drop the large object. The problem is that general\n > purpose tools such as MS Access do not provide an clean\n > framework for invoking such a query. Specifically, UPDATE\n > statements would have to be torn apart to build such a SELECT\n > statement. In the short term I can build a separate daemon to\n > track down the orphans. 
I hope VACUUM will eventually handle\n > these.\n\nYou should be able to use triggers to fix the problem at the time that \nthe update statement is run.\n\n > The second, and more difficult, problem is that there is no\n > large object data type.  When we gather table info in the driver\n > we have no idea that an OID may actually be a large object.\n > What we need is a large object data type.  Furthermore, the data\n > type must have a stable OID so the we can recognize it when we\n > gather table info.  We have tested the driver by creating our\n > own date type.  However, with the existing function scoping of\n > our driver, it is extremely difficult to dynamically locate a\n > user defined large object data type.  So for testing we have\n > compiled in our \"lo\" data type OID.\n\n > What I would like to know is, can a large object data type be\n > added as an internal data type?  The various \"lo_\" functions\n > should eventually be overloaded (or modified) to be able to use\n > this data type.  But it is not necessary at this time.  I\n > believe this addition is a very low risk change, and I would\n > very much like to get to have it in the 6.3.2 release for\n > distribution.  May I submit the patch, or would someone kindly\n > hack it in for us?\n\n > Great work!\n\n\n > --------------493C6ADCB141A4B0F7C01648 Content-Type:\n > text/x-vcard; charset=us-ascii; name=\"vcard.vcf\"\n > Content-Transfer-Encoding: 7bit Content-Description: Card for\n > David Hartwig Content-Disposition: attachment;\n > filename=\"vcard.vcf\"\n\n > begin: vcard fn: David Hartwig n: Hartwig;David email;internet:\n > [email protected] x-mozilla-cpt: ;0 x-mozilla-html: FALSE\n > version: 2.1 end: vcard\n\n\n > --------------493C6ADCB141A4B0F7C01648--\n\n-- \nKent S. Gordon\nArchitect\niNetSpace Co.\nvoice: (972)851-3494 fax:(972)702-0384 e-mail:[email protected]\n", "msg_date": "Fri, 10 Apr 1998 10:32:38 -0500 (CDT)", "msg_from": "\"Kent S. 
Gordon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New pg_type for large object" }, { "msg_contents": "On Fri, 10 Apr 1998, Kent S. Gordon wrote:\n\n[snip]\n\n> > The first is, that when we update the OID which points to the\n> > large object, the large object is orphaned. I realize that at\n> > the time of the update, we could select the old OID and\n> > subsequently drop the large object. The problem is that general\n> > purpose tools such as MS Access do not provide an clean\n> > framework for invoking such a query. Specifically, UPDATE\n> > statements would have to be torn apart to build such a SELECT\n> > statement. In the short term I can build a separate daemon to\n> > track down the orphans. I hope VACUUM will eventually handle\n> > these.\n> \n> You should be able to use triggers to fix the problem at the time that \n> the update statement is run.\n\nYes that is one possibility, which I have done here, but this is a\ngeneric problem, rather than one unique to a single application.\n\nFor triggers to work, you would have to add the trigger to each table, and\nto each column that may contain a large object. Also, triggers are not\ninherited.\n\nCreating a new lo/blob data type would make this transparent to the user,\nand would permit already written JDBC or ODBC based applications for other\ndatabases to work without modification.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Fri, 10 Apr 1998 19:48:29 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New pg_type for large object" }, { "msg_contents": "Peter T Mount wrote:\n\n> On Thu, 9 Apr 1998, David Hartwig wrote:\n>\n> > Greetings,\n> >\n> > We are putting the finishing touches on some enhancements to the ODBC\n> > driver. 
One feature, in particular, uses large objects to implement\n> > OLE data types. We are rather please with the way it is working. Via\n> > MS Access, we have been able to INSERT and SELECT objects, such as VISIO\n> > drawings, Word Documents, and WAV sound clips. However, we've run\n> > into two problems.\n> >\n> > The first is, that when we update the OID which points to the large\n> > object, the large object is orphaned. I realize that at the time of the\n> > update, we could select the old OID and subsequently drop the large\n> > object. The problem is that general purpose tools such as MS Access do\n> > not provide an clean framework for invoking such a query.\n> > Specifically, UPDATE statements would have to be torn apart to build\n> > such a SELECT statement. In the short term I can build a separate\n> > daemon to track down the orphans. I hope VACUUM will eventually handle\n> > these.\n> >\n> > The second, and more difficult, problem is that there is no large object\n> > data type. When we gather table info in the driver we have no idea that\n> > an OID may actually be a large object. What we need is a large object\n> > data type. Furthermore, the data type must have a stable OID so the we\n> > can recognize it when we gather table info. We have tested the driver\n> > by creating our own date type. However, with the existing function\n> > scoping of our driver, it is extremely difficult to dynamically locate a\n> > user defined large object data type. So for testing we have compiled\n> > in our \"lo\" data type OID.\n> >\n> > What I would like to know is, can a large object data type be added as\n> > an internal data type? The various \"lo_\" functions should eventually\n> > be overloaded (or modified) to be able to use this data type. But it\n> > is not necessary at this time. I believe this addition is a very low\n> > risk change, and I would very much like to get to have it in the 6.3.2\n> > release for distribution. 
May I submit the patch, or would someone\n> > kindly hack it in for us?\n>\n> I've actually started to look at this for JDBC, as it too has the orphan\n> problem. I went down two routes. One using triggers, but that had the\n> problem that triggers are not inherited, so I started to look at rules.\n>\n> However, as usual, my pay job had to take precidence, so I was about to\n> start looking at it today.\n>\n> I'd like to see your solution to this.\n\nWe are going to wait to get a large object data type built into 6.4. In the\nmeantime we are going to require the DBA to create an \"lo\" data type in the\ndatabase. We will include the SQL create script as part of the driver release.\nThen, we'll query the database for the oid of the \"lo\" data type at connect\ntime. Not very elegant, but it get the job done until 6.4.\n\nAs far as those lo orphans go, we'll will put together a cleanup script. to\nsearch for \"lo\" attributes in each database and make sure that something points\neach large object in pg_class. We will have to distribute this script as part\nof the ODBC package to be run at some interval on the server. 
Eventually, it\nwould seem that this should be part of the VACUUM process.\n\nMarc,\n\nAny word on when\nthis ODBC solution will be available.", "msg_date": "Mon, 13 Apr 1998 10:29:17 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] New pg_type for large object" }, { "msg_contents": "On Mon, 13 Apr 1998, David Hartwig wrote:\n\n> Marc,Any word on when this ODBC this solution will be available.\n\n\tSource code replaced...have to do the readme files and whatnot\ntonight from home...submit patches to me as appropriate, and, of course,\nmonitor the interfaces mailing list...\n\n\n\n", "msg_date": "Mon, 13 Apr 1998 11:05:53 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] New pg_type for large object" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> On Mon, 13 Apr 1998, David Hartwig wrote:\n>\n> > Marc,Any word on when this ODBC this solution will be available.\n>\n> Source code replaced...have to do the readme files and whatnot\n> tonight from home...submit patches to me as appropriate, and, of course,\n> monitor the interfaces mailing list...\n\nMarc,\n\nDid you get the README.TXT I sent to you last week? Will resend or revise\nif necessary.\n\nAlso, I need to know when you took (or will take) the last snapshot from our\npage, so that I know our sources will be in sync.\n\nWhat is the target date for the 6.3.2 cut? 
I would like to get our latest\nsnapshot in that release.", "msg_date": "Mon, 13 Apr 1998 12:10:58 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] New pg_type for large object" }, { "msg_contents": "On Mon, 13 Apr 1998, David Hartwig wrote:\n\n> \n> \n> The Hermit Hacker wrote:\n> \n> > On Mon, 13 Apr 1998, David Hartwig wrote:\n> >\n> > > Marc,Any word on when this ODBC this solution will be available.\n> >\n> > Source code replaced...have to do the readme files and whatnot\n> > tonight from home...submit patches to me as appropriate, and, of course,\n> > monitor the interfaces mailing list...\n> \n> Marc,\n> \n> Did you get the README.TXT I sent to you last week? Will resend or revise\n> if necessary.\n\n\tGot it, but its in my mailbox at home, so will add it later\ntonight, unless you want to resend it to me...\n\n> Also, I need to know when you took (or will take) the last snapshot from our\n> page, so that I know our sources will be in sync.\n\n\tBest thing to do, at all times, is grab the latest sources via\nCVSup and make sure you stay sync'd with that...not sure the date on the\nlast snapshot, but I leave it up to you to keep me in sync :)\n\n> What is the target date for the 6.3.2 cut? I would like to get our latest\n> snapshot in that release.\n\n\t15th, but I'm being a stickler righ tnow for my problems :)\n\n\n", "msg_date": "Mon, 13 Apr 1998 13:09:32 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] New pg_type for large object" } ]
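[Editorial note on the large-object thread above: Kent's trigger suggestion is essentially the approach PostgreSQL later shipped as the contrib "lo" module: an "lo" type layered over oid, plus a lo_manage() trigger function that unlinks the old large object whenever the referencing value is updated or the row deleted. A sketch using that later module; the table and column names here are illustrative, not from the thread:

```sql
-- Requires the contrib/lo extension, which provides the "lo" type
-- and the lo_manage() trigger function.
CREATE EXTENSION lo;

CREATE TABLE documents (
    title text,
    body  lo       -- stored as an OID, but recognizable as a large object
);

-- Unlink the old large object when a row's body is replaced or the row
-- is deleted, so no orphan is left behind.
CREATE TRIGGER t_body
    BEFORE UPDATE OR DELETE ON documents
    FOR EACH ROW
    EXECUTE PROCEDURE lo_manage(body);
```

Orphans that slip through anyway can be swept up in batch with the contrib vacuumlo utility, which does roughly what the cleanup daemon David describes would do.]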
[ { "msg_contents": "\nTo be released on April 15th...\n\nI've changed the cron job that creates the snapshots to run daily instead\nof weekly...recommend ppl use CVSup instead of ftp, but that's your\nperogative :)\n\nI won't build a v6.3.1-v6.3.2 patch until April 15th itself...\n\nTry it, test it, break it...this will be the final 'version' until v6.4 is\nready to be released :)\n\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 9 Apr 1998 20:07:52 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "v6.3.2 ..." }, { "msg_contents": ">To be released on April 15th...\n>\n>I've changed the cron job that creates the snapshots to run daily instead\n>of weekly...recommend ppl use CVSup instead of ftp, but that's your\n>perogative :)\n>\n>I won't build a v6.3.1-v6.3.2 patch until April 15th itself...\n>\n>Try it, test it, break it...this will be the final 'version' until v6.4 is\n>ready to be released :)\n\nI've just tried the April 10 snapshot on my FreeBSD box. I'm using the\nlocalized version of Tcl/Tk and I want to let configure detect them.\nPlease apply following patches to do that.\n--\nTatsuo Ishii\[email protected]\n---------------------------------------------------------------------\n*** configure.in~\tFri Apr 10 16:00:37 1998\n--- configure.in\tFri Apr 10 17:10:21 1998\n***************\n*** 612,618 ****\n dnl Check for Tcl archive\n if test \"$USE_TCL\" = \"true\"; then\n \tTCL_LIB=\n! 
\ttcl_libs=\"tcl8.0 tcl80 tcl7.6 tcl76 tcl7.6jp tcl76jp tcl\"\n \tfor tcl_lib in $tcl_libs; do\n \t\tif test -z \"$TCL_LIB\"; then\n \t\t\tAC_CHECK_LIB($tcl_lib, main, TCL_LIB=$tcl_lib)\n***************\n*** 667,673 ****\n \tLDFLAGS=\"$LDFLAGS $X_LIBS\"\n \n \tTK_LIB=\n! \ttk_libs=\"tk8.0 tk80 tk4.2 tk42 tk\"\n \tfor tk_lib in $tk_libs; do\n \t\tif test -z \"$TK_LIB\"; then\n \t\t\tAC_CHECK_LIB($tk_lib, main, TK_LIB=$tk_lib,, $TCL_LIB $X_PRE_LIBS $X_LIBS $X11_LIBS)\n--- 667,673 ----\n \tLDFLAGS=\"$LDFLAGS $X_LIBS\"\n \n \tTK_LIB=\n! \ttk_libs=\"tk8.0 tk80 tk4.2 tk42 tj4.2jp tk42jp tk\"\n \tfor tk_lib in $tk_libs; do\n \t\tif test -z \"$TK_LIB\"; then\n \t\t\tAC_CHECK_LIB($tk_lib, main, TK_LIB=$tk_lib,, $TCL_LIB $X_PRE_LIBS $X_LIBS $X11_LIBS)\n", "msg_date": "Fri, 10 Apr 1998 17:21:45 +0900", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.3.2 ... " }, { "msg_contents": "On Thu, 9 Apr 1998, The Hermit Hacker wrote:\n\n> \n> To be released on April 15th...\n> \n> I've changed the cron job that creates the snapshots to run daily instead\n> of weekly...recommend ppl use CVSup instead of ftp, but that's your\n> perogative :)\n> \n> I won't build a v6.3.1-v6.3.2 patch until April 15th itself...\n> \n> Try it, test it, break it...this will be the final 'version' until v6.4 is\n> ready to be released :)\n\nGood. Apart from bug fixes, I'm already working on the 6.4 JDBC driver (as\nin new features like mapping java <-> postgresql classes), and I have the\nJDBC 2.0 specification to look at yet.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Fri, 10 Apr 1998 11:48:17 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] v6.3.2 ..." } ]
[ { "msg_contents": "Hello. I'm not sure I'm posting in the appropriate list and please accept my\napologies if not. I'm currently working on a database interface in Linux/X and\nam using Postgres for the preliminary data access and testing. I hope to\nprovide access to other engines in the future. Because I'm not a Pg expert\nsome technical issues arise from time to time which I am unable to answer\nmyself. I'm wondering...\n\n1. If this is an appropriate forum for these type of questions\nor\n2. If anyone could get me in touch with a person/forum for these type\nquestions\nor\n3. If anyone would be willing to share some time and brain cells as a contact\nfor Pg technical issues\nor\n4. If anyone would be willing to contribute code to the project\n\nI doubt I would generate much traffic, however a quick answer is worth a\nthousand lines of read code and much coffee :)\n\nOh yes, I've not subscribed to this list so please e-mail direct if you wish\nto respond.\n\nThanks\nJim\n-----------------\nMail: [email protected]\n", "msg_date": "Thu, 09 Apr 1998 20:40:20 -0400", "msg_from": "JB <[email protected]>", "msg_from_op": true, "msg_subject": "Tech questions" }, { "msg_contents": "On Thu, 9 Apr 1998, JB wrote:\n\n> Hello. I'm not sure I'm posting in the appropriate list and please accept my\n> apologies if not. I'm currently working on a database interface in Linux/X and\n> am using Postgres for the preliminary data access and testing. I hope to\n> provide access to other engines in the future. Because I'm not a Pg expert\n> some technical issues arise from time to time which I am unable to answer\n> myself. I'm wondering...\n> \n> 1. If this is an appropriate forum for these type of questions\n> or\n> 2. If anyone could get me in touch with a person/forum for these type\n> questions\n> or\n> 3. If anyone would be willing to share some time and brain cells as a contact\n> for Pg technical issues\n> or\n> 4. 
If anyone would be willing to contribute code to the project\n> \n> I doubt I would generate much traffic, however a quick answer is worth a\n> thousand lines of read code and much coffee :)\n> \n> Oh yes, I've not subscribed to this list so please e-mail direct if you wish\n> to respond.\n\nProbably would help to ask the question so that we can redirect as\nappropriate :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 9 Apr 1998 22:17:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tech questions" } ]
[ { "msg_contents": "> > I just installed 6.3.1 and am still encountering the same\n> > problem in 6.2.1 .. that is... libpq can't manage large query \n> > results - -is always returning a segmentation fault . :(\n> > select * from foo, bar;\n> > - - where foo has 73000 records and bar 22000 records,\n> > both with two fields of char(6) each;\n> > this kind of query (when issued using 'psql') will crash and \n> > returns a 'segmentation fault (core dumped)'\n> Holy sh*t, Batman...this is going to return 1.6 _BILLION_ rows!!!!!\n> And you're complaining that it dumps core?!?!?!?!?\n> Not trying to be _too_ sarcastic, but if you _were_ able to process \n> the results for this query at 1000 rows per second around the clock, \n> it would take almost three weeks to finish?!?\n\nThat's funny :)\n\n - Tom\n", "msg_date": "Fri, 10 Apr 1998 01:08:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] libpg - segmentation fault" } ]
[ { "msg_contents": "> > What I would like to know is, can a large object data type be added as\n> > an internal data type? The various \"lo_\" functions should \n> > eventually be overloaded (or modified) to be able to use this data \n> > type. But it is not necessary at this time. I believe this addition \n> > is a very low risk change, and I would very much like to get to have \n> > it in the 6.3.2 release for distribution. May I submit the patch, or \n> > would someone kindly hack it in for us?\n> \n> I'm not certain exactly what you want (didn't read very closely and it\n> doesn't fall in an area I've worked with) but it is not likely to be in\n> v6.3.2 since we're already in the freeze period. However, I would\n> suggest revisiting the subject just after the release, perhaps roping in\n> others who have worked with large objects (Peter Mount comes to mind).\n\nThink he means that it would be nice if there was a separate type for\nrepresenting large object oids.\n\nHe has managed to get MS Access to store OLE objects in a table as a large\nobject thru the ODBC driver. But the driver needs a way to tell that the\ncolumn represents a large object and not just any old oid.\n\nA sort of sub-class of Oid if you will...a type of lo_oid that _is_ an oid,\nbut has a separate type in the system tables.\n\ndarrenk\n", "msg_date": "Thu, 9 Apr 1998 21:35:34 -0400", "msg_from": "[email protected] (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New pg_type for large object" }, { "msg_contents": "On Thu, 9 Apr 1998, Darren King wrote:\n\n> > > What I would like to know is, can a large object data type be added as\n> > > an internal data type? The various \"lo_\" functions should \n> > > eventually be overloaded (or modified) to be able to use this data \n> > > type. But it is not necessary at this time. I believe this addition \n> > > is a very low risk change, and I would very much like to get to have \n> > > it in the 6.3.2 release for distribution. 
May I submit the patch, or \n> > > would someone kindly hack it in for us?\n> > \n> > I'm not certain exactly what you want (didn't read very closely and it\n> > doesn't fall in an area I've worked with) but it is not likely to be in\n> > v6.3.2 since we're already in the freeze period. However, I would\n> > suggest revisiting the subject just after the release, perhaps roping in\n> > others who have worked with large objects (Peter Mount comes to mind).\n> \n> Think he means that it would be nice if there was a separate type for\n> representing large object oids.\n\nThat's exactly the same as I've been looking into for JDBC, and we did\ndiscuss this about 3 weeks ago.\n\n> He has managed to get MS Access to store OLE objects in a table as a large\n> object thru the ODBC driver. But the driver needs a way to tell that the\n> column represents a large object and not just any old oid.\n>\n> A sort of sub-class of Oid if you will...a type of lo_oid that _is_ an oid,\n> but has a separate type in the system tables.\n\nWhat I was looking at was a type that stores the oid of the large object\nin the table, but if it is deleted or updated, then the large object that\nwas represented is also destroyed, rather than being orphaned.\n\nInfact, to bring up another idea that was being discussed a few months ago\nwas enabling the text type to store large values (ie those too large to\nfit in the table block side - normally 8k) as large objects. The same\nunderlying rules to handle orphaning would also apply. \n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Fri, 10 Apr 1998 11:45:50 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New pg_type for large object" } ]
[ { "msg_contents": "Hi,\n\nThis question was sent to me by a user who uses PostgreSQL 6.3.1.\nIs this normal?\n(Note that the patch for src/backend/optimizer/path/prune.c created by \nVadim did not help)\n--\nTatsuo Ishii\[email protected]\n------------------------------------------------------------------\nThe following query seems to generate a rather slow query plan. \n\nexplain select * from product,order_tbl where\nproduct.serial=order_tbl.serial and product.serial in (select serial\nfrom order_tbl where cust_id='ABCDE');\n\n NOTICE: QUERY PLAN:\n\n Hash Join (cost=906.09 size=744 width=110)\n -> Seq Scan on order_tbl (cost=296.13 size=6822 width=36)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on product (cost=358.29 size=744 width=74)\n SubPlan\n -> Index Scan on order_tbl (cost=2.05 size=1 width=12)\n\n EXPLAIN\n\nproduct and order_tbl are defined as follows:\n\ncreate table product (\nserial char(10) primary key,\npname char(15) not null,\nprice int2);\ncreate index prod_name on product using hash(pname);\n\ncreate table order_tbl (\ncust_id char(5) primary key,\nserial char(10) not null,\nnums int2,\no_date date);\ncreate index order_ser on order_tbl using hash(serial);\n\n* product has 7289 tuples, and order_tbl has 6818 tuples.\n", "msg_date": "Fri, 10 Apr 1998 11:14:32 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "subselect and optimizer" }, { "msg_contents": "I will say we have an optimization problem with tables being referenced\nmultiple times in a query, but I don't know if this is the cause, though\nyou could test it by making a copy of order_tbl with another name, and\ntesting the speed.\n> \n> Hi,\n> \n> This question was sent to me by a user who uses PostgreSQL 6.3.1.\n> Is this normal?\n> (Note that the patch for src/backend/optimizer/path/prune.c created by \n> Vadim did not help)\n> --\n> Tatsuo Ishii\n> [email protected]\n> ------------------------------------------------------------------\n> The 
following query seems to generate a rather slow query plan. \n> \n> explain select * from product,order_tbl where\n> product.serial=order_tbl.serial and product.serial in (select serial\n> from order_tbl where cust_id='ABCDE');\n> \n> NOTICE: QUERY PLAN:\n> \n> Hash Join (cost=906.09 size=744 width=110)\n> -> Seq Scan on order_tbl (cost=296.13 size=6822 width=36)\n> -> Hash (cost=0.00 size=0 width=0)\n> -> Seq Scan on product (cost=358.29 size=744 width=74)\n> SubPlan\n> -> Index Scan on order_tbl (cost=2.05 size=1 width=12)\n> \n> EXPLAIN\n> \n> product and order_tbl are defined as follows:\n> \n> create table product (\n> serial char(10) primary key,\n> pname char(15) not null,\n> price int2);\n> create index prod_name on product using hash(pname);\n> \n> create table order_tbl (\n> cust_id char(5) primary key,\n> serial char(10) not null,\n> nums int2,\n> o_date date);\n> create index order_ser on order_tbl using hash(serial);\n> \n> * product has 7289 tuples, and order_tbl has 6818 tuples.\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 9 Apr 1998 22:42:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": ">I will say we have an optimization problem with tables being referenced\n>multiple times in a query, but I don't know if this is the cause, though\n>you could test it by making a copy of order_tbl with another name, and\n>testing the speed.\n\nThank you for your suggestion. 
I made a copy of order_tbl (named\norder_tbl1) and did a query:\n\n explain select * from product,order_tbl where \\\n product.serial=order_tbl.serial and product.serial in \\\n (select serial from order_tbl1 where cust_id='H3550');\n NOTICE: QUERY PLAN:\n\n Hash Join (cost=934.65 size=798 width=112)\n -> Seq Scan on order_tbl (cost=296.82 size=6843 width=36)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on product (cost=383.71 size=797 width=76)\n SubPlan\n -> Index Scan on order_tbl1 (cost=2.05 size=1 width=12)\n\nSeems like no change here?\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Fri, 10 Apr 1998 19:45:44 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect and optimizer " }, { "msg_contents": "> \n> Hi,\n> \n> Vadim helped me with the patch for my query. \n> \n> But this patch still didn't help for a simple join without a where \n> clause. The query plan says it uses two sequential scans, where 6.2.1 \n> uses two index scans.\n> \n> Strange stuff.\n> \n> Seems like more than one problem it the optimizer code ...\n> \n\nBut we didn't have subselcts in 6.2.1?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 10 Apr 1998 17:08:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": "Hi,\n\nVadim helped me with the patch for my query. \n\nBut this patch still didn't help for a simple join without a where \nclause. 
The query plan says it uses two sequential scans, where 6.2.1 \nuses two index scans.\n\nStrange stuff.\n\nSeems like more than one problem it the optimizer code ...\n\n\nCiao\n\nDas Boersenspielteam.\n\n---------------------------------------------------------------------------\n http://www.boersenspiel.de\n \t Das Boersenspiel im Internet\n *Realitaetsnah* *Kostenlos* *Ueber 6000 Spieler*\n---------------------------------------------------------------------------\n", "msg_date": "Fri, 10 Apr 1998 22:30:53 +0000", "msg_from": "\"Boersenspielteam\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer " }, { "msg_contents": "> > But this patch still didn't help for a simple join without a where \n> > clause. The query plan says it uses two sequential scans, where 6.2.1 \n> > uses two index scans.\n>\n> But we didn't have subselcts in 6.2.1?\n\nNo, but in the more general case of a simple join over two tables \nwith fields with an index declared on them.\n\nsay: Select * from Trans, Spieler where \nSpieler.spieler_nr=Trans.spieler_nr\n\nUses indices in 6.2.1, doesn't use them in 6.3.1 (two seq scans).\n\nI just wanted to remind you, that these problems are not restricted \nto subqueries, but seem to be a more general 'flaw' in 6.3.x .\n\nHope this helps.\n\nUlrich\n", "msg_date": "Sat, 11 Apr 1998 12:27:01 +0000", "msg_from": "\"Boersenspielteam\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": "> \n> > > But this patch still didn't help for a simple join without a where \n> > > clause. 
The query plan says it uses two sequential scans, where 6.2.1 \n> > > uses two index scans.\n> >\n> > But we didn't have subselcts in 6.2.1?\n> \n> No, but in the more general case of a simple join over two tables \n> with fields with an index declared on them.\n> \n> say: Select * from Trans, Spieler where \n> Spieler.spieler_nr=Trans.spieler_nr\n> \n> Uses indices in 6.2.1, doesn't use them in 6.3.1 (two seq scans).\n> \n> I just wanted to remind you, that these problems are not restricted \n> to subqueries, but seem to be a more general 'flaw' in 6.3.x .\n\nAh, but that is fixed in 6.3.2 beta. We particularly waited for a fix\nfor this before releasing a new beta. But you say you have Vadim's fix\nthat is in 6.3.2, and it still doesn't work?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 11 Apr 1998 19:06:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": "> Yep, exactly. The query with the where clause is fixed after \n> applying Vadim's prune.c patch, simple join still uses two seq scans \n> :-(\n> \n> I uploaded test data and Vadim fixed one file, but asked you \n> (Bruce) to look over other files of the optimizer code. There seem \n> to be other bugs in the optimizer code, which were introduced between \n> 6.2.1 and 6.3. We have seen about 5-6 error reports from different \n> people, from the simpliest queries like my simple join to rather \n> complex subqueries. But when a simple join doesn't work (ok, it \n> works, but kind of crawls), this error is supposed to pop up under \n> other circumstances too.\n> \n> Hope you can find this nasty little bug, cause it makes postgres \n> unusable. 
Especially before going into development again.\n> \n> See the mailinglist archives for a post of mine. There is a link in \n> it,where you can download the test data, it should still be \n> there. (don't have access to this from home)\n> \n> I greatly appreciate all the time and hard work all you \n> PostgreSQL-hackers and contributors put into this fantastic freeware \n> product. Just to let you know.\n\nHere is the prune.c file from 6.2.1. Please try it and let me know if\nit fixes the problem:\n\n---------------------------------------------------------------------------\n\n/*-------------------------------------------------------------------------\n *\n * prune.c--\n *\t Routines to prune redundant paths and relations\n *\n * Copyright (c) 1994, Regents of the University of California\n *\n *\n * IDENTIFICATION\n *\t $Header: /usr/local/cvsroot/pgsql/src/backend/optimizer/path/prune.c,v 1.6 1997/09/08 21:45:08 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n#include \"postgres.h\"\n\n#include \"nodes/pg_list.h\"\n#include \"nodes/relation.h\"\n\n#include \"optimizer/internal.h\"\n#include \"optimizer/cost.h\"\n#include \"optimizer/paths.h\"\n#include \"optimizer/pathnode.h\"\n\n#include \"utils/elog.h\"\n\n\nstatic List *prune_joinrel(Rel *rel, List *other_rels);\n\n/*\n * prune-joinrels--\n *\t Removes any redundant relation entries from a list of rel nodes\n *\t 'rel-list'.\n *\n * Returns the resulting list.\n *\n */\nList\t *\nprune_joinrels(List *rel_list)\n{\n\tList\t *temp_list = NIL;\n\n\tif (rel_list != NIL)\n\t{\n\t\ttemp_list = lcons(lfirst(rel_list),\n\t\t\t\t prune_joinrels(prune_joinrel((Rel *) lfirst(rel_list),\n\t\t\t\t\t\t\t\t\t\t\t\tlnext(rel_list))));\n\t}\n\treturn (temp_list);\n}\n\n/*\n * prune-joinrel--\n *\t Prunes those relations from 'other-rels' that are redundant with\n *\t 'rel'. A relation is redundant if it is built up of the same\n *\t relations as 'rel'. 
Paths for the redundant relation are merged into\n *\t the pathlist of 'rel'.\n *\n * Returns a list of non-redundant relations, and sets the pathlist field\n * of 'rel' appropriately.\n *\n */\nstatic List *\nprune_joinrel(Rel *rel, List *other_rels)\n{\n\tList\t *i = NIL;\n\tList\t *t_list = NIL;\n\tList\t *temp_node = NIL;\n\tRel\t\t *other_rel = (Rel *) NULL;\n\n\tforeach(i, other_rels)\n\t{\n\t\tother_rel = (Rel *) lfirst(i);\n\t\tif (same(rel->relids, other_rel->relids))\n\t\t{\n\t\t\trel->pathlist = add_pathlist(rel,\n\t\t\t\t\t\t\t\t\t\t rel->pathlist,\n\t\t\t\t\t\t\t\t\t\t other_rel->pathlist);\n\t\t\tt_list = nconc(t_list, NIL);\t\t/* XXX is this right ? */\n\t\t}\n\t\telse\n\t\t{\n\t\t\ttemp_node = lcons(other_rel, NIL);\n\t\t\tt_list = nconc(t_list, temp_node);\n\t\t}\n\t}\n\treturn (t_list);\n}\n\n/*\n * prune-rel-paths--\n *\t For each relation entry in 'rel-list' (which corresponds to a join\n *\t relation), set pointers to the unordered path and cheapest paths\n *\t (if the unordered path isn't the cheapest, it is pruned), and\n *\t reset the relation's size field to reflect the join.\n *\n * Returns nothing of interest.\n *\n */\nvoid\nprune_rel_paths(List *rel_list)\n{\n\tList\t *x = NIL;\n\tList\t *y = NIL;\n\tPath\t *path = NULL;\n\tRel\t\t *rel = (Rel *) NULL;\n\tJoinPath *cheapest = (JoinPath *) NULL;\n\n\tforeach(x, rel_list)\n\t{\n\t\trel = (Rel *) lfirst(x);\n\t\trel->size = 0;\n\t\tforeach(y, rel->pathlist)\n\t\t{\n\t\t\tpath = (Path *) lfirst(y);\n\n\t\t\tif (!path->p_ordering.ord.sortop)\n\t\t\t{\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tcheapest = (JoinPath *) prune_rel_path(rel, path);\n\t\tif (IsA_JoinPath(cheapest))\n\t\t{\n\t\t\trel->size = compute_joinrel_size(cheapest);\n\t\t}\n\t\telse\n\t\t\telog(WARN, \"non JoinPath called\");\n\t}\n}\n\n\n/*\n * prune-rel-path--\n *\t Compares the unordered path for a relation with the cheapest path. 
If\n *\t the unordered path is not cheapest, it is pruned.\n *\n *\t Resets the pointers in 'rel' for unordered and cheapest paths.\n *\n * Returns the cheapest path.\n *\n */\nPath\t *\nprune_rel_path(Rel *rel, Path *unorderedpath)\n{\n\tPath\t *cheapest = set_cheapest(rel, rel->pathlist);\n\n\t/* don't prune if not pruneable -- JMH, 11/23/92 */\n\tif (unorderedpath != cheapest\n\t\t&& rel->pruneable)\n\t{\n\n\t\trel->unorderedpath = (Path *) NULL;\n\t\trel->pathlist = lremove(unorderedpath, rel->pathlist);\n\t}\n\telse\n\t{\n\t\trel->unorderedpath = (Path *) unorderedpath;\n\t}\n\n\treturn (cheapest);\n}\n\n/*\n * merge-joinrels--\n *\t Given two lists of rel nodes that are already\n *\t pruned, merge them into one pruned rel node list\n *\n * 'rel-list1' and\n * 'rel-list2' are the rel node lists\n *\n * Returns one pruned rel node list\n */\nList\t *\nmerge_joinrels(List *rel_list1, List *rel_list2)\n{\n\tList\t *xrel = NIL;\n\n\tforeach(xrel, rel_list1)\n\t{\n\t\tRel\t\t *rel = (Rel *) lfirst(xrel);\n\n\t\trel_list2 = prune_joinrel(rel, rel_list2);\n\t}\n\treturn (append(rel_list1, rel_list2));\n}\n\n/*\n * prune_oldrels--\n *\t If all the joininfo's in a rel node are inactive,\n *\t that means that this node has been joined into\n *\t other nodes in all possible ways, therefore\n *\t this node can be discarded. 
If not, it will cause\n *\t extra complexity of the optimizer.\n *\n * old_rels is a list of rel nodes\n *\n * Returns a new list of rel nodes\n */\nList\t *\nprune_oldrels(List *old_rels)\n{\n\tRel\t\t *rel;\n\tList\t *joininfo_list,\n\t\t\t *xjoininfo;\n\n\tif (old_rels == NIL)\n\t\treturn (NIL);\n\n\trel = (Rel *) lfirst(old_rels);\n\tjoininfo_list = rel->joininfo;\n\tif (joininfo_list == NIL)\n\t\treturn (lcons(rel, prune_oldrels(lnext(old_rels))));\n\n\tforeach(xjoininfo, joininfo_list)\n\t{\n\t\tJInfo\t *joininfo = (JInfo *) lfirst(xjoininfo);\n\n\t\tif (!joininfo->inactive)\n\t\t\treturn (lcons(rel, prune_oldrels(lnext(old_rels))));\n\t}\n\treturn (prune_oldrels(lnext(old_rels)));\n}\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 12 Apr 1998 10:11:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": "Hi Bruce,\n\n\n> > > > But this patch still didn't help for a simple join without a where \n> > > > clause. The query plan says it uses two sequential scans, where 6.2.1 \n> > > > uses two index scans.\n> > >\n> > > But we didn't have subselcts in 6.2.1?\n> > \n> > No, but in the more general case of a simple join over two tables \n> > with fields with an index declared on them.\n> > \n> > say: Select * from Trans, Spieler where \n> > Spieler.spieler_nr=Trans.spieler_nr\n> > \n> > Uses indices in 6.2.1, doesn't use them in 6.3.1 (two seq scans).\n> > \n> > I just wanted to remind you, that these problems are not restricted \n> > to subqueries, but seem to be a more general 'flaw' in 6.3.x .\n> \n> Ah, but that is fixed in 6.3.2 beta. We particularly waited for a fix\n> for this before releasing a new beta. 
But you say you have Vadim's fix\n> that is in 6.3.2, and it still doesn't work?\n\nYep, exactly. The query with the where clause is fixed after \napplying Vadim's prune.c patch, simple join still uses two seq scans \n:-(\n\nI uploaded test data and Vadim fixed one file, but asked you \n(Bruce) to look over other files of the optimizer code. There seem \nto be other bugs in the optimizer code, which were introduced between \n6.2.1 and 6.3. We have seen about 5-6 error reports from different \npeople, from the simpliest queries like my simple join to rather \ncomplex subqueries. But when a simple join doesn't work (ok, it \nworks, but kind of crawls), this error is supposed to pop up under \nother circumstances too.\n\nHope you can find this nasty little bug, cause it makes postgres \nunusable. Especially before going into development again.\n\nSee the mailinglist archives for a post of mine. There is a link in \nit,where you can download the test data, it should still be \nthere. (don't have access to this from home)\n\nI greatly appreciate all the time and hard work all you \nPostgreSQL-hackers and contributors put into this fantastic freeware \nproduct. Just to let you know.\n\nCiao\n\nUlrich\n\n\n\n\n\nUlrich Voss \\ \\ / /__ / ___|__ _| |\nVoCal web publishing \\ \\ / / _ \\| | / _` | |\[email protected] \\ V / (_) | |__| (_| | |\nhttp://www.vocalweb.de \\_/ \\___/ \\____\\__,_|_|\nhttp://www.boersenspiel.de web publishing\n", "msg_date": "Sun, 12 Apr 1998 14:32:16 +0000", "msg_from": "\"Boersenspielteam\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": "> \n> \n> > Here is the prune.c file from 6.2.1. Please try it and let me know if\n> > it fixes the problem:\n> \n> Nope, doesn't even compile ...\n> \n> ---\n> \n> gcc -I../../../include -I../../../backend -I/usr/include/curses -O2 \n> -Wall -Wmissing-prototypes -I../.. 
-c prune.c -o prune.o\n> prune.c:39: conflicting types for `prune_joinrels'\n> ../../../include/optimizer/paths.h:95: previous declaration of\n> `prune_joinrels' prune.c: In function `prune_rel_paths': prune.c:127:\n> `WARN' undeclared (first use this function) prune.c:127: (Each\n> undeclared identifier is reported only once prune.c:127: for each\n> function it appears in.)\n\nOK, please try this patch. It reverts us to 6.2.1 for this file only. \nIt may have to be cleaned up a little. Not sure. This is against the\ncurrent source tree, which is probably the same as 6.3.2 beta.\n\n---------------------------------------------------------------------------\n\n\n*** /pg/backend/optimizer/path/prune.c\tThu Apr 2 10:18:46 1998\n--- /root/prune.c\tThu Apr 2 11:40:32 1998\n***************\n*** 7,13 ****\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/optimizer/path/prune.c,v 1.13 1998/04/02 07:27:15 vadim Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 7,13 ----\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/optimizer/path/prune.c,v 1.6 1997/09/08 21:45:08 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 29,50 ****\n /*\n * prune-joinrels--\n *\t Removes any redundant relation entries from a list of rel nodes\n! *\t 'rel-list'. Obviosly, the first relation can't be a duplicate.\n *\n * Returns the resulting list.\n *\n */\n! void\n prune_joinrels(List *rel_list)\n {\n! \tList\t *i;\n \n! \t/*\n! \t * rel_list can shorten while running as duplicate relations are\n! \t * deleted\n! \t */\n! \tforeach(i, rel_list)\n! \t\tlnext(i) = prune_joinrel((Rel *) lfirst(i), lnext(i));\n }\n \n /*\n--- 29,51 ----\n /*\n * prune-joinrels--\n *\t Removes any redundant relation entries from a list of rel nodes\n! *\t 'rel-list'.\n *\n * Returns the resulting list.\n *\n */\n! 
List\t *\n prune_joinrels(List *rel_list)\n {\n! \tList\t *temp_list = NIL;\n \n! \tif (rel_list != NIL)\n! \t{\n! \t\ttemp_list = lcons(lfirst(rel_list),\n! \t\t\t\t prune_joinrels(prune_joinrel((Rel *) lfirst(rel_list),\n! \t\t\t\t\t\t\t\t\t\t\t\tlnext(rel_list))));\n! \t}\n! \treturn (temp_list);\n }\n \n /*\n***************\n*** 62,85 ****\n prune_joinrel(Rel *rel, List *other_rels)\n {\n \tList\t *i = NIL;\n! \tList\t *result = NIL;\n \n \tforeach(i, other_rels)\n \t{\n! \t\tRel\t *other_rel = (Rel *) lfirst(i);\n! \t\t\n \t\tif (same(rel->relids, other_rel->relids))\n \t\t{\n \t\t\trel->pathlist = add_pathlist(rel,\n \t\t\t\t\t\t\t\t\t\t rel->pathlist,\n \t\t\t\t\t\t\t\t\t\t other_rel->pathlist);\n \t\t}\n \t\telse\n \t\t{\n! \t\t\tresult = nconc(result, lcons(other_rel, NIL));\n \t\t}\n \t}\n! \treturn (result);\n }\n \n /*\n--- 63,89 ----\n prune_joinrel(Rel *rel, List *other_rels)\n {\n \tList\t *i = NIL;\n! \tList\t *t_list = NIL;\n! \tList\t *temp_node = NIL;\n! \tRel\t\t *other_rel = (Rel *) NULL;\n \n \tforeach(i, other_rels)\n \t{\n! \t\tother_rel = (Rel *) lfirst(i);\n \t\tif (same(rel->relids, other_rel->relids))\n \t\t{\n \t\t\trel->pathlist = add_pathlist(rel,\n \t\t\t\t\t\t\t\t\t\t rel->pathlist,\n \t\t\t\t\t\t\t\t\t\t other_rel->pathlist);\n+ \t\t\tt_list = nconc(t_list, NIL);\t\t/* XXX is this right ? */\n \t\t}\n \t\telse\n \t\t{\n! \t\t\ttemp_node = lcons(other_rel, NIL);\n! \t\t\tt_list = nconc(t_list, temp_node);\n \t\t}\n \t}\n! \treturn (t_list);\n }\n \n /*\n***************\n*** 120,126 ****\n \t\t\trel->size = compute_joinrel_size(cheapest);\n \t\t}\n \t\telse\n! \t\t\telog(ERROR, \"non JoinPath called\");\n \t}\n }\n \n--- 124,130 ----\n \t\t\trel->size = compute_joinrel_size(cheapest);\n \t\t}\n \t\telse\n! \t\t\telog(WARN, \"non JoinPath called\");\n \t}\n }\n \n***************\n*** 135,141 ****\n * Returns the cheapest path.\n *\n */\n! 
Path *\n prune_rel_path(Rel *rel, Path *unorderedpath)\n {\n \tPath\t *cheapest = set_cheapest(rel, rel->pathlist);\n--- 139,145 ----\n * Returns the cheapest path.\n *\n */\n! Path\t *\n prune_rel_path(Rel *rel, Path *unorderedpath)\n {\n \tPath\t *cheapest = set_cheapest(rel, rel->pathlist);\n***************\n*** 166,172 ****\n *\n * Returns one pruned rel node list\n */\n! List *\n merge_joinrels(List *rel_list1, List *rel_list2)\n {\n \tList\t *xrel = NIL;\n--- 170,176 ----\n *\n * Returns one pruned rel node list\n */\n! List\t *\n merge_joinrels(List *rel_list1, List *rel_list2)\n {\n \tList\t *xrel = NIL;\n***************\n*** 192,226 ****\n *\n * Returns a new list of rel nodes\n */\n! List *\n prune_oldrels(List *old_rels)\n {\n \tRel\t\t *rel;\n \tList\t *joininfo_list,\n! \t\t\t *xjoininfo,\n! \t\t\t *i,\n! \t\t\t *temp_list = NIL;\n \n! \tforeach(i, old_rels)\n! \t{\n! \t\trel = (Rel *) lfirst(i);\n! \t\tjoininfo_list = rel->joininfo;\n \n! \t\tif (joininfo_list == NIL)\n! \t\t\ttemp_list = lcons(rel, temp_list);\n! \t\telse\n! \t\t{\n! \t\t\tforeach(xjoininfo, joininfo_list)\n! \t\t\t{\n! \t\t\t\tJInfo\t *joininfo = (JInfo *) lfirst(xjoininfo);\n \n! \t\t\t\tif (!joininfo->inactive)\n! \t\t\t\t{\n! \t\t\t\t\ttemp_list = lcons(rel, temp_list);\n! \t\t\t\t\tbreak;\n! \t\t\t\t}\n! \t\t\t}\n! \t\t}\n \t}\n! \treturn temp_list;\n }\n--- 196,222 ----\n *\n * Returns a new list of rel nodes\n */\n! List\t *\n prune_oldrels(List *old_rels)\n {\n \tRel\t\t *rel;\n \tList\t *joininfo_list,\n! \t\t\t *xjoininfo;\n \n! \tif (old_rels == NIL)\n! \t\treturn (NIL);\n \n! \trel = (Rel *) lfirst(old_rels);\n! \tjoininfo_list = rel->joininfo;\n! \tif (joininfo_list == NIL)\n! \t\treturn (lcons(rel, prune_oldrels(lnext(old_rels))));\n \n! \tforeach(xjoininfo, joininfo_list)\n! \t{\n! \t\tJInfo\t *joininfo = (JInfo *) lfirst(xjoininfo);\n! \n! \t\tif (!joininfo->inactive)\n! \t\t\treturn (lcons(rel, prune_oldrels(lnext(old_rels))));\n \t}\n! 
\treturn (prune_oldrels(lnext(old_rels)));\n }\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 13 Apr 1998 11:06:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": "\n> Here is the prune.c file from 6.2.1. Please try it and let me know if\n> it fixes the problem:\n\nNope, doesn't even compile ...\n\n---\n\ngcc -I../../../include -I../../../backend -I/usr/include/curses -O2 \n-Wall -Wmissing-prototypes -I../.. -c prune.c -o prune.o\nprune.c:39: conflicting types for `prune_joinrels'\n../../../include/optimizer/paths.h:95: previous declaration of\n`prune_joinrels' prune.c: In function `prune_rel_paths': prune.c:127:\n`WARN' undeclared (first use this function) prune.c:127: (Each\nundeclared identifier is reported only once prune.c:127: for each\nfunction it appears in.)\n\n--- \n\nCiao\n\nDas Boersenspielteam.\n\n---------------------------------------------------------------------------\n http://www.boersenspiel.de\n \t Das Boersenspiel im Internet\n *Realitaetsnah* *Kostenlos* *Ueber 6000 Spieler*\n---------------------------------------------------------------------------\n", "msg_date": "Mon, 13 Apr 1998 15:32:31 +0000", "msg_from": "\"Boersenspielteam\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": "Boersenspielteam wrote:\n> \n> No, but in the more general case of a simple join over two tables\n> with fields with an index declared on them.\n> \n> say: Select * from Trans, Spieler where\n> Spieler.spieler_nr=Trans.spieler_nr\n> \n> Uses indices in 6.2.1, doesn't use them in 6.3.1 (two seq scans).\n\nSorry, old mail from you is lost - what was execution plan in 6.2.1 ?\n\nIn current I see that\n\nHash Join 
(cost=5905.62 size=3343409 width=8) \n -> Seq Scan on trans (cost=3154.70 size=71112 width=4) \n -> Hash (cost=0.00 size=0 width=0) \n -> Seq Scan on kurse (cost=238.61 size=4958 width=4) \n\nIS FASTEST plan ! Result is returned in ~ 56 sec.\n\nNested Loop (cost=148934.30 size=3343409 width=8)\n -> Seq Scan on trans (cost=3154.70 size=71112 width=4)\n -> Index Scan on kurse (cost=2.05 size=4958 width=4)\n\nreturns result in ~ 80 sec.\n\nMerge Join (cost=7411.81 size=3343409 width=8)\n -> Index Scan on kurse (cost=337.90 size=4958 width=4)\n -> Index Scan on trans (cost=4563.60 size=71112 width=4)\n\nis SLOWEST plan (~200 sec).\n\nPlease don't think that using indices is the best way in all cases...\n\nBTW, you can use -fX _backend_ option to forbid some join methods - \nI used '-o -fh' to get MJ plan and '-o -fh -fm' to test NL plan.\n\nVadim\n", "msg_date": "Tue, 14 Apr 1998 15:28:39 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": "> \n> Boersenspielteam wrote:\n> > \n> > No, but in the more general case of a simple join over two tables\n> > with fields with an index declared on them.\n> > \n> > say: Select * from Trans, Spieler where\n> > Spieler.spieler_nr=Trans.spieler_nr\n> > \n> > Uses indices in 6.2.1, doesn't use them in 6.3.1 (two seq scans).\n> \n> Sorry, old mail from you is lost - what was execution plan in 6.2.1 ?\n> \n> In current I see that\n> \n> Hash Join (cost=5905.62 size=3343409 width=8) \n> -> Seq Scan on trans (cost=3154.70 size=71112 width=4) \n> -> Hash (cost=0.00 size=0 width=0) \n> -> Seq Scan on kurse (cost=238.61 size=4958 width=4) \n> \n> IS FASTEST plan ! Result is returned in ~ 56 sec.\n\nThis is very helpful, and what I suspected.\n\nTwo issues. First, I have heard reports that the optimizer in 6.3.2 is\nbetter than 6.2.1, where indexes are used in 6.3.2 that were not used in\n6.2.1. 
In your case, you are seeing the opposite, but that may be OK\ntoo. \n\nSecond, using an index to join two large tables is not always a good\nthing.  The index can be scanned quickly, but it must find the heap for\nevery index entry, and that can cause the system to scan all over the\nheap getting pages.  Sometimes, it is better to just scan through the\nheap, and make your own hash index, which is the plan that is being\nused.\n\n> Nested Loop  (cost=148934.30 size=3343409 width=8)\n>   ->  Seq Scan on trans  (cost=3154.70 size=71112 width=4)\n>   ->  Index Scan on kurse  (cost=2.05 size=4958 width=4)\n> \n> returns result in ~ 80 sec.\n> \n> Merge Join  (cost=7411.81 size=3343409 width=8)\n>   ->  Index Scan on kurse  (cost=337.90 size=4958 width=4)\n>   ->  Index Scan on trans  (cost=4563.60 size=71112 width=4)\n> \n> is SLOWEST plan (~200 sec).\n> \n> Please don't think that using indices is the best way in all cases...\n> \n> BTW, you can use -fX _backend_ option to forbid some join methods - \n> I used '-o -fh' to get MJ plan and '-o -fh -fm' to test NL plan.\n\nThis is also very helpful.  I had forgotten these options existed.\n\nHopefully we don't have a bug here.\n\n-- \nBruce Momjian                          |  830 Blythe Avenue\[email protected]              |  Drexel Hill, Pennsylvania 19026\n  +  If your life is a hard drive,     |  (610) 353-9879(w)\n  +  Christ can be your backup.        |  (610) 853-3000(h)\n
| (610) 853-3000(h)\n", "msg_date": "Tue, 14 Apr 1998 09:52:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" }, { "msg_contents": "Hi,\n\nthen I think this one is solved.\n\nI'll try to reproduce it on my machine, if I get the same results, I \nwill be a quiet and happy Postgres user again ;-)\n\nI don't have the message that originated this thread, but is the \nslow subselect from Tatsuo fixed?\n\nTatsuo, can you test queries with the backend options suggested by \nVadim?\n\n> In current I see that\n> \n> Hash Join  (cost=5905.62 size=3343409 width=8) \n>   ->  Seq Scan on trans  (cost=3154.70 size=71112 width=4) \n>   ->  Hash  (cost=0.00 size=0 width=0) \n>         ->  Seq Scan on kurse  (cost=238.61 size=4958 width=4) \n> \n> IS FASTEST plan ! Result is returned in ~ 56 sec.\n> \n> Nested Loop  (cost=148934.30 size=3343409 width=8)\n>   ->  Seq Scan on trans  (cost=3154.70 size=71112 width=4)\n>   ->  Index Scan on kurse  (cost=2.05 size=4958 width=4)\n> \n> returns result in ~ 80 sec.\n> \n> Merge Join  (cost=7411.81 size=3343409 width=8)\n>   ->  Index Scan on kurse  (cost=337.90 size=4958 width=4)\n>   ->  Index Scan on trans  (cost=4563.60 size=71112 width=4)\n> \n> is SLOWEST plan (~200 sec).\n> \n> Please don't think that using indices is the best way in all cases...\n> \n> BTW, you can use -fX _backend_ option to forbid some join methods - \n> I used '-o -fh' to get MJ plan and '-o -fh -fm' to test NL plan.\n> \n> Vadim\n> \n\nCiao\n\nDas Boersenspielteam.\n\n---------------------------------------------------------------------------\n                      http://www.boersenspiel.de\n \t                Das Boersenspiel im Internet\n            *Realitaetsnah*  *Kostenlos*  *Ueber 6000 Spieler*\n---------------------------------------------------------------------------\n", "msg_date": "Tue, 14 Apr 1998 14:22:19 +0000", "msg_from": "\"Boersenspielteam\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and 
optimizer" }, { "msg_contents": ">then I think this one is solved.\n>\n>I'll try to reproduce it on my machine, if I get the same results, I \n>will be a quiet and happy Postgres user again ;-)\n>\n>I don't have the message, that originated this thread, but is the \n>slow subselect from Tatsuo fixed?\n>\n>Tatsua, can you test queries with the abckend options suggested by \n>Vadim?\n\nI have tested with 6.3.2 beta. (Sorry test data is not same as my\noriginal posting) Here are results:\n\n\"postal\" table holds ~110k records. \"prefecture\" table has 47 records.\nquery is as follows:\n\nselect * from prefecture,postal where prefecture.pid = postal.pid and\npostal.town in (select town from postal where newcode = '1040061');\n\nAll of columns that appear above have btree index.\n\nNo options to backend produced a nested loop plan.\n\nNested Loop (cost=98.90 size=11888 width=92)\n -> Seq Scan on prefecture (cost=2.55 size=47 width=26)\n -> Index Scan on postal (cost=2.05 size=11888 width=66)\n SubPlan\n -> Index Scan on postal (cost=2.05 size=2 width=12)\n\n> 26.78 real 22.35 user 0.58 sys\n\nNext I gave -fn to the backend.\n\nHash Join (cost=6246.48 size=11888 width=92)\n -> Seq Scan on postal (cost=5842.97 size=11888 width=66)\n SubPlan\n -> Index Scan on postal (cost=2.05 size=2 width=12)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on prefecture (cost=2.55 size=47 width=26)\n\n> 24.97 real 21.30 user 0.50 sys\n\nFinally I tried merge join.\n\nMerge Join (cost=8580.86 size=11888 width=92)\n -> Seq Scan (cost=2.55 size=0 width=0)\n -> Sort (cost=2.55 size=0 width=0)\n -> Seq Scan on prefecture (cost=2.55 size=47 width=26)\n -> Index Scan on postal (cost=8181.90 size=11888 width=66)\n SubPlan\n -> Index Scan on postal (cost=2.05 size=2 width=12)\n\n>> In current I see that\n>> \n>> Hash Join (cost=5905.62 size=3343409 width=8) \n>> -> Seq Scan on trans (cost=3154.70 size=71112 width=4) \n>> -> Hash (cost=0.00 size=0 width=0) \n>> -> Seq Scan on kurse 
(cost=238.61 size=4958 width=4) \n> 25.63 real 22.13 user 0.51 sys\n\nSo in my case Merge Join was the fastest, Hash Join and Nested Loop\n(PostgreSQL decides this was the best) were almost same.\nI tried for several times and the tendency seemed not changed.\nAnyway the differences were not so big.\n--\nTatsuo Ishii\[email protected]\n", "msg_date": "Wed, 15 Apr 1998 17:35:21 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect and optimizer " } ]
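The context diff that opens this thread swaps prune_oldrels() from the 6.3 iterative foreach loop back to the 6.2.1 recursive formulation. A standalone sketch of the two shapes over a hypothetical singly linked Node list (an illustration only, not the backend's actual List/Rel/JInfo types):

```c
#include <stdlib.h>
#include <assert.h>

/* Hypothetical stand-in for the backend's cons-cell lists. */
typedef struct Node {
    int keep;               /* stands in for "has an active JInfo" */
    struct Node *next;
} Node;

static Node *node_cons(int keep, Node *tail)
{
    Node *n = (Node *) malloc(sizeof(Node));
    n->keep = keep;
    n->next = tail;
    return n;
}

/* Shape of the 6.2.1 code: recurse down the list, consing kept nodes
 * onto the recursive result, which preserves the input order. */
static Node *prune_recursive(Node *old)
{
    if (old == NULL)
        return NULL;
    if (old->keep)
        return node_cons(old->keep, prune_recursive(old->next));
    return prune_recursive(old->next);
}

/* Shape of the 6.3 code: one pass with an explicit accumulator;
 * lcons-style prepending returns the kept nodes in reverse order. */
static Node *prune_iterative(Node *old)
{
    Node *result = NULL;

    for (; old != NULL; old = old->next)
        if (old->keep)
            result = node_cons(old->keep, result);
    return result;
}

static int list_length(Node *l)
{
    int n = 0;

    for (; l != NULL; l = l->next)
        n++;
    return n;
}
```

Both shapes keep the same set of nodes; one visible difference in this sketch is that the loop's prepend reverses the result order, while the recursive version preserves it.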
[ { "msg_contents": "hi, All!\nJust installed PG 6.3.1 -- really great job, thank you guys!\n\nBut this morning I decided to play a bit with aggregate functions on\nupdate and got a bit strange (for me, at least :-) result.\nHere is an example of what I did:\n========================================\nLet's create two simple tables\n create table a (name text, sum float); \n create table b (name text, val float);\n\n--and then populate them with rows\n\n insert into a values ('bob', 0.0);\n insert into a values ('john', 0.0 );\n insert into a values ('mike', 0.0);\n\n insert into b values ('bob', 1.0);\n insert into b values ('bob', 2.0); \n insert into b values ('bob', 3.0);\n insert into b values ('john', 4.0);\n insert into b values ('john', 5.0);\n insert into b values ('john', 6.0);\n insert into b values ('mike', 670);\n insert into b values ('mike', 8.0); \n insert into b values ('mike', 9.0);\n\n--now I want to update \"sum\" fields of table a in a way they will contain\n--sums of field \"val\" from table b grouped by name\n--and use for this the following query:\nupdate a set sum=sum(b.val) where name=b.name ;\n--Now \n select * from a;\n-- gives me:\nname|sum\n----+---\njohn|  0\nmike|  0\nbob |708\n(3 rows)\n\n===================\nNow I'm wondering if there is a real problem in PostgreSQL or my\nmisunderstanding of something important in SQL.\n\nI'm running Linux-2.0.30(Slackware) and gcc-2.7.2.3\n\nThank you, \nAleksey.\n\n", "msg_date": "Fri, 10 Apr 1998 13:31:02 +0300 (IDT)", "msg_from": "Aleksey Dashevsky <[email protected]>", "msg_from_op": true, "msg_subject": "Agregates in update?" 
}, { "msg_contents": "Seems there's a bug using ESCAPE character (\\) on LIKE clause:\n\nprova=> create table tmp ( a text);\nCREATE\nprova=> insert into tmp values('\\\\');\nINSERT 178729 1\nprova=> select * from tmp where a = '\\\\';\na\n--\n\\\\\n(1 row)\n\nprova=> select * from tmp where a like '%\\\\%';\na\n-\n(0 rows)\n\nprova=> select * from tmp where a like '%\\\\\\\\%';\na\n--\n\\\\\n(1 row)\n\n-- how many \\ do I have to use? 1, 2, 3, 4 or 5 ???\n\nprova=> select * from tmp where a like '%\\\\\\\\\\%';\na\n--\n\\\\\n(1 row)\n Jose'\n\n", "msg_date": "Thu, 16 Apr 1998 11:10:30 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "escape character \\" }, { "msg_contents": "Added to TODO list.\n> \n> hi, All!\n> Just installed PG 6.3.1 -- really great job, thank you guys!\n> \n> But this morning I decided to play a bit with aggregate functions on\n> update and got a bit strange(for me, at least :-) result.\n> Here is an exmaple of what I did:\n> ========================================\n> Let's create two simple tables\n> create table a (name text sum float); \n> create table b (name text ,val float);\n> \n> --and then populate them with rows\n> \n> insert into a values ('bob', 0.0);\n> insert into a values ('john', 0.0 );\n> insert into a values ('mike', 0.0);\n> \n> insert into b values ('bob', 1.0);\n> insert into b values ('bob', 2.0); \n> insert into b values ('bob', 3.0);\n> insert into b values ('john', 4.0);\n> insert into b values ('john', 5.0);\n> insert into b values ('john', 6.0);\n> insert into b values ('mike', 670);\n> insert into b values ('mike', 8.0); \n> insert into b values ('mike', 9.0);\n> \n> --now I want to update \"sum\" fields of table a in a way they will conatain\n> --sums of field \"val\" from table b groupped by name\n> --and use for this following query:\n> update a set sum=sum(b.val) where name=b.name ;\n> --Now \n> select * from a;\n> -- gives me:\n> name|sum\n> 
----+---\n> john|  0\n> mike|  0\n> bob |708\n> (3 rows)\n> \n> ===================\n> Now I'm wondering if there is reall problem in PostgreSQL or my\n> misundersanding of something important in SQL.\n> \n> I'm running Linux-2.0.30(Slackware) and gcc-2.7.2.3\n> \n> Thank you, \n> Aleksey.\n> \n> \n> \n\n\n-- \nBruce Momjian                          |  830 Blythe Avenue\[email protected]              |  Drexel Hill, Pennsylvania 19026\n  +  If your life is a hard drive,     |  (610) 353-9879(w)\n  +  Christ can be your backup.        |  (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 23:21:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Agregates in update?" }, { "msg_contents": "> \n> Seems there's a bug using ESCAPE character (\\) on LIKE clause:\n> \n> prova=> create table tmp ( a text);\n> CREATE\n> prova=> insert into tmp values('\\\\');\n> INSERT 178729 1\n> prova=> select * from tmp where a = '\\\\';\n> a\n> --\n> \\\\\n> (1 row)\n> \n> prova=> select * from tmp where a like '%\\\\%';\n> a\n> -\n> (0 rows)\n> \n> prova=> select * from tmp where a like '%\\\\\\\\%';\n> a\n> --\n> \\\\\n> (1 row)\n> \n> -- how many \\ do I have to use? 1, 2, 3, 4 or 5 ???\n> \n> prova=> select * from tmp where a like '%\\\\\\\\\\%';\n> a\n> --\n> \\\\\n> (1 row)\n> Jose'\n\nThe problem is that \\\\ is needed to input a backslash, and we support\n\\ to escape special characters like %, so \\\\\\\\ is needed to test for a\nbackslash in a LIKE.  Is this not standard?  I suppose not.  Should we\nremove the special use of \\ in LIKE?  Comments?\n\n\n-- \nBruce Momjian                          |  830 Blythe Avenue\[email protected]              |  Drexel Hill, Pennsylvania 19026\n  +  If your life is a hard drive,     |  (610) 353-9879(w)\n  +  Christ can be your backup.        
| (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 23:34:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] escape character \\" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Added to TODO list.\n> > update a set sum=sum(b.val) where name=b.name ;\n\nIs this in standards ???\nI thought that subselects should be used in such cases...\nAnd this is one of my plans for 6.4...\n\nVadim\n", "msg_date": "Mon, 27 Apr 1998 15:26:10 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Agregates in update?" }, { "msg_contents": "Vadim wrote: \n> Bruce Momjian wrote:\n> > \n> > Added to TODO list.\n> > > update a set sum=sum(b.val) where name=b.name ;\n> \n> Is this in standards ???\n> I thought that subselects should be used in such cases...\n> And this is one of my plans for 6.4...\n> \n> Vadim\n\nI tried this with Illustra:\n\n\tcreate table a (name text, sum float); \n\tcreate table b (name text, val float);\n \n\t--and then populate them with rows\n\tinsert into a values ('bob', 0.0);\n\t...\n\tinsert into b values ('mike', 9.0);\n\t\n\t--now I want to update \"sum\" fields of table a in a way they will\n --conatain sums of field \"val\" from table b groupped by name\n\t--and use for this following query:\n\tupdate a set sum=sum(b.val) where name=b.name ;\n\tXL0002:schema b does not exist\n\t\n\nThe problem of course is that the query\n\n update a set sum=sum(b.val) where name=b.name;\n\nis as Vadim points out, not valid SQL. 
Probably we should return an error.\nI am not especially thrilled with the message above about schemas, but I can\nsee how it got there as the parser tried to find something (in the absence of\na from list) to give meaning to the term 'b.*'.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"(Windows NT) version 5.0 will build on a proven system architecture\n and incorporate tens of thousands of bug fixes from version 4.0.\"\n -- <http://www.microsoft.com/y2k.asp?A=7&B=5>\n\n\n", "msg_date": "Mon, 27 Apr 1998 01:49:10 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Agregates in update?" }, { "msg_contents": "On Sun, 26 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > Seems there's a bug using ESCAPE character (\\) on LIKE clause:\n> > \n> > prova=> create table tmp ( a text);\n> > CREATE\n> > prova=> insert into tmp values('\\\\');\n> > INSERT 178729 1\n> > prova=> select * from tmp where a = '\\\\';\n> > a\n> > --\n> > \\\\\n> > (1 row)\n> > \n> > prova=> select * from tmp where a like '%\\\\%';\n> > a\n> > -\n> > (0 rows)\n> > \n> > prova=> select * from tmp where a like '%\\\\\\\\%';\n> > a\n> > --\n> > \\\\\n> > (1 row)\n> > \n> > -- how many \\ do I have to use? 1, 2, 3, 4 or 5 ???\n> > \n> > prova=> select * from tmp where a like '%\\\\\\\\\\%';\n> > a\n> > --\n> > \\\\\n> > (1 row)\n> > Jose'\n> \n> The problem is that that \\\\ is need to input a backslash, and we support\n> \\ to escape special characters like %, so \\\\\\\\ is need to test for a\n> backslash in a LIKE. Is this not standard? I suppose not.\n\nThe LIKE standard SQL92 has the keyword ESCAPE to specify a character\nas escape, like this:\n\n SELECT * FROM my_table WHERE my_col LIKE '#_pluto' ESCAPE '#';\n\n> Should we remove the special use of \\ in LIKE? 
Comments?\n\nObviously we need a character escape (backslash or other) to escape\n_ and/or %, but before removing the use of backslashes we need to have the\nSQL92 LIKE syntax.\n                                                                 Jose'\n\n", "msg_date": "Mon, 27 Apr 1998 10:01:40 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] escape character \\" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > Added to TODO list.\n> > > update a set sum=sum(b.val) where name=b.name ;\n> \n> Is this in standards ???\n> I thought that subselects should be used in such cases...\n> And this is one of my plans for 6.4...\n\nNo, not standard, but we either need to disallow it, or make it work properly.\n\n-- \nBruce Momjian                          |  830 Blythe Avenue\[email protected]              |  Drexel Hill, Pennsylvania 19026\n  +  If your life is a hard drive,     |  (610) 353-9879(w)\n  +  Christ can be your backup.        |  (610) 853-3000(h)\n", "msg_date": "Mon, 27 Apr 1998 09:25:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Agregates in update?" } ]
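The escape-doubling Jose' ran into in the LIKE thread above is easy to reproduce outside the backend: one level of backslash processing happens in the string-literal parser, and a second inside the pattern matcher itself. A minimal LIKE-style matcher in C (a hypothetical `like_match()` for illustration; this is NOT PostgreSQL's actual implementation):

```c
#include <stdbool.h>
#include <assert.h>

/* Minimal LIKE-style matcher: '%' matches any run of characters, '_'
 * matches any single character, and '\\' makes the next pattern
 * character literal.  Illustration only. */
static bool like_match(const char *s, const char *p)
{
    for (; *p != '\0'; p++)
    {
        if (*p == '%')
        {
            /* try every suffix of s against the rest of the pattern */
            for (;; s++)
            {
                if (like_match(s, p + 1))
                    return true;
                if (*s == '\0')
                    return false;
            }
        }
        else if (*p == '\\')
        {
            p++;                    /* escaped: next pattern char is literal */
            if (*p == '\0' || *s++ != *p)
                return false;
        }
        else if (*p == '_')
        {
            if (*s++ == '\0')
                return false;
        }
        else if (*s++ != *p)
            return false;
    }
    return *s == '\0';
}
```

With the stored value being a single backslash, the pattern the matcher must see is `%\\%` (a backslash-escaped backslash between wildcards), and since the string-literal parser also halves backslashes, that takes four backslashes in the SQL literal, matching Jose's observation.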
[ { "msg_contents": "\nMorning everyone...\n\n\tAfter upgrading the server to -current, which is meant for\nrelease on April 15th, I'm still receiving a palloc() failure, as shown\nbelow.\n\n\tThe server is a 486DX with 64Meg of RAM, and the table that is\nbeing searched is *only* 5k records large...and the query below will\nreturn *only* 1 of those records...\n\nApr 10 01:59:47 clio radiusd[22585]: query failed: select uniq_id from \\\n\tradlog where uniq_id='237286618' and stop_time=0 and \\ \n\tterm_server='isdn-1.trends.ca';\nApr 10 01:59:47 clio radiusd[22585]: query failed: FATAL 1: palloc \\ \n\tfailure: memory exhausted\n\n\n\tAnd, I'm still getting a lot of:\n\nFATAL 1: btree: BTP_CHAIN flag was expected\nFATAL 1: btree: BTP_CHAIN flag was expected\nFATAL 1: btree: BTP_CHAIN flag was expected\nFATAL 1: btree: BTP_CHAIN flag was expected\nFATAL 1: btree: BTP_CHAIN flag was expected\nFATAL 1: btree: BTP_CHAIN flag was expected\nFATAL 1: btree: BTP_CHAIN flag was expected\nFATAL 1: btree: BTP_CHAIN flag was expected\nFATAL 1: btree: BTP_CHAIN flag was expected\n\n\n", "msg_date": "Fri, 10 Apr 1998 13:20:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "palloc failures...still..." 
}, { "msg_contents": "> Apr 10 01:59:47 clio radiusd[22585]: query failed: select uniq_id from \\\n> \tradlog where uniq_id='237286618' and stop_time=0 and \\ \n> \tterm_server='isdn-1.trends.ca';\n> Apr 10 01:59:47 clio radiusd[22585]: query failed: FATAL 1: palloc \\ \n> \tfailure: memory exhausted\n> \n> \n> \tAnd, I'm still getting alot of:\n> \n> FATAL 1: btree: BTP_CHAIN flag was expected\n> FATAL 1: btree: BTP_CHAIN flag was expected\n> FATAL 1: btree: BTP_CHAIN flag was expected\n> FATAL 1: btree: BTP_CHAIN flag was expected\n> FATAL 1: btree: BTP_CHAIN flag was expected\n> FATAL 1: btree: BTP_CHAIN flag was expected\n> FATAL 1: btree: BTP_CHAIN flag was expected\n> FATAL 1: btree: BTP_CHAIN flag was expected\n> FATAL 1: btree: BTP_CHAIN flag was expected\n> \n\nMan, Marc, you get the strangest errors. Time for dump/restore to clean\nup whatever strangeness you have in that database.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 10 Apr 1998 13:31:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] palloc failures...still..." 
}, { "msg_contents": "On Fri, 10 Apr 1998, Bruce Momjian wrote:\n\n> > Apr 10 01:59:47 clio radiusd[22585]: query failed: select uniq_id from \\\n> > \tradlog where uniq_id='237286618' and stop_time=0 and \\ \n> > \tterm_server='isdn-1.trends.ca';\n> > Apr 10 01:59:47 clio radiusd[22585]: query failed: FATAL 1: palloc \\ \n> > \tfailure: memory exhausted\n> > \n> > \n> > \tAnd, I'm still getting alot of:\n> > \n> > FATAL 1: btree: BTP_CHAIN flag was expected\n> > FATAL 1: btree: BTP_CHAIN flag was expected\n> > FATAL 1: btree: BTP_CHAIN flag was expected\n> > FATAL 1: btree: BTP_CHAIN flag was expected\n> > FATAL 1: btree: BTP_CHAIN flag was expected\n> > FATAL 1: btree: BTP_CHAIN flag was expected\n> > FATAL 1: btree: BTP_CHAIN flag was expected\n> > FATAL 1: btree: BTP_CHAIN flag was expected\n> > FATAL 1: btree: BTP_CHAIN flag was expected\n> > \n> \n> Man, Marc, you get the strangest errors.  Time for dump/restore to clean\n> up whatever strangeness you have in that database.\n\n\tthat is easier said than done :(  the last time I did do that, but\nthe problem crept back again...but even ignoring the BTP_CHAIN error,\nwhat about the palloc failure still?\n\n\tGuess I just found my \"release trigger\"...this database working\nproperly :)\n\n\nMarc G. Fournier                                 \nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 10 Apr 1998 14:59:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] palloc failures...still..." } ]
[ { "msg_contents": "\nHi...\n\n\tI want to add the filename (index) to the error message:\n\nFATAL 1: btree: BTP_CHAIN flag was expected (access = 0)\n\n\tAt least then I can figure out which index to drop and rebuild,\ninstead of having to do them all :)\n\n\tNow, looking at _bt_moveright(), it passes Relation, which, as\npart of its structure, has 'rd_fd', which I'm assuming is the open file\ndescriptor for the index file its doing its search on...\n\n\tIs there a method of taking rd_fd and figuring out the file name\nit is associated with? I looked at fstat(), but that only appears to\nreturn everything but the filename :(\n\n\tIdeas...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 10 Apr 1998 17:10:27 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "reverse file descriptor to a file name..." }, { "msg_contents": "> \n> \n> Hi...\n> \n> \tI want to add the filename (index) to the error message:\n> \n> FATAL 1: btree: BTP_CHAIN flag was expected (access = 0)\n> \n> \tAt least then I can figure out which index to drop and rebuild,\n> instead of having to do them all :)\n> \n> \tNow, looking at _bt_moveright(), it passes Relation, which, as\n> part of its structure, has 'rd_fd', which I'm assuming is the open file\n> descriptor for the index file its doing its search on...\n> \n> \tIs there a method of taking rd_fd and figuring out the file name\n> it is associated with? I looked at fstat(), but that only appears to\n> return everything but the filename :(\n\nYou can't get a file name from a descriptor, because there can be more\nthan one. 
The system just returns information about the inode, not\nabout the directory entries pointing to the inode.\n\nFor your purpose, you want:\n\n\tRelation->rd_rel->relname\n\nWorks like champ.\n\nThis is not a trivial question, because the structures in PostgreSQL are\nvery complicated until you get used to them.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 10 Apr 1998 17:07:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] reverse file descriptor to a file name..." }, { "msg_contents": "On Fri, 10 Apr 1998, Bruce Momjian wrote:\n\n> For your purpose, you want:\n> \n> \tRelation->rd_rel->relname\n> \n> Works like champ.\n> \n> This is not a trivial question, because the structures in PostgreSQL are\n> very complicated until you get used to them.\n\n\tDamn...I knew I should have looked deeper :( \n\nThanks...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 10 Apr 1998 19:03:07 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] reverse file descriptor to a file name..." }, { "msg_contents": "> You can't get a file name from a descriptor, because there can be more\n> than one. The system just returns information about the inode, not\n> about the directory entries pointing to the inode.\n> \n> For your purpose, you want:\n> \n> \tRelation->rd_rel->relname\n> \n> Works like champ.\n> \n> This is not a trivial question, because the structures in PostgreSQL are\n> very complicated until you get used to them.\n\nSpeaking of this, I always considered this comment in\noptimizer/plan/planner.c to be a classic:\n\n /* reach right in there, why don't you? 
*/\n if (tletype != reln->rd_att->attrs[i - 1]->atttypid)\n elog(ERROR, \"function declared to return type %s does not retrieve (%s.all)\", typeTypeName(typ), typeTypeName(typ));\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 10 Apr 1998 18:57:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] reverse file descriptor to a file name..." }, { "msg_contents": "On Fri, 10 Apr 1998, The Hermit Hacker wrote:\n\n> On Fri, 10 Apr 1998, Bruce Momjian wrote:\n> \n> > For your purpose, you want:\n> > \n> > \tRelation->rd_rel->relname\n> > \n> > Works like champ.\n> > \n> > This is not a trivial question, because the structures in PostgreSQL are\n> > very complicated until you get used to them.\n> \n> \tDamn...I knew I should have looked deeper :( \n\n\tOops...I must be using that wrong...I'm getting:\n\nNOTICE: Message from PostgreSQL backend: The Postmaster has informed me\n\t that some other backend died abnormally and possibly corrupted\n\t shared memory. I have rolled back the current transaction and am\n\t going to terminate your database system connection and exit. \n\t Please reconnect to the database system and repeat your query.\n\n\n\trelname is of type NameData ... NameData is a struct, so shouldn't\nit be:\n\n\tRelation->rd_rel->relname->data\n\n\tBut, NameData only has one component, data...why make it a struct?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 10 Apr 1998 20:23:11 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] reverse file descriptor to a file name..." 
}, { "msg_contents": "> \n> \tOops...I must be using that wrong...I'm getting:\n> \n> NOTICE:  Message from PostgreSQL backend: The Postmaster has informed me\n> \t  that some other backend died abnormally and possibly corrupted\n> \t  shared memory. I have rolled back the current transaction and am\n> \t  going to terminate your database system connection and exit. \n> \t  Please reconnect to the database system and repeat your query.\n> \n> \n> \trelname is of type NameData ... NameData is a struct, so shouldn't\n> it be:\n> \n> \tRelation->rd_rel->relname->data\n> \n> \tBut, NameData only has one component, data...why make it a struct?\n\nYep, you are right.  They make it a struct so they can pass it around\nby-value, rather than the normal pointer by-reference for normal arrays.\n\nPerhaps I should do a layout for the PostgreSQL data structures as I\nhave done a flowchart for the backend program files.\n\n-- \nBruce Momjian                          |  830 Blythe Avenue\[email protected]              |  Drexel Hill, Pennsylvania 19026\n  +  If your life is a hard drive,     |  (610) 353-9879(w)\n  +  Christ can be your backup.        |  (610) 853-3000(h)\n", "msg_date": "Fri, 10 Apr 1998 21:58:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] reverse file descriptor to a file name..." }, { "msg_contents": "On Fri, 10 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > \tOops...I must be using that wrong...I'm getting:\n> > \n> > NOTICE:  Message from PostgreSQL backend: The Postmaster has informed me\n> > \t  that some other backend died abnormally and possibly corrupted\n> > \t  shared memory. I have rolled back the current transaction and am\n> > \t  going to terminate your database system connection and exit. \n> > \t  Please reconnect to the database system and repeat your query.\n> > \n> > \n> > \trelname is of type NameData ... 
NameData is a struct, so shouldn't\n> > it be:\n> > \n> > \tRelation->rd_rel->relname->data\n> > \n> > \tBut, NameData only has one component, data...why make it a struct?\n> \n> Yep, you are right. They make is a struct so they can pass it around\n> by-value, rather than the normal pointer by-reference for normal arrays.\n\n\tWell, that would explain *that* problem :) \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 11 Apr 1998 06:11:41 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] reverse file descriptor to a file name..." }, { "msg_contents": "On Fri, 10 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > \tOops...I must be using that wrong...I'm getting:\n> > \n> > NOTICE: Message from PostgreSQL backend: The Postmaster has informed me\n> > \t that some other backend died abnormally and possibly corrupted\n> > \t shared memory. I have rolled back the current transaction and am\n> > \t going to terminate your database system connection and exit. \n> > \t Please reconnect to the database system and repeat your query.\n> > \n> > \n> > \trelname is of type NameData ... NameData is a struct, so shouldn't\n> > it be:\n> > \n> > \tRelation->rd_rel->relname->data\n> > \n> > \tBut, NameData only has one component, data...why make it a struct?\n> \n> Yep, you are right. They make is a struct so they can pass it around\n> by-value, rather than the normal pointer by-reference for normal arrays.\n\nOdd...rel->rd_rel->relname->data produces:\n\nnbtsearch.c: In function `_bt_moveright':\nnbtsearch.c:223: invalid type argument of `->'\n\nBut...rel->rd_rel->relname.data works.\n\n\tNow, I really really hate pointers to start with...always\nhave...but can someone confirm which is (should be) right? :(\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 11 Apr 1998 06:15:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] reverse file descriptor to a file name..." }, { "msg_contents": "> Odd...rel->rd_rel->relname->data produces:\n> \n> nbtsearch.c: In function `_bt_moveright':\n> nbtsearch.c:223: invalid type argument of `->'\n> \n> But...rel->rd_rel->relname.data works.\n> \n> \tNow, I really really hate pointers to start with...always\n> have...but can someone confirm which is (should be) right? :(\n\nYep, that is right. It takes a little trial-and-error to check the type\nof each structure member, and reference it accordingly.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 11 Apr 1998 19:03:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] reverse file descriptor to a file name..." } ]
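The `.` versus `->` confusion resolved in the thread above comes down to relname being an embedded struct rather than a pointer. A simplified sketch with hypothetical stand-in types (the real definitions live in the PostgreSQL backend headers and carry many more fields):

```c
#include <string.h>
#include <assert.h>

/* Hypothetical, simplified stand-ins for the backend structures
 * discussed in this thread -- NOT the real header definitions. */
typedef struct NameData {
    char data[32];              /* fixed-size, so it can be passed by value */
} NameData;

typedef struct FormData_pg_class {
    NameData relname;           /* embedded struct, NOT a pointer */
} FormData_pg_class;

typedef struct RelationData {
    int rd_fd;                  /* open file descriptor; no name stored here */
    FormData_pg_class *rd_rel;  /* a pointer, hence the arrow */
} RelationData, *Relation;

/* rd_rel is reached with '->' because it is a pointer; relname is an
 * embedded member of what it points to, so '.' is required there. */
static const char *relation_name(Relation rel)
{
    return rel->rd_rel->relname.data;
}
```

Because rd_rel is a pointer but relname is a by-value member, the working spelling is `rel->rd_rel->relname.data`, as Marc found; `relname->data` fails to compile exactly as his gcc error showed.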
[ { "msg_contents": "I finally found some time to make minor patch to pgsql 6.3.1 to\nmake it \"sort-of-work\" out of the box for Linux/Alpha. At the current\nmoment, I can get it to compile, and run 'initdb' successfully. Regression\ntests are successful for the most part, with the biggest problems being in\nany dealing with floats, especially the float{4,...} tests. Overall, it\nlooks as things are improving! :)\n\tThe patches are for the two modified files only, as the changes\nwere small. Mainly just replacing 'defined(linuxalpha)' with something\nthat is understood '( defined(linux) && defined(__alpha__) )' by the\ncompiler correctly, since linuxalpha was not getting defined anywhere, and\nthe Linux/Alpha gcc does not generate the linuxalpha symbol itself.\nAppears to have been a slight oversight by some one who was adding\nLinux/Alpha support to the code. This shouldn't break any other platforms,\nwith as small and simple a change as it is. Hopefully it can make it into\n6.3.2? :)\n\tAlso, what is the purpose of the files in\n./src/backend/ports/linuxalpha? I can't find any reference to them\nanywhere else in the sources, and it does not appears they are even\nincluded in the final binary. The files themselves are pretty sparse.\nAlso, if I understand the configure scripts correctly, only a\n./src/backend/port/linux directory would be used, as linuxalpha is\nconsidered a subset of linux. Of course the latter directory existed in\n6.2.x, but is now gone. I think that the former directory can follow,\ni.e. be removed as well.\n\tThats all for now! As usual, any questions about these\npatches, feel free to email me. 
TTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------", "msg_date": "Fri, 10 Apr 1998 18:04:59 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Linux/Alpha and pgsql...." }, { "msg_contents": "On Fri, 10 Apr 1998, Ryan Kirkpatrick wrote:\n\n> \n> \tI finally found some time to make minor patch to pgsql 6.3.1 to\n> make it \"sort-of-work\" out of the box for Linux/Alpha. At the current\n> moment, I can get it to compile, and run 'initdb' successfully. Regression\n> tests are successful for the most part, with the biggest problems being in\n> any dealing with floats, especially the float{4,...} tests. Overall, it\n> looks as things are improving! :)\n\n\tCan you grab the latest snapshot, which is going to be v6.3.2, and\nconfirm/tweak your patch with that? Its meant for release on April 15th,\nas long as my personal 'show stoppers' are no longer a problem :)\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 10 Apr 1998 21:02:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." }, { "msg_contents": "Marc, would you install these as appropriate. 
Also, you will find that\nI added to templates/linuxalpha the line:\n\n\tlinuxalpha:CFLAGS:-O2 -Dalpha\n ^^^^^^^\n\nso the patches he supplied adding __alpha__ are not needed anymore.\n\nThis addition fixed alpha support, so it should fix alpha-linux too.\n\n\n> \tI finally found some time to make minor patch to pgsql 6.3.1 to\n> make it \"sort-of-work\" out of the box for Linux/Alpha. At the current\n> moment, I can get it to compile, and run 'initdb' successfully. Regression\n> tests are successful for the most part, with the biggest problems being in\n> any dealing with floats, especially the float{4,...} tests. Overall, it\n> looks as things are improving! :)\n> \tThe patches are for the two modified files only, as the changes\n> were small. Mainly just replacing 'defined(linuxalpha)' with something\n> that is understood '( defined(linux) && defined(__alpha__) )' by the\n> compiler correctly, since linuxalpha was not getting defined anywhere, and\n> the Linux/Alpha gcc does not generate the linuxalpha symbol itself.\n> Appears to have been a slight oversight by some one who was adding\n> Linux/Alpha support to the code. This shouldn't break any other platforms,\n> with as small and simple a change as it is. Hopefully it can make it into\n> 6.3.2? :)\n> \tAlso, what is the purpose of the files in\n> ./src/backend/ports/linuxalpha? I can't find any reference to them\n> anywhere else in the sources, and it does not appears they are even\n> included in the final binary. The files themselves are pretty sparse.\n> Also, if I understand the configure scripts correctly, only a\n> ./src/backend/port/linux directory would be used, as linuxalpha is\n> considered a subset of linux. Of course the latter directory existed in\n> 6.2.x, but is now gone. I think that the former directory can follow,\n> i.e. be removed as well.\n> \tThats all for now! As usual, any questions about these\n> patches, feel free to email me. 
TTYL.\n> \n> ----------------------------------------------------------------------------\n> | \"For to me to live is Christ, and to die is gain.\" |\n> | --- Philippians 1:21 (KJV) |\n> ----------------------------------------------------------------------------\n> | Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n> ----------------------------------------------------------------------------\n> | http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n> ----------------------------------------------------------------------------\n> \n> --8323328-875907394-892249499=:6356\n> Content-Type: TEXT/PLAIN; charset=US-ASCII; name=\"pgsql-6.2.1.alpha.patch.4\"\n> Content-Transfer-Encoding: BASE64\n> Content-ID: <Pine.LNX.3.95.980410180459.6356B@stargazer>\n> Content-Description: \n> \n> LS0tIHBvc3RncmVzcWwtNi4zLjEuYWN0dWFsL3NyYy9pbmNsdWRlL3V0aWxz\n> L21lbXV0aWxzLmgJV2VkIEZlYiAyNSAyMjo0NDowOCAxOTk4DQorKysgcG9z\n> dGdyZXNxbC02LjMuMS9zcmMvaW5jbHVkZS91dGlscy9tZW11dGlscy5oCUZy\n> aSBBcHIgMTAgMTU6MjI6MTcgMTk5OA0KQEAgLTY3LDcgKzY3LDcgQEANCiAg\n> Ki8NCiAjaWYgZGVmaW5lZChzdW4pICYmICEgZGVmaW5lZChzcGFyYykNCiAj\n> ZGVmaW5lIExPTkdBTElHTihMRU4pCVNIT1JUQUxJR04oTEVOKQ0KLSNlbGlm\n> IGRlZmluZWQgKGFscGhhKSB8fCBkZWZpbmVkKGxpbnV4YWxwaGEpDQorI2Vs\n> aWYgZGVmaW5lZCAoYWxwaGEpIHx8ICggZGVmaW5lZChsaW51eCkgJiYgZGVm\n> aW5lZChfX2FscGhhX18pKQ0KIA0KICAvKg0KICAgKiBldmVuIHRob3VnaCAi\n> bG9uZyBhbGlnbm1lbnQiIHNob3VsZCByZWFsbHkgYmUgb24gOC1ieXRlIGJv\n> dW5kYXJpZXMgZm9yDQo=\n> --8323328-875907394-892249499=:6356\n> Content-Type: TEXT/PLAIN; charset=US-ASCII; name=\"pgsql-6.2.1.alpha.patch.3\"\n> Content-Transfer-Encoding: BASE64\n> Content-ID: <Pine.LNX.3.95.980410180459.6356C@stargazer>\n> Content-Description: \n> \n> LS0tIHBvc3RncmVzcWwtNi4zLjEuYWN0dWFsL3NyYy9iYWNrZW5kL3V0aWxz\n> L2FkdC9mbG9hdC5jCVdlZCBGZWIgMjUgMjI6Mzc6MDcgMTk5OA0KKysrIHBv\n> c3RncmVzcWwtNi4zLjEvc3JjL2JhY2tlbmQvdXRpbHMvYWR0L2Zsb2F0LmMJ\n> RnJpIEFwciAxMCAxNToyMToxMSAxOTk4DQpAQCAtMTMyLDcgKzEzMiw3IEBA\n> 
DQogICogdW50aWwgdGhlIGRpc3RyaWJ1dGlvbnMgYXJlIHVwZGF0ZWQuDQog\n> ICoJCQkJCQkJCS0tZGptIDEyLzE2Lzk2DQogICovDQotI2lmIGRlZmluZWQo\n> bGludXhhbHBoYSkgJiYgIWRlZmluZWQoVU5TQUZFX0ZMT0FUUykNCisjaWYg\n> KCBkZWZpbmVkKGxpbnV4KSAmJiBkZWZpbmVkKF9fYWxwaGFfXykgKSAmJiAh\n> ZGVmaW5lZChVTlNBRkVfRkxPQVRTKQ0KICNkZWZpbmUgVU5TQUZFX0ZMT0FU\n> Uw0KICNlbmRpZg0KIA0K\n> --8323328-875907394-892249499=:6356--\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 10 Apr 1998 21:56:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." }, { "msg_contents": "> \n> On Fri, 10 Apr 1998, Ryan Kirkpatrick wrote:\n> \n> > \n> > \tI finally found some time to make minor patch to pgsql 6.3.1 to\n> > make it \"sort-of-work\" out of the box for Linux/Alpha. At the current\n> > moment, I can get it to compile, and run 'initdb' successfully. Regression\n> > tests are successful for the most part, with the biggest problems being in\n> > any dealing with floats, especially the float{4,...} tests. Overall, it\n> > looks as things are improving! :)\n> \n> \tCan you grab the latest snapshot, which is going to be v6.3.2, and\n> confirm/tweak your patch with that? Its meant for release on April 15th,\n> as long as my personal 'show stoppers' are no longer a problem :)\n\nAgain, the addition in templates/linuxalpha should fix most of your\nproblems. Let us know.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 10 Apr 1998 21:59:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." 
}, { "msg_contents": "On Fri, 10 Apr 1998, Bruce Momjian wrote:\n\n> > \tCan you grab the latest snapshot, which is going to be v6.3.2, and\n> > confirm/tweak your patch with that? Its meant for release on April 15th,\n> > as long as my personal 'show stoppers' are no longer a problem :)\n> \n> Again, the addition in templates/linuxalpha should fix most of your\n> problems. Let us know.\n\n\tOk, I grabbed the latest snapshot, April 11, 7am. The changes in\ntemplates/linuxalpha do fix some of the problems, and make fixing the rest\neasier. As it was \"out-of-the-box\", it (nearly) compiled and ran initdb\nsucessfully on my UDB. But there was quite a few failures and coredumps by\npostgres upon the running of regression tests. \n\tTo get it compile succesfully, I had to make sure the linux/alpha\ndid not include a few .h files that the alpha version was supposed to\ninclude. This occured in ./backend/main/main.c, and the patch for it is\nattached.\n\tI then added -Dlinuxalpha to the ./template/linuxalpha CFLAGS\nline, and recompiled. 'initdb' ran fine, and regression tests went much\nbetter. Now about 75% of the regression tests complete successfully. From\nthe looks of things, floats are still causing most of the problems. I\nhaven't looked yet to see if there are major problems, or just minor\nformatting ones. A patch for the ./template/linuxalpha changes is also\nattached.\n\tThese two patches should bring the current version of pgsql very\nclose to working cleaning on Linux/Alpha! 
:) As usual, if you have any\nquestions, feel free to email me.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad..cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------", "msg_date": "Sat, 11 Apr 1998 13:48:20 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." }, { "msg_contents": "On Sat, 11 Apr 1998, Ryan Kirkpatrick wrote:\n\n> \tThese two patches should bring the current version of pgsql very\n> close to working cleaning on Linux/Alpha! :) As usual, if you have any\n> questions, feel free to email me.\n\n\tSeeing as how close we are to a v6.3.2 release, I put these in,\nbut I don't like them...\n\n\tWe've been moving *away* from using -D's in order to deal with\noperating system/hardware related issues...using tools/ccsym, are there no\ncompiler defined variables you can use for this instead?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 11 Apr 1998 18:21:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." }, { "msg_contents": "On Sat, 11 Apr 1998, The Hermit Hacker wrote:\n\n> On Sat, 11 Apr 1998, Ryan Kirkpatrick wrote:\n> \n> > \tThese two patches should bring the current version of pgsql very\n> > close to working cleaning on Linux/Alpha! 
:) As usual, if you have any\n> > questions, feel free to email me.\n> \n> \tSeeing as how close we are to a v6.3.2 release, I put these in,\n> but I don't like them...\n> \n> \tWe've been moving *away* from using -D's in order to deal with\n> operating system/hardware related issues...using tools/ccsym, are there no\n> compiler defined variables you can use for this instead?\n\n\tHold on a second here.... The first patches I sent you were done\nby modifying the defined(linuxalpha) to (defined(linux) &&\ndefined(alpha)) (the tools/ccsym method). But I was told that there was a\ntemplate for linux/alpha, and I should modify that, and so I used the\n-D's. Which way do you all want it???\n\tFor what its worth, I agree, the output from tools/ccsym should be\nused and not -D's.\n\tOk, attached are a new set of patches. First of all, ignore all\nprevious patches I have sent on this thread. Now these three work on the\nidea that 'alpha' and 'linux' are both defined. The former is defined by\nCFLAGS in template/linuxalpha, and latter is defined by the gcc on my UDB\n(and all other Linux/Alpha systems). This removes the need to add any\ndefines to the CFLAGS line. To get rid of the other define (-Dalpha), we\nsimply need to replace each 'defined(alpha)' with '(defined(alpha) ||\ndefined(__alpha__))', as gcc automatically defines '__alpha__' like it\ndoes 'linux'. If you want me to do this, just ask.\n\tOk, now are these patches ok by everyone? 
:) Thanks.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------", "msg_date": "Sat, 11 Apr 1998 16:41:48 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." }, { "msg_contents": "> \tOk, I grabbed the latest snapshot, April 11, 7am. The changes in\n> templates/linuxalpha do fix some of the problems, and make fixing the rest\n> easier. As it was \"out-of-the-box\", it (nearly) compiled and ran initdb\n> sucessfully on my UDB. But there was quite a few failures and coredumps by\n> postgres upon the running of regression tests. \n\nOK, this is good. Now my only issue is that is seems the 'alpha' define\nit now too broad. We are using 'alpha' to fix alpha issues, and to fix\n'dec alpha' issues. Is that true?\n\nIf it is, can we have the DEC alpha port define 'alpha' and 'decalpha'\nand change the 'decalpha'-specific defines to use 'decalpha', the\nlinuxalpha defines to use 'linuxalpha', and the purely 'alpha'-specific\nchanges to use just normal 'alpha'.\n\nI am requesting this because 'alpha' has been such a problem port for\nus, and would like to have things REALLY clear now that we understand\nthem, so later, they will not get broken.\n\nCan you review all the 'alpha' defines, and send us a patch that adds\n'decalpha' define to the non-linux alpha port, and change the other\n'ifdef's appropriately to everything works and is clean?\n\nRelease of 6.3.2 is due April 15th. 
Hope that gives you enough time.\nWe hopefully can get a dec alpha person to test the changes you make, so\nalpha works properly for 6.3.2.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 11 Apr 1998 19:15:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." }, { "msg_contents": "On Sat, 11 Apr 1998, Ryan Kirkpatrick wrote:\n\n> \tOk, attached are a new set of patches. First of all, ignore all\n\n\tPerfect...applied...\n\n> previous patches I have sent on this thread. Now these three work on the\n\n\tPlease confirm, since I haven't removed anything except those\nthings that these patches replaced...\n\n> idea that 'alpha' and 'linux' are both defined. The former is defined by\n> CFLAGS in template/linuxalpha, and latter is defined by the gcc on my UDB\n> (and all other Linux/Alpha systems). This removes the need to add any\n> defines to the CFLAGS line. To get rid of the other define (-Dalpha), we\n> simply need to replace each 'defined(alpha)' with '(defined(alpha) ||\n> defined(__alpha__))', as gcc automatically defines '__alpha__' like it\n> does 'linux'. If you want me to do this, just ask.\n\n\tPlease do...\n\n\n", "msg_date": "Sat, 11 Apr 1998 23:59:11 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." }, { "msg_contents": "On Fri, 10 Apr 1998, Bruce Momjian wrote:\n\n> Marc, would you install these as appropriate. Also, you will find that\n> I added to templates/linuxalpha the line:\n> \n> \tlinuxalpha:CFLAGS:-O2 -Dalpha\n> ^^^^^^^\n\n\tThis should be derived from the compiler...Ryan has reposted\npatches that do this, an dwill be doing more to get rid of the -Dalpha...\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 12 Apr 1998 00:00:47 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." }, { "msg_contents": "On Sat, 11 Apr 1998, The Hermit Hacker wrote:\n\n> > \tOk, attached are a new set of patches. First of all, ignore all\n> \n> \tPerfect...applied...\n\n\tThank you. :)\n\n> \tPlease do...\n\n\tOk, I will try and get the Linux/Alpha versus Dec/Alpha defines\nsorted out by Tuesday evening. Though I can't make any promises, as the\nwork I have gotten done was thanks to a three day weekend. I will let you\nall know when they are ready. But at the very least, pgsql will compile\nand pretty much work on Linux/Alpha out of the box at this point. A major\nstep forward! :)\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n", "msg_date": "Sun, 12 Apr 1998 18:44:19 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." }, { "msg_contents": "\n\tI downloaded today's (13th) snapshot, and it compiled and ran\ninitdb just fine, right out of the box. 
So it looks like the patches made\nit in just fine.\n\nOn Sun, 12 Apr 1998, The Hermit Hacker wrote:\n\n> \tThis should be derived from the compiler...Ryan has reposted\n> patches that do this, an dwill be doing more to get rid of the -Dalpha...\n\n\tUnfortunetly, I will not have patches to fix this until after the\n15th deadline. I am too busy with school work (finals in two weeks!) at\nthe moment to take the time to get it done. I can't promise them anytime\nearlier than a month from now. But, for the moment things are working\nreasonably well (from a Linux/Alpha standpoint), and so while the current\nstate isn't pretty, it will have to do for now. Sorry.\n\n\tAlso, in actually testing pgsql on Linux/Alpha on some real world\napplications, I found most everything works. Though I hit a snag, that\nwhen ever I try to select a datetime field from a table, postgres throws\nan arthimetic trap, and the psql core dumps. Just a data point for you\nall.\n\n\tThanks, and TTYL.\n\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n", "msg_date": "Tue, 14 Apr 1998 20:45:49 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Linux/Alpha and pgsql...." } ]
[ { "msg_contents": "Hello, \n\nPostgres 6.3.1. I was just trying to profile the backend. Somehow I\ncannot drop the log table. \nxxx=> \\d\n\nDatabase = xxx\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | mimo | a | sequence |\n | postgres | log | table |\n | mimo | test | table |\n +------------------+----------------------------------+----------+\nxxx=> drop table log;\nERROR: DeletePgTypeTuple: log type nonexistent \n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Sat, 11 Apr 1998 14:18:42 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": true, "msg_subject": "Strange..." }, { "msg_contents": "> \n> Hello, \n> \n> Postgres 6.3.1. I was just trying to profile the backend. Somehow I\n> cannot drop the log table. \n> xxx=> \\d\n> \n> Database = xxx\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | mimo | a | sequence |\n> | postgres | log | table |\n> | mimo | test | table |\n> +------------------+----------------------------------+----------+\n> xxx=> drop table log;\n> ERROR: DeletePgTypeTuple: log type nonexistent \n\nJust tried it \n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 11 Apr 1998 19:04:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Strange..." }, { "msg_contents": "> \n> Hello, \n> \n> Postgres 6.3.1. I was just trying to profile the backend. Somehow I\n> cannot drop the log table. 
\n> xxx=> \\d\n> \n> Database = xxx\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | mimo | a | sequence |\n> | postgres | log | table |\n> | mimo | test | table |\n> +------------------+----------------------------------+----------+\n> xxx=> drop table log;\n> ERROR: DeletePgTypeTuple: log type nonexistent \n\nJust tried it with 6.3.2:\n\n\ttest=> create table log(x int); \n\tCREATE\n\ttest=> drop table log;\n\tDROP\n\ttest=> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 11 Apr 1998 19:05:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Strange..." } ]
[ { "msg_contents": "\nfigure out what this means?\n\nNOTICE:  Rel radhist: Uninitialized page 9422 - fixing\nNOTICE:  Rel radhist: Uninitialized page 9426 - fixing\nNOTICE:  Rel radhist: Uninitialized page 9428 - fixing\nNOTICE:  Rel radhist: Uninitialized page 9431 - fixing\nNOTICE:  Rel radhist: Uninitialized page 9433 - fixing\nNOTICE:  Rel radhist: Uninitialized page 9435 - fixing\nNOTICE:  Rel radhist: Uninitialized page 9439 - fixing\nNOTICE:  Rel radhist: Uninitialized page 9441 - fixing\n\n\nMarc G. Fournier                                \nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 11 Apr 1998 15:01:47 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Did we ever..." }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> figure out what this means?\n> \n> NOTICE:  Rel radhist: Uninitialized page 9422 - fixing\n> NOTICE:  Rel radhist: Uninitialized page 9426 - fixing\n> NOTICE:  Rel radhist: Uninitialized page 9428 - fixing\n> NOTICE:  Rel radhist: Uninitialized page 9431 - fixing\n> NOTICE:  Rel radhist: Uninitialized page 9433 - fixing\n> NOTICE:  Rel radhist: Uninitialized page 9435 - fixing\n> NOTICE:  Rel radhist: Uninitialized page 9439 - fixing\n> NOTICE:  Rel radhist: Uninitialized page 9441 - fixing\n\nradhist was expanded but new blocks with new data inserted\nthere were not flushed on disk (you got ASSERTion ?)\n\nVadim\n", "msg_date": "Mon, 13 Apr 1998 00:00:05 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Did we ever..." }, { "msg_contents": "On Mon, 13 Apr 1998, Vadim B. Mikheev wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > figure out what this means?\n> > \n> > NOTICE:  Rel radhist: Uninitialized page 9422 - fixing\n> > NOTICE:  Rel radhist: Uninitialized page 9426 - fixing\n> > NOTICE:  Rel radhist: Uninitialized page 9428 - fixing\n> > NOTICE:  Rel radhist: Uninitialized page 9431 - fixing\n> > NOTICE:  Rel radhist: Uninitialized page 9433 - fixing\n> > NOTICE:  Rel radhist: Uninitialized page 9435 - fixing\n> > NOTICE:  Rel radhist: Uninitialized page 9439 - fixing\n> > NOTICE:  Rel radhist: Uninitialized page 9441 - fixing\n> \n> radhist was expanded but new blocks with new data inserted\n> there were not flushed on disk (you got ASSERTion ?)\n\n\tI don't have ASSERTion enabled on that server...its a production\nserver, and downtime is what we are trying to prevent :(\n\nMarc G. Fournier                                \nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 12 Apr 1998 14:26:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Did we ever..." } ]
[ { "msg_contents": "auth 83f0295d subscribe pgsql-hackers [email protected]\n\nDavid M. Witten II\nManager, Research Support and Development Group\nIntegrated Technology Services\nUniversity of Missouri, Columbia\[email protected]", "msg_date": "Sat, 11 Apr 1998 12:21:22 -0500", "msg_from": "\"Witten II, David M.\" <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "\nperfect, and error message that tells me what to fix:\n\nFATAL 1:  btree: BTP_CHAIN flag was expected in radhist_userid (access =\nbt_read)\n\nWhoop! :)  Rebuilding that index now...\n\n\nBut, still have, and don't know where to begin diagnosing it...\n\nFATAL 1:  palloc failure: memory exhausted\n\nExcept, the query appears to be:\n\nApr 11 10:25:23 clio radiusd[13005]: query failed: select uniq_id\nfrom radlog where uniq_id='237287829' and stop_time=0\nand term_server='isdn-1.trends.ca';\n\nWhen it failed...radlog being 7k records large...\n\n\n\nMarc G. Fournier                                \nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 11 Apr 1998 11:31:54 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Got it...and..." }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> perfect, and error message that tells me what to fix:\n> \n> FATAL 1:  btree: BTP_CHAIN flag was expected in radhist_userid (access =\n> bt_read)\n> \n> Whoop! :)  Rebuilding that index now...\n> \n> But, still have, and don't know where to begin diagnosing it...\n> \n> FATAL 1:  palloc failure: memory exhausted\n> \n> Except, the query appears to be:\n> \n> Apr 11 10:25:23 clio radiusd[13005]: query failed: select uniq_id\n> from radlog where uniq_id='237287829' and stop_time=0\n> and term_server='isdn-1.trends.ca';\n> \n> When it failed...radlog being 7k records large...\n\nDid you restart postmaster after FATALs ?\n\nVadim\n", "msg_date": "Mon, 13 Apr 1998 00:44:49 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Got it...and..." }, { "msg_contents": "On Mon, 13 Apr 1998, Vadim B. Mikheev wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > perfect, and error message that tells me what to fix:\n> > \n> > FATAL 1:  btree: BTP_CHAIN flag was expected in radhist_userid (access =\n> > bt_read)\n> > \n> > Whoop! :)  Rebuilding that index now...\n> > \n> > But, still have, and don't know where to begin diagnosing it...\n> > \n> > FATAL 1:  palloc failure: memory exhausted\n> > \n> > Except, the query appears to be:\n> > \n> > Apr 11 10:25:23 clio radiusd[13005]: query failed: select uniq_id\n> > from radlog where uniq_id='237287829' and stop_time=0\n> > and term_server='isdn-1.trends.ca';\n> > \n> > When it failed...radlog being 7k records large...\n> \n> Did you restart postmaster after FATALs ?\n\n\tThis is a production server, so restarting it isn't much of an\noption...as well, by the time its noticed (several hours after it\nhappens), its too late anyway, no?\n\nMarc G. Fournier                                \nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 12 Apr 1998 14:27:07 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Got it...and..." } ]
[ { "msg_contents": "Has anyone else noticed that we certainly have a lot of people involved\nlately? I used to see about 50-75 messages a day. Now if I have not\nchecked in 6 hours, I have that same volume in just six hours.\n\nMarc, can you check the posting volume and let us know how it profiles\nfor, say, the last six months, if that is easy to do?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 11 Apr 1998 21:30:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "List volume" }, { "msg_contents": "On Sat, 11 Apr 1998, Bruce Momjian wrote:\n\n> Has anyone else noticed that we certainly have a lot of people involved\n> lately? I used to see about 50-75 messages a day. Now if I have not\n> checked in 6 hours, I have that same volume in just six hours.\n> \n> Marc, can you check the posting volume and let us know how it profiles\n> for, say, the last six months, if that is easy to do?\n\npgsql-hackers.9701: 964\npgsql-hackers.9702: 308\npgsql-hackers.9703: 557\npgsql-hackers.9704: 891\npgsql-hackers.9705: 488\npgsql-hackers.9706: 598\npgsql-hackers.9707: 401\npgsql-hackers.9708: 406\npgsql-hackers.9709: 582\npgsql-hackers.9710: 591\npgsql-hackers.9711: 375\npgsql-hackers.9712: 348\npgsql-hackers.9801: 864\npgsql-hackers.9802: 1324\npgsql-hackers.9803: 1128\npgsql-hackers.9804: 216\n\nNot 100% accurate, but it gives a view of the monthly volume...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 11 Apr 1998 23:52:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] List volume" }, { "msg_contents": "> \n> On Sat, 11 Apr 1998, Bruce Momjian wrote:\n> \n> > Has anyone else noticed that we certainly have a lot of people involved\n> > lately? I used to see about 50-75 messages a day. Now if I have not\n> > checked in 6 hours, I have that same volume in just six hours.\n> > \n> > Marc, can you check the posting volume and let us know how it profiles\n> > for, say, the last six months, if that is easy to do?\n> \n> pgsql-hackers.9701: 964\n> pgsql-hackers.9702: 308\n> pgsql-hackers.9703: 557\n> pgsql-hackers.9704: 891\n> pgsql-hackers.9705: 488\n> pgsql-hackers.9706: 598\n> pgsql-hackers.9707: 401\n> pgsql-hackers.9708: 406\n> pgsql-hackers.9709: 582\n> pgsql-hackers.9710: 591\n> pgsql-hackers.9711: 375\n> pgsql-hackers.9712: 348\n> pgsql-hackers.9801: 864\n> pgsql-hackers.9802: 1324\n> pgsql-hackers.9803: 1128\n> pgsql-hackers.9804: 216\n> \n\nCertainly shows the trend. And of course, this is only the hackers list. \nQuestions has more too.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 11 Apr 1998 23:13:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] List volume" }, { "msg_contents": "On Sat, 11 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > On Sat, 11 Apr 1998, Bruce Momjian wrote:\n> > \n> > > Has anyone else noticed that we certainly have a lot of people involved\n> > > lately? I used to see about 50-75 messages a day. Now if I have not\n> > > checked in 6 hours, I have that same volume in just six hours.\n> > > \n> > > Marc, can you check the posting volume and let us know how it profiles\n> > > for, say, the last six months, if that is easy to do?\n> > \n> > pgsql-hackers.9701: 964\n> > pgsql-hackers.9702: 308\n> > pgsql-hackers.9703: 557\n> > pgsql-hackers.9704: 891\n> > pgsql-hackers.9705: 488\n> > pgsql-hackers.9706: 598\n> > pgsql-hackers.9707: 401\n> > pgsql-hackers.9708: 406\n> > pgsql-hackers.9709: 582\n> > pgsql-hackers.9710: 591\n> > pgsql-hackers.9711: 375\n> > pgsql-hackers.9712: 348\n> > pgsql-hackers.9801: 864\n> > pgsql-hackers.9802: 1324\n> > pgsql-hackers.9803: 1128\n> > pgsql-hackers.9804: 216\n> > \n> \n> Certainly shows the trend. And of course, this is only the hackers list. \n> Questions has more too.\n\npgsql-questions.9612: 40\npgsql-questions.9701: 142\npgsql-questions.9702: 316\npgsql-questions.9703: 550\npgsql-questions.9704: 674\npgsql-questions.9705: 525\npgsql-questions.9706: 460\npgsql-questions.9707: 642\npgsql-questions.9708: 467\npgsql-questions.9709: 573\npgsql-questions.9710: 1121\npgsql-questions.9711: 559\npgsql-questions.9712: 757\npgsql-questions.9801: 997\npgsql-questions.9802: 664\npgsql-questions.9803: 1091\npgsql-questions.9804: 293\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 12 Apr 1998 00:34:27 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] List volume" } ]
[ { "msg_contents": "\nHas anyone looked into this? I'm just getting ready to download it and\nplay with it, see what's involved in using it. From what I can see, its\nessentially an optimized stdio library...\n\nURL is at: http://www.research.att.com/sw/tools/sfio\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 12 Apr 1998 00:32:15 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Safe/Fast I/O ..." }, { "msg_contents": "On Sun, 12 Apr 1998, The Hermit Hacker wrote:\n> Has anyone looked into this? I'm just getting ready to download it and\n> play with it, see what's involved in using it. From what I can see, its\n> essentially an optimized stdio library...\n> \n> URL is at: http://www.research.att.com/sw/tools/sfio\n\nUsing mmap and/or AIO would be better...\n\nFreeBSD and Solaris support AIO I believe. Given past trends Linux will\nas well.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Sun, 12 Apr 1998 01:41:31 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "On Sun, 12 Apr 1998, Matthew N. Dodd wrote:\n\n> On Sun, 12 Apr 1998, The Hermit Hacker wrote:\n> > Has anyone looked into this? I'm just getting ready to download it and\n> > play with it, see what's involved in using it. From what I can see, its\n> > essentially an optimized stdio library...\n> > \n> > URL is at: http://www.research.att.com/sw/tools/sfio\n> \n> Using mmap and/or AIO would be better...\n> \n> FreeBSD and Solaris support AIO I believe. Given past trends Linux will\n> as well.\n\n\tI hate to have to ask, but how is MMAP or AIO better then sfio? I\nhaven't had enough time to research any of this, and am just starting to\nlook at it...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 12 Apr 1998 02:57:53 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "On Sun, 12 Apr 1998, The Hermit Hacker wrote:\n> \tI hate to have to ask, but how is MMAP or AIO better then sfio? I\n> haven't had enough time to research any of this, and am just starting to\n> look at it...\n\nIf its simple to compile and works as a drop in replacement AND is faster,\nI see no reason why PostgreSQL shouldn't try to link with it.\n\nKeep in mind though that in order to use MMAP or AIO you'd be\nrestructuring the code to be more efficient rather than doing more of the\nsame old thing but optimized.\n\nOnly testing will prove me right or wrong though. :)\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Sun, 12 Apr 1998 02:08:57 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "On Sun, 12 Apr 1998, Matthew N. Dodd wrote:\n\n> On Sun, 12 Apr 1998, The Hermit Hacker wrote:\n> > \tI hate to have to ask, but how is MMAP or AIO better then sfio? I\n> > haven't had enough time to research any of this, and am just starting to\n> > look at it...\n> \n> If its simple to compile and works as a drop in replacement AND is faster,\n> I see no reason why PostgreSQL shouldn't try to link with it.\n\n\tThat didn't really answer the question :( \n\n> Keep in mind though that in order to use MMAP or AIO you'd be\n> restructuring the code to be more efficient rather than doing more of the\n> same old thing but optimized.\n\n\tI don't know anything about AIO, so if you can give me a pointer\nto where I can read up on it, please do...\n\n\t...but, with MMAP, unless I'm mistaken, you'd essentially be\nreading the file(s) into memory and then manipulating the file(s) there.\nWhich means one helluva large amount of RAM being required...no?\n\n\tUsing stdio vs sfio, to read a 1.2million line file, the time to\ncomplete goes from 7sec to 5sec ... that makes for a substantial savings\nin time, if its applicable.\n\n\tthe problem, as I see it right now, is the docs for it suck ...\nso, right now, I'm fumbling through figuring it all out :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 12 Apr 1998 03:37:53 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "Marc G. Fournier wrote: \n> On Sun, 12 Apr 1998, Matthew N. Dodd wrote:\n> > On Sun, 12 Apr 1998, The Hermit Hacker wrote:\n> > > \tI hate to have to ask, but how is MMAP or AIO better then sfio? I\n> > > haven't had enough time to research any of this, and am just starting to\n> > > look at it...\n> > \n> > If its simple to compile and works as a drop in replacement AND is faster,\n> > I see no reason why PostgreSQL shouldn't try to link with it.\n> \n> \tThat didn't really answer the question :( \n> \n> > Keep in mind though that in order to use MMAP or AIO you'd be\n> > restructuring the code to be more efficient rather than doing more of the\n> > same old thing but optimized.\n> \n> \tI don't know anything about AIO, so if you can give me a pointer\n> to where I can read up on it, please do...\n> \n> \t...but, with MMAP, unless I'm mistaken, you'd essentially be\n> reading the file(s) into memory and then manipulating the file(s) there.\n> Which means one helluva large amount of RAM being required...no?\n> \n> \tUsing stdio vs sfio, to read a 1.2million line file, the time to\n> complete goes from 7sec to 5sec ... that makes for a substantial savings\n> in time, if its applicable.\n> \n> \tthe problem, as I see it right now, is the docs for it suck ...\n> so, right now, I'm fumbling through figuring it all out :)\n\nOne of the options when building perl5 is to use sfio instead of stdio. I\nhaven't tried it, but they seem to think it works.\n\nThat said, The only place I see this helping pgsql is in copyin and copyout\nas these use the stdio: fread(), fwrite(), etc interfaces.\n\nEverywhere else we use the system call IO interfaces: read(), write(),\nrecv(), send(), select() etc, and do our own buffering.\n\nMy prediction is that sfio vs stdio will have undetectable performance\nimpact on sql performance and only very minor impact on copyin, copyout (as\nmost of the overhead is in pgsql, not libc). \n\nAs far as IO, the problem we have is fsync(). To get rid of it means doing\na real writeahead log system and (maybe) aio to the log. As soon as we get\nreal logging then we don't need to force datapages out so we can get rid\nof all the fsync and (given how slow we are otherwise) completely eliminate\nIO as a bottleneck.\n\nPgsql was built for comfort, not for speed. Fine tuning and code\ntweaking and microoptimization is fine as far as it goes. But there is\nprobably a maximum 2x speed up to be had that way. Total.\n\nWe need a 10x speedup to play with serious databases. This will take real\narchitectural changes.\n\nIf you are interested in what is necessary, I highly recommend the book\n\"Transaction Processing\" by Jim Gray (and someone whose name escapes me\njust now). It is a great big thing and will take a while to get through, but\nit is decently written and very well worth the time. It pretty much gives\naway the whole candy store as far as building high performance, reliable,\nand scalable database and TP systems. I wish it had been available 10\nyears ago when I got into the DB game. \n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - Linux. Not because it is free. Because it is better.\n\n", "msg_date": "Sun, 12 Apr 1998 00:47:40 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "> \n> On Sun, 12 Apr 1998, The Hermit Hacker wrote:\n> > \tI hate to have to ask, but how is MMAP or AIO better then sfio? I\n> > haven't had enough time to research any of this, and am just starting to\n> > look at it...\n> \n> If its simple to compile and works as a drop in replacement AND is faster,\n> I see no reason why PostgreSQL shouldn't try to link with it.\n> \n> Keep in mind though that in order to use MMAP or AIO you'd be\n> restructuring the code to be more efficient rather than doing more of the\n> same old thing but optimized.\n> \n> Only testing will prove me right or wrong though. :)\n\nAs David Gould mentioned, we need to do pre-fetching of data pages\nsomehow.\n\nWhen doing a sequential scan on a table, the OS is doing a one-page\nprefetch, which is probably enough. The problem is index scans of the\ntable. Those are not sequential in the main heap table (unless it is\nclustered on the index), so a prefetch would help here a lot.\n\nThat is where we need async i/o. I am looking in BSDI, and I don't see\nany way to do async i/o. The only way I can think of doing it is via\nthreads.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 12 Apr 1998 09:28:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "> If you are interested in what is necessary, I highly recommend the book\n> \"Transaction Processing\" by Jim Gray (and someone whose name escapes me\n> just now). It is a great big thing and will take a while to get through, but\n> it is decently written and very well worth the time. It pretty much gives\n> away the whole candy store as far as building high performance, reliable,\n> and scalable database and TP systems. I wish it had been available 10\n> years ago when I got into the DB game. \n\nDavid is 100% correct here. We need major overhaul. \n\nHe is also 100% correct about the book he is recommending. I got it\nlast week, and was going to make a big pitch for this, but now that he\nhas mentioned it again, let me support it. His quote:\n\n\tIt pretty much gives away the whole candy store...\n\nis right on the mark. This book is big, and meaty. Date has it listed\nin his bibliography, and says:\n\nIf any computer science text ever deserved the epithet \"instant\nclassic,\" it is surely this one. Its size is daunting at first (over\n1000 pages), but the authors display an enviable lightness of touch that\nmakes even the driest aspects of the subject enjoyable reading. In\ntheir preface, they state their intent as being \"to help...solve real\nproblems\"; the book is \"pragmatic, covering basic transaction issues in\nconsiderable detail\"; and the presentation \"is full of code fragments\nshowing...basic algorithm and data structures\" and is not\n\"encyclopedic.\" Despite this last claim, the book is (not surprisingly)\ncomprehensive, and is surely destined to become the standard work. \nStrongly recommended.\n\nWhat more can I say. I will add this book recommendation to\ntools/FAQ_DEV. The book is not cheap, at ~$90.\n\nThe book is \"Transaction Processing: Concepts and Techniques,\" by Jim\nGray and Andreas Reuter, Morgan Kaufmann publishers, ISBN 1-55860-190-2.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 12 Apr 1998 10:04:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Book recommendation, was Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "> As David Gould mentioned, we need to do pre-fetching of data pages\n> somehow.\n> \n> When doing a sequential scan on a table, the OS is doing a one-page\n> prefetch, which is probably enough. The problem is index scans of the\n> table. Those are not sequential in the main heap table (unless it is\n> clustered on the index), so a prefetch would help here a lot.\n> \n> That is where we need async i/o. I am looking in BSDI, and I don't see\n> any way to do async i/o. The only way I can think of doing it is via\n> threads.\n\nI found it. It is an fcntl option. From man fcntl:\n\n O_ASYNC Enable the SIGIO signal to be sent to the process group when\n I/O is possible, e.g., upon availability of data to be read.\n\nWho else supports this?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 12 Apr 1998 10:06:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "> When doing a sequential scan on a table, the OS is doing a one-page\n> prefetch, which is probably enough. The problem is index scans of the\n> table. Those are not sequential in the main heap table (unless it is\n> clustered on the index), so a prefetch would help here a lot.\n> \n> That is where we need async i/o. I am looking in BSDI, and I don't see\n> any way to do async i/o. The only way I can think of doing it is via\n> threads.\n\n\n O_ASYNC Enable the SIGIO signal to be sent to the process group when\n I/O is possible, e.g., upon availability of data to be read.\n\nNow I am questioning this. I am not sure this is actually for file i/o, or\nonly tty i/o.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 12 Apr 1998 10:22:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "On Sun, 12 Apr 1998, Bruce Momjian wrote:\n\n> > As David Gould mentioned, we need to do pre-fetching of data pages\n> > somehow.\n> > \n> > When doing a sequential scan on a table, the OS is doing a one-page\n> > prefetch, which is probably enough. The problem is index scans of the\n> > table. Those are not sequential in the main heap table (unless it is\n> > clustered on the index), so a prefetch would help here a lot.\n> > \n> > That is where we need async i/o. I am looking in BSDI, and I don't see\n> > any way to do async i/o. The only way I can think of doing it is via\n> > threads.\n> \n> I found it. It is an fcntl option. From man fcntl:\n> \n> O_ASYNC Enable the SIGIO signal to be sent to the process group when\n> I/O is possible, e.g., upon availability of data to be read.\n> \n> Who else supports this?\n\n\tFreeBSD...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 12 Apr 1998 14:24:18 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": ">> When doing a sequential scan on a table, the OS is doing a one-page\n>> prefetch, which is probably enough. The problem is index scans of the\n>> table. Those are not sequential in the main heap table (unless it is\n>> clustered on the index), so a prefetch would help here a lot.\n>> \n>> That is where we need async i/o. I am looking in BSDI, and I don't see\n>> any way to do async i/o. The only way I can think of doing it is via\n>> threads.\n>\n>\n> O_ASYNC Enable the SIGIO signal to be sent to the process group when\n> I/O is possible, e.g., upon availability of data to be read.\n>\n>Now I am questioning this. I am not sure this is actually for file i/o, or\n>only tty i/o.\n>\n\nasync file calls:\n\taio_cancel\n\taio_error\n\taio_read\n\taio_return -- gets status of pending io call\n\taio_suspend\n\taio_write\n\nAnd yes the Gray book is great!\n\nJordan Henderson\n\n", "msg_date": "Sun, 12 Apr 1998 13:37:00 -0400", "msg_from": "Jordan Henderson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "On Sun, 12 Apr 1998, Bruce Momjian wrote:\n> I found it. It is an fcntl option. From man fcntl:\n> \n> O_ASYNC Enable the SIGIO signal to be sent to the process group when\n> I/O is possible, e.g., upon availability of data to be read.\n> \n> Who else supports this?\n\nFreeBSD, and NetBSD appear to.\n\nLinux and Solaris appear not to.\n\nI was really speaking of the POSIX 1003.1B AIO/LIO calls when I originally\nbrought this up. (aio_read/aio_write)\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Sun, 12 Apr 1998 13:38:44 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "> \n> async file calls:\n> \taio_cancel\n> \taio_error\n> \taio_read\n> \taio_return -- gets status of pending io call\n> \taio_suspend\n> \taio_write\n\nCan you elaborate on this? Does it cause a read() to return right away,\nand signal when data is ready?\n\n> \n> And yes the Gray book is great!\n> \n> Jordan Henderson\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 12 Apr 1998 13:42:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "> > async file calls:\n> > \taio_cancel\n> > \taio_error\n> > \taio_read\n> > \taio_return -- gets status of pending io call\n> > \taio_suspend\n> > \taio_write\n>\n> Can you elaborate on this? Does it cause a read() to return right away,\n> and signal when data is ready?\n\n\nThese are posix calls. Many systems support them and they are fairly easy\nto emulate (with threads or io processes) on systems that don't. If we\nare going to do Async IO, I suggest that we code to the posix interface and\nbuild emulators for the systems that don't have the posix calls.\n\nI think there is an implementation of this for Linux, but it is a separate\npackage, not part of the base system as far as I know. Of course with Linux\nanything you know it didn't do two weeks ago, it will do next week...\n\nHere is the Solaris man page for aio_read() and aio_write:\n\n-dg\n\n-----------------------------------------------------------------------------\n\n\nSunOS 5.5.1 Last change: 19 Aug 1993 1\naio_read(3R) Realtime Library aio_read(3R)\n\n\nNAME\n aio_read, aio_write - asynchronous read and write operations\n\nSYNOPSIS\n cc [ flag ... ] file ... -lposix4 [ library ... ]\n\n #include <aio.h>\n\n int aio_read(struct aiocb *aiocbp);\n\n int aio_write(struct aiocb *aiocbp);\n\n struct aiocb {\n int aio_fildes; /* file descriptor */\n volatile void *aio_buf; /* buffer location */\n size_t aio_nbytes; /* length of transfer\n */\n off_t aio_offset; /* file offset */\n int aio_reqprio; /* request priority\n offset */\n struct sigevent aio_sigevent; /* signal number and\n offset */\n int aio_lio_opcode; /* listio operation */\n };\n\n struct sigevent {\n int sigev_notify; /* notification mode */\n int sigev_signo; /* signal number */\n union sigval sigev_value; /* signal value */\n };\n\n union sigval {\n int sival_int; /* integer value */\n void *sival_ptr; /* pointer value */\n };\n\nMT-LEVEL\n MT-Safe\n\nDESCRIPTION\n aio_read() queues an asynchronous read request, and returns\n control immediately. Rather than blocking until completion,\n the read operation continues concurrently with other\n activity of the process.\n\n Upon enqueuing the request, the calling process reads\n aiocbp->nbytes from the file referred to by aiocbp->fildes\n into the buffer pointed to by aiocbp->aio_buf.\n aiocbp->offset marks the absolute position from the begin-\n ning of the file (in bytes) at which the read begins.\n\n aio_write() queues an asynchronous write request, and\n returns control immediately. Rather than blocking until\n completion, the write operation continues concurrently with\n other activity of the process.\n\n Upon enqueuing the request, the calling process writes\n aiocbp->nbytes from the buffer pointed to by aiocbp-\n >aio_buf into the file referred to by aiocbp->fildes. If\n O_APPEND is set for aiocbp->fildes, aio_write() operations\n append to the file in the same order as the calls were made.\n\n If O_APPEND is not set for the file descriptor, then the\n write operation will occur at the absolute position from the\n beginning of the file plus aiocbp->offset (in bytes).\n\n These asynchronous operations are submitted at a priority\n equal to the calling process' scheduling priority minus\n aiocbp->aio_reqprio.\n\n aiocb->aio_sigevent defines both the signal to be generated\n and how the calling process will be notified upon I/O com-\n pletion. If aio_sigevent.sigev_notify is SIGEV_NONE, then\n no signal will be posted upon I/O completion, but the error\n status and the return status for the operation will be set\n appropriately. If aio_sigevent.sigev_notify is\n SIGEV_SIGNAL, then the signal specified in\n aio_sigevent.sigev_signo will be sent to the process. If\n the SA_SIGINFO flag is set for that signal number, then the\n signal will be queued to the process and the value specified\n in aio_sigevent.sigev_value will be the si_value component\n of the generated signal (see siginfo(5)).\n\nRETURN VALUES\n If the I/O operation is successfully queued, aio_read() and\n aio_write() return 0, otherwise, they return -1, and set\n errno to indicate the error condition. aiocbp may be used\n as an argument to aio_error(3R) and aio_return(3R) in order\n to determine the error status and the return status of the\n asynchronous operation while it is proceeding.\n\nERRORS\n EAGAIN The requested asynchronous I/O operation was\n not queued due to system resource limita-\n tions.\n\n ENOSYS aio_read() or aio_write() is not supported by\n this implementation.\n\n EBADF If the calling function is aio_read(), and\n aiocbp->fildes is not a valid file descriptor\n open for reading. If the calling function is\n aio_write(), and aiocbp->fildes is not a\n valid file descriptor open for writing.\n\n EINVAL The file offset value implied by aiocbp->aio_offset\n would be invalid,\n aiocbp->aio_reqprio is not a valid value,\n or aiocbp->aio_nbytes is an invalid value.\n\n ECANCELED The requested I/O was canceled before the I/O\n completed due to an explicit aio_cancel(3R)\n request.\n\n EINVAL The file offset value implied by aiocbp-\n >aio_offset would be invalid.\n\nSEE ALSO\n close(2), exec(2), exit(2), fork(2), lseek(2), read(2),\n write(2), aio_cancel(3R), aio_return(3R), lio_listio(3R),\n siginfo(5)\n\nNOTES\n For portability, the application should set aiocb->aio_reqprio\n to 0.\n\n Applications compiled under Solaris 2.3 and 2.4 and using\n POSIX aio must be recompiled to work correctly when Solaris\n supports the Asynchronous Input and Output option.\n\nBUGS\n In Solaris 2.5, these functions always return -1 and set\n errno to ENOSYS, because this release does not support the\n Asynchronous Input and Output option. It is our intention\n\n", "msg_date": "Sun, 12 Apr 1998 12:25:24 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > \n> > On Sun, 12 Apr 1998, The Hermit Hacker wrote:\n> > > \tI hate to have to ask, but how is MMAP or AIO better then sfio? I\n> > > haven't had enough time to research any of this, and am just starting to\n> > > look at it...\n> > \n> > If its simple to compile and works as a drop in replacement AND is faster,\n> > I see no reason why PostgreSQL shouldn't try to link with it.\n> > \n> > Keep in mind though that in order to use MMAP or AIO you'd be\n> > restructuring the code to be more efficient rather than doing more of the\n> > same old thing but optimized.\n> > \n> > Only testing will prove me right or wrong though. :)\n> \n> As David Gould mentioned, we need to do pre-fetching of data pages\n> somehow.\n> \n> When doing a sequential scan on a table, the OS is doing a one-page\n> prefetch, which is probably enough. The problem is index scans of the\n> table. Those are not sequential in the main heap table (unless it is\n> clustered on the index), so a prefetch would help here a lot.\n> \n> That is where we need async i/o. I am looking in BSDI, and I don't see\n> any way to do async i/o. The only way I can think of doing it is via\n> threads.\n\nI have heard the glibc version 2.0 will support the Posix AIO spec.\nSolaris currently has AN implementation of AIO, but it is not the\nPOSIX one. This prefetch could be done in another process or thread,\nrather than tying the code to a given AIO implementation.\n\nOcie\n", "msg_date": "Sun, 12 Apr 1998 17:50:48 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "The Hermit Hacker wrote:\n\n> ...but, with MMAP, unless I'm mistaken, you'd essentially be\n> reading the file(s) into memory and then manipulating the file(s) there.\n> Which means one helluva large amount of RAM being required...no?\n\nNot exactly. Memory mapping is used only to map file into some memory\naddresses but not put into memory. Disk sectors are copied into memory\non demand. If some mmaped page is accessed - it is copied from disk into\nmemory.\n\nThe main reason of using memory mapping is that you don't have to create\nunnecessary buffers. Normally, for every operation you have to create\nsome in-memory buffer, copy the data there, do some operations, put the\ndata back into file. In case of memory mapping you may avoid of creating\nof unnecessary buffers, and moreover you may call your system functions\nless frequently. There are also additional savings. (Less memory\ncopying, reusing memory if several processes map the same file)\n\nI don't think there exist more efficient solutions. \n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Mon, 13 Apr 1998 03:23:13 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." } ]
[ { "msg_contents": "mmap() is cool since it avoids copying data between kernel and user address\nspaces. However, mmap() is going to be either synchronous (\"won't return 'til\nit has set up the page table stuff and maybe allocated backing store\") or not\n(\"will return immediately but your process will silently block if you try to\naccess the address range before the back office work is done for the region\").\nThere is no callback facility and no way to poll for region readiness.\n\naio_*() is cool since you can queue a read or write and then either get a\ncallback when it's complete or poll it. However, there's no way to allocate\nthe backing store before you start scribbling, so there is always a copy on\naio_write(). And there's no page flipping in aio_read()'s definition, so\nunless you allocate your read buffers in page boundaries and unless your \nkernel is really smart, you're always going to see a copy in aio_read().\n\nO_ASYNC and select() are only useful for externally synchronized I/O like\nTTY and network. select() always returns both readable and writable for\neither files in a file system or for block or character special disk files.\n\nAs far as I know, other than on the MASSCOMP (which more or less did what\nVMS did and what Win/NT now does in this area), no UNIX system, especially\nincluding POSIX.1B systems, has quite what's wanted for high performance\ntransactional I/O.\n\nTrue asynchrony means having the ability to choose when to block, and to\nparallelize computation with I/O, and to get more total work done per unit time\nby doing application level seek ordering and write buffering (avoiding excess\nmechanical movement). In the last I/O intensive system I helped build here,\nwe decided that mmap(), even with its periodic time losses, gave us better\ntotal throughput due to the lack of copy overhead. It helps if you both mmap\nthings with a lot of regionality, and access them with high locality of\nreference. But it was the savings of memory bus bandwidth that bought us\nthe most.\n\n#ifndef BUFFER_H\n#define BUFFER_H\n\n#include <stdio.h>\n#include \"misc.h\"\n\n#define\tBUF_SIZE\t\t4096\n\ntypedef struct buffer {\n\tvoid *\t\t\topaque;\n} buffer;\n\ntypedef enum bufprot {\n\tbuf_ro,\n\tbuf_rw\n\t/* Note that there is no buf_wo since RMW is the processor standard. */\n} bufprot;\n\nint\t\tbuf_init(int nmax, int grow);\nint\t\tbuf_shutdown(FILE *);\nint\t\tbuf_get(buffer *);\nint\t\tbuf_mget(buffer *, int, off_t, bufprot);\nint\t\tbuf_refcount(buffer);\nvoid\t\tbuf_ref(buffer);\nvoid\t\tbuf_unref(buffer);\nvoid\t\tbuf_clear(buffer);\nvoid\t\tbuf_add(buffer, size_t);\nvoid\t\tbuf_sub(buffer, size_t);\nvoid\t\tbuf_shift(buffer, size_t);\nsize_t\t\tbuf_used(buffer);\nsize_t\t\tbuf_avail(buffer);\nvoid *\t\tbuf_used_ptr(buffer);\nvoid *\t\tbuf_avail_ptr(buffer);\nstruct iovec\tbuf_used_iov(buffer);\nstruct iovec\tbuf_avail_iov(buffer);\nregion\t\tbuf_used_reg(buffer);\nregion\t\tbuf_avail_reg(buffer);\nint\t\tbuf_printf(buffer, const char *, ...);\n\n#endif /* !BUFFER_H */\n", "msg_date": "Sun, 12 Apr 1998 16:38:02 -0700", "msg_from": "Paul A Vixie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hackers-digest V1 #771 (safe/fast I/O)" }, { "msg_contents": "Here's a belated footnote to Paul Vixie's helpful posting of April 12:\n\n> Date: Sun, 12 Apr 1998 16:38:02 -0700\n> From: Paul A Vixie <[email protected]>\n> Sender: [email protected]\n> Precedence: bulk\n> \n> mmap() is cool since it avoids copying data between kernel and user address\n> spaces. However, mmap() is going to be either synchronous (\"won't return 'til\n> it has set up the page table stuff and maybe allocated backing store\") or not\n> (\"will return immediately but your process will silently block if you try to\n> access the address range before the back office work is done for the region\").\n> There is no callback facility and no way to poll for region readiness.\n...\n\nIn the case of FreeBSD, there is no callback facility, this is true,\nbut you can poll for region readiness via mincore().\n\n", "msg_date": "Wed, 22 Apr 1998 16:21:50 -0500 (CDT)", "msg_from": "Hal Snyder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: hackers-digest V1 #771 (safe/fast I/O)" }, { "msg_contents": "Hal Snyder wrote:\n> \n> Here's a belated footnote to Paul Vixie's helpful posting of April 12:\n> \n> > Date: Sun, 12 Apr 1998 16:38:02 -0700\n> > From: Paul A Vixie <[email protected]>\n> > Sender: [email protected]\n> > Precedence: bulk\n> > \n> > mmap() is cool since it avoids copying data between kernel and user address\n> > spaces. However, mmap() is going to be either synchronous (\"won't return 'til\n> > it has set up the page table stuff and maybe allocated backing store\") or not\n> > (\"will return immediately but your process will silently block if you try to\n> > access the address range before the back office work is done for the region\").\n> > There is no callback facility and no way to poll for region readiness.\n> ...\n> \n> In the case of FreeBSD, there is no callback facility, this is true,\n> but you can poll for region readiness via mincore().\n\nI don't believe mincore is universally implemented either.\n\nOcie\n", "msg_date": "Wed, 22 Apr 1998 14:32:51 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: hackers-digest V1 #771 (safe/fast I/O)" } ]
[ { "msg_contents": "golem$ cd doc\ngolem$ make install\nmake all\nmake[1]: Entering directory `/opt/postgres/pgsql/doc'\nrm -rf ./admin unpacked/admin\nif test ! -d unpacked/admin ; then mkdir unpacked/admin ; fi\nmkdir: cannot make directory `unpacked/admin': No such file or directory\nmake[1]: *** [admin] Error 1\nmake[1]: Leaving directory `/opt/postgres/pgsql/doc'\nmake: *** [install] Error 2\n\nWhat is \"unpacked\"? And why does the docs Makefile want it??\n\n - Tom\n", "msg_date": "Mon, 13 Apr 1998 04:12:13 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Broken makefile for docs" }, { "msg_contents": "On Mon, 13 Apr 1998, Thomas G. Lockhart wrote:\n\n> golem$ cd doc\n> golem$ make install\n> make all\n> make[1]: Entering directory `/opt/postgres/pgsql/doc'\n> rm -rf ./admin unpacked/admin\n> if test ! -d unpacked/admin ; then mkdir unpacked/admin ; fi\n> mkdir: cannot make directory `unpacked/admin': No such file or directory\n> make[1]: *** [admin] Error 1\n> make[1]: Leaving directory `/opt/postgres/pgsql/doc'\n> make: *** [install] Error 2\n> \n> What is \"unpacked\"? And why does the docs Makefile want it??\n\n> cvs diff -r1.5 -r1.6 Makefile\nIndex: Makefile\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/doc/Makefile,v\nretrieving revision 1.5\nretrieving revision 1.6\ndiff -r1.5 -r1.6\n11c11\n< # $Header: /usr/local/cvsroot/pgsql/doc/Makefile,v 1.5 1998/03/15\n07:37:51 scrappy Exp $\n---\n> # $Header: /usr/local/cvsroot/pgsql/doc/Makefile,v 1.6 1998/04/06\n01:35:16 momjian Exp $\n15c15\n< PGDOCS= /usr/local/cdrom/docs\n---\n> PGDOCS= unpacked\n\n\nChanges for DESTDIR/linux that Bruce committed, but what I had before that\nwould have broken on your machine also, I fear :)\n\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 13 Apr 1998 02:05:55 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Broken makefile for docs" }, { "msg_contents": "> \n> golem$ cd doc\n> golem$ make install\n> make all\n> make[1]: Entering directory `/opt/postgres/pgsql/doc'\n> rm -rf ./admin unpacked/admin\n> if test ! -d unpacked/admin ; then mkdir unpacked/admin ; fi\n> mkdir: cannot make directory `unpacked/admin': No such file or directory\n> make[1]: *** [admin] Error 1\n> make[1]: Leaving directory `/opt/postgres/pgsql/doc'\n> make: *** [install] Error 2\n> \n> What is \"unpacked\"? And why does the docs Makefile want it??\n\nOops, I installed that patch. Someone supplied a patch to put it in a\nsubdirectory called 'unpacked' rather than a hard-coded patch. I think\nit was the Linux Redhat guy. Guess he forgot to create the directory.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 13 Apr 1998 10:33:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Broken makefile for docs" }, { "msg_contents": "> > What is \"unpacked\"? And why does the docs Makefile want it??\n> < PGDOCS= /usr/local/cdrom/docs\n> ---\n> > PGDOCS= unpacked\n> \n> Changes for DESTDIR/linux that Bruce committed, but what I had before \n> that would have broken on your machine also, I fear :)\n\nWell, /usr/local/cdrom/docs doesn't quite work either :)\n\nWill revert to the original stuff unless there is a \"grand scheme\" for\nthis. We're now using POSTGRESDIR for the DESTDIR function?\n\n - Tom\n", "msg_date": "Mon, 13 Apr 1998 14:39:15 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Broken makefile for docs" }, { "msg_contents": "On Mon, 13 Apr 1998, Thomas G. Lockhart wrote:\n\n> > > What is \"unpacked\"? And why does the docs Makefile want it??\n> > < PGDOCS= /usr/local/cdrom/docs\n> > ---\n> > > PGDOCS= unpacked\n> > \n> > Changes for DESTDIR/linux that Bruce committed, but what I had before \n> > that would have broken on your machine also, I fear :)\n> \n> Well, /usr/local/cdrom/docs doesn't quite work either :)\n\n\tNope, that was just me working on the cd's :) I must have done a\ncommit without realizing the change...\n\n> Will revert to the original stuff unless there is a \"grand scheme\" for\n> this. We're now using POSTGRESDIR for the DESTDIR function?\n\n\tNo, I believe we got rid of the DESTDIR functionality altogether\nas it was unrequired...\n\n\n", "msg_date": "Mon, 13 Apr 1998 10:40:06 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Broken makefile for docs" } ]
[ { "msg_contents": "\nI'm still getting the following BTP_CHAIN errors on my btree index. Funny\nthing is that its the *same* index each time, and this is after dropping\nand rebulding it...\n\n...where next to investigate? Recommendations? IMHO, this is critical\nenough to hold off a v6.3.2 release :(\n\n\nFATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\nbt_read)\nFATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\nbt_read)\nFATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\nbt_read)\nFATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\nbt_read)\nFATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\nbt_read)\n\n\n", "msg_date": "Mon, 13 Apr 1998 12:00:39 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "still getting FATAL errors on btree's..." }, { "msg_contents": "> \n> \n> I'm still getting the following BTP_CHAIN errors on my btree index. Funny\n> thing is that its the *same* index each time, and this is after dropping\n> and rebulding it...\n> \n> ...where next to investigate? Recommendations? IMHO, this is critical\n> enough to hold off a v6.3.2 release :(\n\nObiously there is something strange going on, or many more people would\nbe seeing it. The question is what.\n\nCould it be the data? Concentrate on that table, load only half, and\nsee if it happens. Try loading first half of the file twice, to the\nfile is the same size, but the data is only from the first half. Try it\nwith the second half too. Does the problem change. If so, there is\nsomething in the data that is causing the problem.\n\nIs it something that we can repeat? Can you put it on the ftp server\nwith a script so others can check it? 
If you load just that table into\nan empty database, does the problem still occur?\n\n\n> \n> \n> FATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\n> bt_read)\n> FATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\n> bt_read)\n> FATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\n> bt_read)\n> FATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\n> bt_read)\n> FATAL 1: btree: BTP_CHAIN flag was expected in radhist_userid (access =\n> bt_read)\n> \n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 13 Apr 1998 12:31:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] still getting FATAL errors on btree's..." }, { "msg_contents": "On Mon, 13 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > I'm still getting the following BTP_CHAIN errors on my btree index. Funny\n> > thing is that its the *same* index each time, and this is after dropping\n> > and rebulding it...\n> > \n> > ...where next to investigate? Recommendations? IMHO, this is critical\n> > enough to hold off a v6.3.2 release :(\n> \n> Obiously there is something strange going on, or many more people would\n> be seeing it. The question is what.\n> \n> Could it be the data? Concentrate on that table, load only half, and\n> see if it happens. Try loading first half of the file twice, to the\n> file is the same size, but the data is only from the first half. Try it\n> with the second half too. Does the problem change. 
If so, there is\n> something in the data that is causing the problem.\n\n\tWell, I kinda figured it had something to do with the data, but\nnarrowing it down (500+k records) is something that isn't that easy :(\n\n\tI know its the radhist_userid index, which is indexed on one\nfield, userid...if there was some way of translating location in the index\nwith a record number...?\n\n\tOh well...will continue to investigate and use your ideas...\n\n\n", "msg_date": "Mon, 13 Apr 1998 13:01:19 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] still getting FATAL errors on btree's..." }, { "msg_contents": "On Mon, 13 Apr 1998, The Hermit Hacker wrote:\n\n> On Mon, 13 Apr 1998, Bruce Momjian wrote:\n> \n> > > \n> > > \n> > > I'm still getting the following BTP_CHAIN errors on my btree index. Funny\n> > > thing is that its the *same* index each time, and this is after dropping\n> > > and rebulding it...\n> > > \n> > > ...where next to investigate? Recommendations? IMHO, this is critical\n> > > enough to hold off a v6.3.2 release :(\n> > \n> > Obiously there is something strange going on, or many more people would\n> > be seeing it. The question is what.\n> > \n> > Could it be the data? Concentrate on that table, load only half, and\n> > see if it happens. Try loading first half of the file twice, to the\n> > file is the same size, but the data is only from the first half. Try it\n> > with the second half too. Does the problem change. 
If so, there is\n> > something in the data that is causing the problem.\n> \n> \tWell, I kinda figured it had something to do with the data, but\n> narrowing it down (500+k records) is something that isn't that easy :(\n> \n> \tI know its the radhist_userid index, which is indexed on one\n> field, userid...if there was some way of translating location in the index\n> with a record number...?\n> \n> \tOh well...will continue to investigate and use your ideas...\n\nThis is very quickly doing downhill ;(\n\nI took all entries in radhist newer then 01/01/98 and copied them into\nradnew, then deleted those entries (first bad move), then I did an 'alter\ntable' to move radhist to radhist_old, and another 'alter table' to move\nradnew back to radhist...\n\nTotally locked up postmaster, so I had to kill off the processes (second\nbad move)...\n\n ls -lt rad*\n-rw------- 1 postgres wheel 77144064 Apr 13 13:10 radhist_old\n-rw------- 1 postgres wheel 3842048 Apr 13 13:08 radlog\n-rw------- 1 postgres wheel 1073152 Apr 13 13:07 radlog_userid\n-rw------- 1 postgres wheel 1646592 Apr 13 13:07 radlog_uniq_id\n-rw------- 1 postgres wheel 999424 Apr 13 13:07 radlog_stop_time\n-rw------- 1 postgres wheel 1294336 Apr 13 13:07 radlog_start_time\n-rw------- 1 postgres wheel 36921344 Apr 13 12:55 radhist\n-rw------- 1 postgres wheel 6864896 Apr 6 10:14 radold\n\n\nNow, I can't access radhist, even though the database is there:\n\nacctng=> select * from radhist;\nERROR: radhist: Table does not exist.\n\nChecked the pg_class table, and radnew still existed, but radhist didn't,\nso did the following to \"fix\" it...\n\nupdate pg_class set relname = 'radhist' where relname = 'radnew';\n\nAny particular reason why that was a bad idea? I appears to have\nworked...\n\n\n\n", "msg_date": "Mon, 13 Apr 1998 13:20:42 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] still getting FATAL errors on btree's..." 
}, { "msg_contents": "> Checked the pg_class table, and radnew still existed, but radhist didn't,\n> so did the following to \"fix\" it...\n> \n> update pg_class set relname = 'radhist' where relname = 'radnew';\n> \n> Any particular reason why that was a bad idea? I appears to have\n> worked...\n\nI believe this is what alter table does.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 13 Apr 1998 14:17:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] still getting FATAL errors on btree's..." }, { "msg_contents": "On Mon, 13 Apr 1998, Bruce Momjian wrote:\n\n> > Checked the pg_class table, and radnew still existed, but radhist didn't,\n> > so did the following to \"fix\" it...\n> > \n> > update pg_class set relname = 'radhist' where relname = 'radnew';\n> > \n> > Any particular reason why that was a bad idea? I appears to have\n> > worked...\n> \n> I believe this is what alter table does.\n\n\tThat's what I think too...I was just worried that it might do\nsomething else on top of it all:(\n\n\n", "msg_date": "Mon, 13 Apr 1998 14:28:13 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] still getting FATAL errors on btree's..." }, { "msg_contents": "On Mon, 13 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > \n> > I'm still getting the following BTP_CHAIN errors on my btree index. Funny\n> > thing is that its the *same* index each time, and this is after dropping\n> > and rebulding it...\n> > \n> > ...where next to investigate? Recommendations? IMHO, this is critical\n> > enough to hold off a v6.3.2 release :(\n> \n> Obiously there is something strange going on, or many more people would\n> be seeing it. 
The question is what.\n\nI have seen the same message on a 6.2 system. However, after I had \ndropped and rebuilt the indices, the porblem disappaered completely and I \nhaven't seen it since.\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Tue, 14 Apr 1998 10:42:00 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] still getting FATAL errors on btree's..." } ]
[ { "msg_contents": "[Please forgive me for the way this post is put together; I'm not\nactually on your mailing-list, but was just perusing the archives.]\n\nMichal Mosiewicz <[email protected]> writes:\n\n> The main reason of using memory mapping is that you don't have to create\n> unnecessary buffers. Normally, for every operation you have to create\n> some in-memory buffer, copy the data there, do some operations, put the\n> data back into file. In case of memory mapping you may avoid of creating\n> of unnecessary buffers, and moreover you may call your system functions\n> less frequently. There are also additional savings. (Less memory\n> copying, reusing memory if several processes map the same file)\n\nAdditionally, if your operating system is at all reasonable, using\nmemory mapping allows you to take advantage of all the work that has\ngone into tuning your VM system. If you map a large file, and then\naccess in some way that shows reasonable locality, the VM system will\nprobably be able to do a better job of page replacement on a\nsystem-wide basis than you could do with a cache built into your\napplication. (A good system will also provide other benefits, such as\npre-faulting and extended read ahead.)\n\nOf course, it does have one significant drawback: memory-mapped regions\ndo not automatically extend when their underlying files do. So, for\ninteracting with a structure that shows effectively linear access and\ngrowth, asynchronous I/O is more likely to be a benefit, since AIO can\nextend a file asynchronously, whereas other mechanisms will block\nwhile the file is being extended. (Depending on the system, this may\nnot be true for multi-threaded programs.)\n\n-GAWollman\n\n--\nGarrett A.
Wollman | O Siem / We are all family / O Siem / We're all the same\[email protected] | O Siem / The fires of freedom \nOpinions not those of| Dance in the burning flame\nMIT, LCS, CRS, or NSA| - Susan Aglukark and Chad Irschick\n", "msg_date": "Mon, 13 Apr 1998 12:26:59 -0400 (EDT)", "msg_from": "Garrett Wollman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "While having some spare two hours I was just looking at the current code\nof postgres. I was trying to estimate how would it fit to the current\npostgres guts.\n\nFinally I've found more proofs that memory mapping would do a lot to\ncurrent performance, but I must admit that current storage manager is\npretty read/write oriented. It would be easier to integrate memory\nmapping into buffer manager. Actually buffer manager role is to map some\nparts of files into memory buffers. However it takes a lot to get\nthrough several layers (smgr and finally md). \n\nI noticed that one of the very important features of mmaping is that you\ncan sync the buffer (even some part of it), not the whole file. So if\nthere would be some kind of page level locking, it would be absolutly\nnecessary to make sure that only committed pages are synced and we don't\noverload the IO with unfinished things.\n\nAlso, I think that there is no need to create buffers in shared memory.\nI have just tested that if you map files with MAP_SHARED attribute set,\nthen each proces is working on exactly the same copy of memory. \n\nI have also noticed more interesting things, maybe somebody would\nclarify on that since I'm not so literate with mmaping. First thing I\nwas wondering about was how would we deal with open descriptor limits if\nwe use direct buffer-to-file mappings. While currently buffers are\nisolated from files it's possible to close some descriptors without\nthrowing buffers. However it seems (tried it) that memory mapping works\neven after a file descriptor is closed. 
So, is this possible to cross\nthe limit of open files by using memory mapping? Or maybe the descriptor\nremains open until munmap call? Or maybe it's just a Linux feature?\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Wed, 15 Apr 1998 02:10:50 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Memory mapping (Was: Safe/Fast I/O ...)" }, { "msg_contents": "Michal Mosiewicz wrote:\n> \n> While having some spare two hours I was just looking at the current code\n> of postgres. I was trying to estimate how would it fit to the current\n> postgres guts.\n> \n> Finally I've found more proofs that memory mapping would do a lot to\n> current performance, but I must admit that current storage manager is\n> pretty read/write oriented. It would be easier to integrate memory\n> mapping into buffer manager. Actually buffer manager role is to map some\n> parts of files into memory buffers. However it takes a lot to get\n> through several layers (smgr and finally md). \n> \n> I noticed that one of the very important features of mmaping is that you\n> can sync the buffer (even some part of it), not the whole file. So if\n> there would be some kind of page level locking, it would be absolutly\n> necessary to make sure that only committed pages are synced and we don't\n> overload the IO with unfinished things.\n> \n> Also, I think that there is no need to create buffers in shared memory.\n> I have just tested that if you map files with MAP_SHARED attribute set,\n> then each proces is working on exactly the same copy of memory. 
\n\nThis means that the processes can share the memory, but these pages\nmust be explicitly mapped in the other process before it can get to\nthem and must be explicitly unmapped from all processes before the\nmemory is freed up.\n\nIt seems like there are basically two ways we could use this.\n\n1) mmap in all files that might be used and just access them directly.\n\n2) mmap in pages from files as they are needed and munmap the pages\nout when they are no longer needed.\n\n#1 seems easier, but it does limit us to 2gb databases on 32 bit\nmachines. \n\n#2 could be done by having a sort of mmap helper. As soon as process\nA knows that it will need (might need?) a given page from a given\nfile, it communicates this to another process B, which attempts to\ncreate a shared mmap for that page. When process A actually needs to\nuse the page, it uses the real mmap, which should be fast if process B\nhas already mapped this page into memory. \n\nOther processes could make use of this mapping (following proper\nlocking etiquette), each making their request to B, which simply\nincrements a counter on that mapping for each request after the first\none. When a process is done with one of these mappings, it unmaps the\npage itself, and then tells B that it is done with the page. When B\nsees that the count on this page has gone to zero, it can either\nremove its own map, or retain it in some sort of cache in case it is\nrequested again in the near future.
Either way, when B figures the\npage is no longer being used, it unmaps the page itself.\n\nThis mapping might get synced by the OS at unknown intervals, but\nprocesses can sync the pages themselves, say at the end of a\ntransaction.\n\nOcie\n", "msg_date": "Tue, 14 Apr 1998 18:43:23 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Memory mapping (Was: Safe/Fast I/O ...)" }, { "msg_contents": "On Wed, 15 Apr 1998, Michal Mosiewicz wrote:\n> isolated from files it's possible to close some descriptors without\n> throwing buffers. However it seems (tried it) that memory mapping works\n> even after a file descriptor is closed. So, is this possible to cross\n> the limit of open files by using memory mapping? Or maybe the descriptor\n> remains open until munmap call? Or maybe it's just a Linux feature?\n\nNope, thats how it works.\n\nA good friend of mine used this in some modifications to INN (probably in\nINN -current right now). \n\nSending an article involved opening the file, mmapping it, closing the fd,\nwriting the mapped area and munmap-ing.\n\nIts pretty slick.\n\nBe careful of the file changing under you.\n\n/* \n Matthew N. Dodd\t\t| A memory retaining a love you had for life\t\n [email protected]\t\t| As cruel as it seems nothing ever seems to\n http://www.jurai.net/~winter | go right - FLA M 3.1:53\t\n*/\n\n", "msg_date": "Tue, 14 Apr 1998 22:13:54 -0400 (EDT)", "msg_from": "\"Matthew N. Dodd\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Memory mapping (Was: Safe/Fast I/O ...)" }, { "msg_contents": "> \n> While having some spare two hours I was just looking at the current code\n> of postgres. I was trying to estimate how would it fit to the current\n> postgres guts.\n> \n> Finally I've found more proofs that memory mapping would do a lot to\n> current performance, but I must admit that current storage manager is\n> pretty read/write oriented. 
It would be easier to integrate memory\n> mapping into buffer manager. Actually buffer manager role is to map some\n> parts of files into memory buffers. However it takes a lot to get\n> through several layers (smgr and finally md). \n> \n> I noticed that one of the very important features of mmaping is that you\n> can sync the buffer (even some part of it), not the whole file. So if\n> there would be some kind of page level locking, it would be absolutly\n> necessary to make sure that only committed pages are synced and we don't\n> overload the IO with unfinished things.\n\nWe really don't need to worry about it. Our goal it to control flushing\nof pg_log to disk. If we control that, we don't care if the non-pg_log\npages go to disk. In a crash, any non-synced pg_log transactions are\nrolled-back.\n\nWe are spoiled because we have just one compact central file to worry\nabout sync-ing.\n\n> \n> Also, I think that there is no need to create buffers in shared memory.\n> I have just tested that if you map files with MAP_SHARED attribute set,\n> then each proces is working on exactly the same copy of memory. \n> \n> I have also noticed more interesting things, maybe somebody would\n> clarify on that since I'm not so literate with mmaping. First thing I\n> was wondering about was how would we deal with open descriptor limits if\n> we use direct buffer-to-file mappings. While currently buffers are\n> isolated from files it's possible to close some descriptors without\n> throwing buffers. However it seems (tried it) that memory mapping works\n> even after a file descriptor is closed. So, is this possible to cross\n> the limit of open files by using memory mapping? Or maybe the descriptor\n> remains open until munmap call? Or maybe it's just a Linux feature?\n\nNot sure about this, but the open file limit is not a restriction for us\nvery often, it is. It is a per-backend issue, and I can't imagine cases\nwhere a backend has more than 64 file descriptors open. 
If so, you can\nincrease the kernel limits, usually.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 14 Apr 1998 22:24:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Memory mapping (Was: Safe/Fast I/O ...)" } ]
[ { "msg_contents": "\nChecking mail2news gateway...\n\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 13 Apr 1998 17:07:38 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Quick Test..." } ]
[ { "msg_contents": "\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 13 Apr 1998 17:49:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Damn...ignore this..." } ]
[ { "msg_contents": "unsubscribe\n\n", "msg_date": "Tue, 14 Apr 1998 09:42:11 +0900", "msg_from": "Yu HyungSic <[email protected]>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "> I'm sure you've probably thought about this (and discarded it for\n> a variety of reasons), but here it goes anyway:\n> \n> Have some daemon (\"postmaster\" itself, perhaps) service requests for\n> pages asynchronously. For example, have each \"postgres\" client request\n> the pages from \"postmaster\" and continue running; \"postmaster\" could\n> service the request, map the resulting pages into the shared segment, and\n> notify the appropriate \"postgres\" client (either through the socket\n> or some other way) asynchronously. The \"postgres\" client could then\n> be checking the socket with a non-blocking select whenever it wanted to.\n> (Or you could notify the client with a signal telling it that the request\n> has been serviced and it can go and get the pages.)\n> \n> It's just a suggestion. Good luck and congratulations for the excellent\n> work you've done with postgresql.\n> \n> -Ted.\n> \n> p.s. I've got a table that's 77 MB (340,000 records), with six indexes on\n> it totaling another 66 MB. It does pretty well, actually.\n\nAbove is a very good review of a platform-independent way for us to do\nasync read-aheads, particularly for heap reads from indexes. I think it\ndeserves consideration as a good way to perform what we need done.\n\nShared memory certainly gives us a way to communicate between these\nprocesses.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 13 Apr 1998 22:40:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Async I/O" } ]
[ { "msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\nYour name\t\t:\tJose' Soares Da Silva\nYour email address\t:\[email protected] \n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Intel Pentium\n\n Operating System (example: Linux 2.0.26 ELF) \t: Linux 2.0.31 Elf\n\n PostgreSQL version (example: PostgreSQL-6.1) : PostgreSQL-snapshot april 6, 1998\n\n Compiler used (example: gcc 2.7.2)\t\t: gcc 2.7.2.1\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nThere are some bugs on INTERVALs...\n...the keywords YEAR,MONTH,DAY,HOUR, MINUTE and SECOND must be specified\noutside quotes not inside.\n\n/*\nINTERVAL year-month:\n written as the keyword INTERVAL, followed by a (year-month) interval string\n consisting of an opening single quote, an optional sign, either or both\n yyyy and mm (with a hyphen separator if both are specified), and closing\n single quote, followed by YEAR, MONTH or YEAR TO MONTH (as applicable).\n examples:\n INTERVAL '-1' YEAR;\n INTERVAL '2-6' YEAR TO MONTH;\n\n*/\npostgres=> SELECT INTERVAL '2-6' YEAR TO MONTH; <-- year to month outside '' ??\nERROR: parser: parse error at or near \"year\"\n\npostgres=> SELECT INTERVAL '2-6 YEAR TO MONTH';\nERROR: Bad timespan external representation '2-6 YEAR TO MONTH'\n\n/*\nINTERVAL day-time:\n written as the keyword INTERVAL, followed by a (day-time) interval string\n consisting of an opening single quote, an optional sign, a contiguous\n nonempty subsequence of dd, hh, mm and ss[.[nnnnnn]] (with a space\n separator between dd and the rest, if dd is specified, and colon separators\n elsewhere), and a closing single quote, followed by the appropriate\n \"start [TO end]\" specification.\n examples:\n INTERVAL '2 12' DAY TO HOUR;\n INTERVAL '-4.50' SECOND;\n*/\npostgres=> SELECT INTERVAL '2 12 DAY TO HOUR' AS two_days_12_hrs;\ntwo_days_12_hrs\n---------------\n@ 14 days <--- this should be 2 days and 12 hours !!\n(1 row)\n\npostgres=> SELECT INTERVAL '-4.50 SECOND' AS four_sec_half_ago;\nERROR: Bad timespan external representation '-4.50 SECOND' \n ^^^^ decimal point ??\n\npostgres=> SELECT INTERVAL '-4 SECOND' AS four_sec_half_ago;\nfour_sec_half_ag ^^^ without decimal point it's ok.\n-----------------\n@ 4 secs ago\n(1 row)\n \n--arithmetic:\n\npostgres=> SELECT INTERVAL '3 hour' / INTERVAL '1 hour';\n?column?\n--------\n@ 3 secs <---- why 3 secs ? It should be 3 hour !!\n(1 row)\n\npostgres=> SELECT INTERVAL '4 hour' * INTERVAL '3 hour';\nERROR: There is no operator '*' for types 'timespan' and 'timespan'\n You will either have to retype this query using an explicit cast,\n or you will have to define the operator using CREATE OPERATOR\n\npostgres=> SELECT INTERVAL '4 hour' * 3;\nERROR: There is no operator '*' for types 'timespan' and 'int4'\n\tYou will either have to retype this query using an explicit cast,\n\tor you will have to define the operator using CREATE OPERATOR\n\npostgres=> SELECT INTERVAL '4 hour' / 2;\nERROR: There is no operator '/' for types 'timespan' and 'int4'\n\tYou will either have to retype this query using an explicit cast,\n\tor you will have to define the operator using CREATE OPERATOR\n\npostgres=> SELECT DATE '1998-07-31' + INTERVAL '1 MONTH';\nERROR: There is no operator '+' for types 'date' and 'timespan'\n\tYou will either have to retype this query using an explicit cast,\n\tor you will have to define the operator using CREATE OPERATOR\n\npostgres=> SELECT CURRENT_TIME + INTERVAL '1 HOUR';\nERROR: There is no operator '+' for types 'time' and 'timespan'\n\tYou will either have to retype this query using an explicit cast,\n\tor you will have to define the operator using CREATE OPERATOR\n\npostgres=> SELECT CURRENT_TIMESTAMP + INTERVAL '1 DAY';\nERROR: There is no operator '+' 
for types 'timestamp' and 'timespan'\n\tYou will either have to retype this query using an explicit cast,\n\tor you will have to define the operator using CREATE OPERATOR\n\npostgres=> SELECT CURRENT_TIME - TIME '12:34';\nERROR: There is no operator '-' for types 'time' and 'time'\n\tYou will either have to retype this query using an explicit cast,\n\tor you will have to define the operator using CREATE OPERATOR\n\nCREATE TABLE inter (\n\tinter1\tINTERVAL YEAR,\n\tinter2\tINTERVAL YEAR TO MONTH,\n\tinter3\tINTERVAL MONTH,\n\tinter4\tINTERVAL DAY,\n\tinter5\tINTERVAL HOUR TO MINUTE,\n\tinter6\tINTERVAL MINUTE TO SECOND, <---error on this one.\nERROR: parser: parse error at or near \"to\"\n\tinter7\tINTERVAL DAY (3) TO SECOND (3) <---error on this one.\n);\nERROR: parser: parse error at or near \"(\"\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n\n??\n\n", "msg_date": "Tue, 14 Apr 1998 10:13:07 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "INTERVALs" }, { "msg_contents": "> There are some bugs on INTERVALs...\n> ...the keywords YEAR,MONTH,DAY,HOUR, MINUTE and SECOND must be \n> specified outside quotes not inside.\n\nThe full syntax is supported for column declarations, but is not\nsupported for data entry. The problem is that this is the _only_ type\nwhere the type specification appears on _both_ sides of the value! This\nmakes for really ugly syntax, and appears that it might vastly\ncomplicate the parser. 
I'll admit I haven't spent much time looking at\nthat part of it though.\n\n> postgres=> SELECT INTERVAL '2 12 DAY TO HOUR' AS two_days_12_hrs;\n> two_days_12_hrs\n> ---------------\n> @ 14 days <--- this should be 2 days and 12 hours !!\n> (1 row)\n\nThe range is not really supposed to go inside the quotes, but for\nhistorical reasons the parser is too forgiving and ignores things it\ndoesn't understand. That was for backward compatibility with v1.09/v6.0,\nand perhaps we can tighten it up now...\n\n> postgres=> SELECT INTERVAL '-4.50 SECOND' AS four_sec_half_ago;\n> ERROR: Bad timespan external representation '-4.50 SECOND'\n> ^^^^ decimal point ??\n\nThanks. I'll look at it. '4.5 secs ago' does work at the moment.\n\n> --arithmetic:\n> \n> postgres=> SELECT INTERVAL '3 hour' / INTERVAL '1 hour';\n> ?column?\n> --------\n> @ 3 secs <---- why 3 secs ? It should be 3 hour !!\n> (1 row)\n\nNo, it should be \"3\" (no units). I was probably trying to do the right\nthing with the \"qualitative units\" of year and month by keeping them\nseparate; instead, this should assume 30 days/month for the math and\nreturn a double. Will look at it.\n\n> postgres=> SELECT INTERVAL '4 hour' * INTERVAL '3 hour';\n> ERROR: There is no operator '*' for types 'timespan' and 'timespan'\n\nGood. This would make no sense.\n\n> postgres=> SELECT INTERVAL '4 hour' * 3;\n> ERROR: There is no operator '*' for types 'timespan' and 'int4'\n\nBad. This could make sense. Will put it on my list, and it may be helped\nby my current project on type conversions.\n\n> postgres=> SELECT INTERVAL '4 hour' / 2;\n> ERROR: There is no operator '/' for types 'timespan' and 'int4'\n\nDitto.\n\n> postgres=> SELECT DATE '1998-07-31' + INTERVAL '1 MONTH';\n> ERROR: There is no operator '+' for types 'date' and 'timespan'\n\nThis works for DATETIME and INTERVAL. 
I'm currently working on the\nautomatic type conversion stuff, and it may help with this.\n\n> postgres=> SELECT CURRENT_TIME + INTERVAL '1 HOUR';\n> ERROR: There is no operator '+' for types 'time' and 'timespan'\n\nHmm. But TIME is restricted to 0-23:59:59. I would have thought that\nsafe time arithmetic would need DATETIME or INTERVAL to allow overflow.\nDo other systems implement this?\n\n> postgres=> SELECT CURRENT_TIMESTAMP + INTERVAL '1 DAY';\n> ERROR: There is no operator '+' for types 'timestamp' and 'timespan'\n\nTIMESTAMP does not have as many operators as DATETIME. They should\nmerge, unless there is a requirement that TIMESTAMP implement _all_ of\nSQL92. That would damage it so much that we should leave DATETIME as the\nmore useful data type :(\n\n> postgres=> SELECT CURRENT_TIME - TIME '12:34';\n> ERROR: There is no operator '-' for types 'time' and 'time'\n\nAddition/subtraction on two absolute TIME fields does not make sense.\nINTERVAL (or TIMESPAN) makes sense here as the second field. See above\ncomments on TIMESTAMP.\n\n> CREATE TABLE inter (\n> inter1 INTERVAL YEAR,\n> inter2 INTERVAL YEAR TO MONTH,\n> inter3 INTERVAL MONTH,\n> inter4 INTERVAL DAY,\n> inter5 INTERVAL HOUR TO MINUTE,\n> inter6 INTERVAL MINUTE TO SECOND, <---error on this one.\n> ERROR: parser: parse error at or near \"to\"\n\nOmission. Will fix.\n\n> inter7 INTERVAL DAY (3) TO SECOND (3) <---error on this one.\n> );\n> ERROR: parser: parse error at or near \"(\"\n\nYup.\n\n> ??\n\nA fundamental problem is that SQL92 has _really bad_ date/time datatypes\nand features. We're trying to do better than that by having datatypes\nwhich are more self consistant and capable. 
This may be a little\npresumptuous, but what the heck :) However, I know that there is\ninterest in having full SQL92 compatibility, which is why TIMESTAMP has\nnot become as capable as DATETIME; I'm reluctant to merge them and then\nbe forced to damage the capabilities for compatibility reasons.\n\n - Tom\n", "msg_date": "Tue, 14 Apr 1998 15:29:20 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] INTERVALs" } ]
[ { "msg_contents": "On Tue, 14 Apr 1998, nicolas Gillot wrote:\n\n> Does anybody know if postgreSQL can run under Windows NT4 ?\n> The answer seems to be no, so which are the known bugs for this ?\n\n\tIt doesn't compile?\n\n> And when do you think, the problem will be solved ?\n\n\tWhen someone decides to work on it? :)\n\n\n", "msg_date": "Tue, 14 Apr 1998 09:56:08 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postgreSQL on Windows NT4" }, { "msg_contents": "Does anybody know if postgreSQL can run under Windows NT4 ?\nThe answer seems to be no, so which are the known bugs for this ?\n\nAnd when do you think, the problem will be solved ?\n\nthanks,\n\n--\nNicolas Gillot --------------------------------------------------\nIUP | Ingenierie des Systemes Informatiques\nUniversite Paul Sabatier | TOULOUSE III\nStagiaire a l'IRIT | [email protected]\n-----------------------------------------------------------------\n\n\n", "msg_date": "Tue, 14 Apr 1998 15:46:04 +0100", "msg_from": "nicolas Gillot <[email protected]>", "msg_from_op": false, "msg_subject": "postgreSQL on Windows NT4" } ]
[ { "msg_contents": "I have noticed that when float types are divided by zero in a query, the\nthe query aborts (via elog(WARN)) with a complaint about divide by zero.\n\nAlso an integer divide by zero produces a result. On our AIX 4.1.4\nsystem 1 / 0 = 15. And 10 / 0 = 31. There is some pattern here with\nintegers, but it is of little use.\n\nI have two assertions that I would like to make.\n\n1. The result of these numeric division queries should be consistent.\nIf one aborts, then they probably should both abort.\n\n2. I don't think that division by zero should abort.\n\nThis problem was brought to my attention by a user the was computing\n\"Percent Profit\". Profit / Net = %Profit. It is considered\nreasonable, in sales circles, to offer a free line item on an invoice.\nThus, the calculation becomes (Profit / 0).\n\nI am suggesting that something be returned on a divide by zero.\nPossible return values for float types include NULL and INFINITY. An\nelog(NOTICE) may also be sent. Of the two possibilities NULL would be\nrelativity easy. Simply detect the offending division, send a NOTICE,\nand return null. INFINITY, on the other hand, would be a bit more\ntricky. This may involve some platform porting issues. Plus INFINITY\nwould have to be handled by each function that processes float numbers.\n\nInteger type functions, however, appear not to be capable of returning\nanything other than a legal integer. They are passed by value. I can\nonly come up with one possibility. That would be to reserve one of the\nboundary values, such as MAX_INT, to represent INFINITY (or NULL for\nthat matter) and handle the max value in each integer function. I\nwould think, though, that on a detected divide by zero there should at\nleast be an elog(WARN).\n\nI must resolve the problem at my site. 
And I would like to contribute\nthese change, assuming they are acceptable to the other hackers.\n\nSuggestions?", "msg_date": "Tue, 14 Apr 1998 12:09:24 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": true, "msg_subject": "Division by Zero" }, { "msg_contents": "> \n> This is a multi-part message in MIME format.\n> --------------0BE30DC54DE04265764E3F7C\n> Content-Type: text/plain; charset=us-ascii\n> Content-Transfer-Encoding: 7bit\n> \n> I have noticed that when float types are divided by zero in a query, the\n> the query aborts (via elog(WARN)) with a complaint about divide by zero.\n> \n> Also an integer divide by zero produces a result. On our AIX 4.1.4\n> system 1 / 0 = 15. And 10 / 0 = 31. There is some pattern here with\n> integers, but it is of little use.\n\nOn BSDI:\n\ntest=> select 1/0;\nERROR: floating point exception! The last floating point operation\neither exceeded legal ranges or was a divide by zero\n\nI think a transaction abort is the only normal solution.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 14 Apr 1998 13:04:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Division by Zero" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > This is a multi-part message in MIME format.\n> > --------------0BE30DC54DE04265764E3F7C\n> > Content-Type: text/plain; charset=us-ascii\n> > Content-Transfer-Encoding: 7bit\n> >\n> > I have noticed that when float types are divided by zero in a query, the\n> > the query aborts (via elog(WARN)) with a complaint about divide by zero.\n> >\n> > Also an integer divide by zero produces a result. On our AIX 4.1.4\n> > system 1 / 0 = 15. And 10 / 0 = 31. 
There is some pattern here with\n> > integers, but it is of little use.\n> \n> On BSDI:\n> \n> test=> select 1/0;\n> ERROR: floating point exception! The last floating point operation\n> either exceeded legal ranges or was a divide by zero\n> \n> I think a transaction abort is the only normal solution.\n\nI get the same behavior on my Linux box, so at least we have consistant\nbehavior across some platforms! David, if you want to find out what it\ntakes to change the floating point exception handling to allow\ndivide-by-zero and to have integer overflows caught be an exception\nhandler, then we can discuss what the default behavior should be.\n\nIf it is a simple matter of throwing an exception and catching it, then\nperhaps we can make it a compile-time or run-time option. With IEEE\narithmetic, infinity results for floats are possible. I don't really\nlike uncaught integer overflows which is what we have now...\n\ntgl=> select 2000000000*2;\n----------\n-294967296\n(1 row)\n\nDon't know where else integer overflows might be used in the backend, so\nwe would have to do extensive testing.\n\n - Tom\n", "msg_date": "Wed, 15 Apr 1998 01:08:52 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Division by Zero" }, { "msg_contents": "Thomas G. Lockhart wrote:\n\n> Bruce Momjian wrote:\n> >\n> > >\n> > > This is a multi-part message in MIME format.\n> > > --------------0BE30DC54DE04265764E3F7C\n> > > Content-Type: text/plain; charset=us-ascii\n> > > Content-Transfer-Encoding: 7bit\n> > >\n> > > I have noticed that when float types are divided by zero in a query, the\n> > > the query aborts (via elog(WARN)) with a complaint about divide by zero.\n> > >\n> > > Also an integer divide by zero produces a result. On our AIX 4.1.4\n> > > system 1 / 0 = 15. And 10 / 0 = 31. 
There is some pattern here with\n> > > integers, but it is of little use.\n> >\n> > On BSDI:\n> >\n> > test=> select 1/0;\n> > ERROR: floating point exception! The last floating point operation\n> > either exceeded legal ranges or was a divide by zero\n> >\n> > I think a transaction abort is the only normal solution.\n>\n> I get the same behavior on my Linux box, so at least we have consistant\n> behavior across some platforms! David, if you want to find out what it\n> takes to change the floating point exception handling to allow\n> divide-by-zero and to have integer overflows caught be an exception\n> handler, then we can discuss what the default behavior should be.\n>\n\nI have since, discovered the that our compiler does not trap divide by zero\nunless we provide an extra compile option. Rats. I did not realize that such\nan option even existed. I have not recompiled the backend with the options\nturned on, but, I suspect this explains why Bruce gets the exception and I/we\ndon't.\n\n> If it is a simple matter of throwing an exception and catching it, then\n> perhaps we can make it a compile-time or run-time option. With IEEE\n> arithmetic, infinity results for floats are possible. I don't really\n> like uncaught integer overflows which is what we have now...\n>\n> tgl=> select 2000000000*2;\n> ----------\n> -294967296\n> (1 row)\n>\n> Don't know where else integer overflows might be used in the backend, so\n> we would have to do extensive testing.\n>\n\nI don't know if the SQL standard addresses division by zero or not. Nor, am\nnot sure what normal behavior is in this instance. From MS Access, SQL Server\nreturns NULL in the offending column of the result; From the monitor, Personal\nOracle throws an exception with no result. I'm no fan of MS, but I am partial\nto their solution to this problem. (Because it solves my problem.)\n\nMost fortunately, PostgreSQL allows me to rewrite these division function for my\nown solution. PostgreSQL is good. 
Now, if I can only get the NULL return value\nto propagate to the result set.\n\nThis issue is obviously larger than my larger than my (float / 0.0) problem.\nAt a minimum though, I must provide a solution for my site. I wish to make\nmy solution available. However, I am new to this list, and as such will adhere\nto the advice of its core activist.", "msg_date": "Wed, 15 Apr 1998 10:18:32 -0400", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Division by Zero" }, { "msg_contents": "> > > > ...float types are divided by zero in a query, the\n> > > > the query aborts (via elog(WARN))...\n> > > > Also an integer divide by zero produces a result...\n> > > I think a transaction abort is the only normal solution.\n> > ... if you want to find out what it\n> > takes to change the floating point exception handling to allow\n> > divide-by-zero and to have integer overflows caught be an exception\n> > handler, then we can discuss what the default behavior should be.\n> ... our compiler does not trap divide by zero\n> unless we provide an extra compile option. I did not realize that such\n> an option even existed.\n\nYes, this is typical.\n\n> > If it is a simple matter of throwing an exception and catching it,\n> > then we can make it a compile-time or run-time option. With IEEE\n> > arithmetic, infinity results for floats are possible. I don't really\n> > like uncaught integer overflows which is what we have now...\n> > Don't know where else integer overflows might be used in the \n> > backend, so we would have to do extensive testing.\n> I don't know if the SQL standard addresses division by zero or not. \n> From MS Access, SQL Server\n> returns NULL in the offending column of the result; ... Personal\n> Oracle throws an exception with no result. I'm no fan of MS, but I am \n> partial to their solution to this problem. 
(Because it solves my \n> problem.)\n> Most fortunately, PostgreSQL allows me to rewrite these division \n> function for my own solution. PostgreSQL is good. Now, if I can only \n> get the NULL return value to propagate to the result set.\n\nThat has been an outstanding issue for a long time; the claim is that it\nshould be fairly easy to do since some hooks for this are already in the\nbackend. Look in the archives for some hints on where to look which were\nposted a month or two ago.\n\nThe MS Access solution is bad in general, Oracle's is better. As you\npoint out, you can modify the behavior of the divide operator in your\ninstallation by replacing the appropriate function with your own.\n\nNULL is not the same as infinity; it means \"unspecified\" or \"don't\nknow\". We shouldn't hide divide-by-zero in NULL returns.\n\n> This issue is obviously larger than my (float / 0.0) problem.\n> At a minimum though, I must provide a solution for my site. I wish to \n> make my solution available. However, I am new to this list, and as \n> such will adhere to the advice of its core activist.\n\nWell, you might get differing opinions, but...\n\nThere are three issues:\n\n1) Allowing functions to return NULL would be very nice, though not for\ndefault behavior of divide-by-zero.\n\n2) Throwing an error on an integer divide-by-zero on every platform\nshould be the default behavior. There are a few (well, at least one :)\nactive participants in Postgres development running on AIX; perhaps you\nshould work together on the right combination of compiler flags for all\nversions of AIX (they have big library variations, don't know about the\ncompiler).\n\n3) Allowing \"Inf\" results for floating point divide-by-zero could be an\ninstallation option. SQL does not take advantage of all features of IEEE\narithmetic. However, note that a few of our supported platforms do not\nuse IEEE arithmetic (e.g. 
VAX), so we should have this as an option\nonly.\n\nHave fun with it...\n\n - Tom\n", "msg_date": "Thu, 16 Apr 1998 13:19:24 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Division by Zero" } ]
[ { "msg_contents": "> Date: Mon, 13 Apr 1998 12:26:59 -0400 (EDT)\n> From: Garrett Wollman <[email protected]>\n> Subject: Re: [HACKERS] Safe/Fast I/O ... \n> \n> [Please forgive me for the way this post is put together; I'm not\n> actually on your mailing-list, but was just perusing the archives.]\n> if your operating system is at all reasonable, using\n> memory mapping allows you to take advantage of all the work that has\n> gone into tuning your VM system. If you map a large file, and then\n> access in some way that shows reasonable locality, the VM system will\n> probably be able to do a better job of page replacement on a\n> system-wide basis than you could do with a cache built into your\n> application.\n\nnot necessarily. in this case, the application (the database) has\nseveral very different page access patterns, some of which (e.g.,\nnon-boustrophedonic nested-loops join, index leaf accesses) *do not*\nexhibit reasonable locality and therefore benefit from the ability to\nturn on hate-hints or MRU paging on a selective basis. database query\nprocessing is one of the classic examples why \"one size does not fit\nall\" when it comes to page replacement -- no amount of \"tuning\" of an\nLRU/clock algorithm will help if the access pattern is wrong.\n\nstonebraker's 20-year-old CACM flame on operating system services for\ndatabases has motivated a lot of work, e.g., microkernel external\npagers and the more recent work at stanford and princeton on\napplication-specific paging, but many older VM systems still don't\nhave a working madvise().. meaning that a *portable* database still\nhas to implement its own buffer cache if it wants to exploit its\napplication-specific paging behavior.\n--\n Paul M. Aoki | University of California at Berkeley\n [email protected] | Dept. of EECS, Computer Science Division #1776\n | Berkeley, CA 94720-1776\n", "msg_date": "Tue, 14 Apr 98 15:25:47 -0700", "msg_from": "[email protected] (Paul M. Aoki)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." } ]
[ { "msg_contents": "\nOh whoa is me...is that how you spell whoa in this case? *raised eyebrow*\n\nOh well, irrelevant...did some deeper looking inot my local problem, and\nit turns out I had a postgres process that was just growing and growing.\nI'm suspecting that that is what caused the problem :(\n\nI've changed the code so that instead of holding the postgres process open\nfor the duration of the process (radiusd), just open/close it as required,\nto see if it gets rid of the problem...\n\nI should know sometime tomorrow whether this cures it or not ... so, if\nnobody else has any really big outstanding issues, Friday targetting for\nv6.3.2 to go out?\n\n\n\n", "msg_date": "Wed, 15 Apr 1998 12:08:46 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "My problems..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> Oh whoa is me...is that how you spell whoa in this case? *raised eyebrow*\n\nThe proper exclamation is \"Woe is me!\".\n\nWe now return you to your regularly scheduled program... :-)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "15 Apr 1998 19:43:59 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My problems..." }, { "msg_contents": "On Wed, 15 Apr 1998, The Hermit Hacker wrote:\n\n> I should know sometime tomorrow whether this cures it or not ... 
so, if\n> nobody else has any really big outstanding issues, Friday targetting for\n> v6.3.2 to go out?\n\nI have another JDBC patch which I'm finishing off tonight.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Wed, 15 Apr 1998 19:16:05 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My problems..." }, { "msg_contents": "Marc - sorry to bother you - but: were any changes made to ecpg1.1\nin the forthcoming 6.3.2?\n\nI'm asking as, other than some odd behaviour with calls to gets(),\necpg on 6.3 ran fine but although 6.3.1 compiled w/out complaint,\ntests went fine, etc, ecpg failed to preprocess. I would get a\nseg-fault and a zero'd out .c file...I was able to replicate this\non a 2nd box. :-(\n\nI cannot articulate why ecpg is misbehaving, only that I am hoping\nfor a fix! It is my principal interface...\n\nThanks alot,\nTom Good\n\n ----------- Sisters of Charity Medical Center ----------\n Department of Psychiatry\n ---- \n Thomas Good, System Administrator <[email protected]>\n North Richmond CMHC/Residential Services Phone: 718-354-5528\n 75 Vanderbilt Ave, Quarters 8 Fax: 718-354-5056\n Staten Island, NY 10305\n\n", "msg_date": "Wed, 15 Apr 1998 14:30:50 -0400 (EDT)", "msg_from": "Tom Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My problems..." } ]
[ { "msg_contents": "Hi folks,\n Hope I'm not making a fool of myself by posting to a list I just\njoined ... but I couldn't find much about this in the list archives.\n\nI'm looking at an application that involves several client processes\ncommunicating in real time via a pgsql database. \"Real time\" means\nthat when one client writes something, any other clients that are\ninterested need to know about it within a few seconds at most.\nThe other clients can use LISTEN/NOTIFY to detect updates --- but\nI don't think I can accept the notion of continuously doing empty\nqueries to receive the notifies. That'll drive performance into the\nground. What I want is for a client to be able to sleep until\nsomething interesting happens.\n\nAs near as I can tell from backend/commands/async.c, notify messages\nactually are sent out to the frontends asynchronously, as soon as\npossible (either immediately or at the end of the current transaction).\nThe problem is simply that libpq is designed in such a way that it can't\nread in the notify message except while processing a new query.\n\nI am thinking about revising libpq so that it doesn't force synchronous\nreading, but can be called from an application's main loop whenever the\nbackend connection is ready for reading according to select(). This\nwould seem to be a major win for Tcl and other environments, as well as\nfor my problem: an app waiting for a server response would not have to\nbe dead to the rest of the world.\n\nIs this a correct description of the situation? Has anyone already\nstarted to work on this issue? If not, would someone who knows the\ncode be willing to give me guidance? I'm entirely new to Postgres\nand am likely to make some dumb choices without advice...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Apr 1998 16:25:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Anyone working on asynchronous NOTIFY reception?" 
}, { "msg_contents": "> \n> Hi folks,\n> Hope I'm not making a fool of myself by posting to a list I just\n> joined ... but I couldn't find much about this in the list archives.\n> \n> I'm looking at an application that involves several client processes\n> communicating in real time via a pgsql database. \"Real time\" means\n> that when one client writes something, any other clients that are\n> interested need to know about it within a few seconds at most.\n> The other clients can use LISTEN/NOTIFY to detect updates --- but\n> I don't think I can accept the notion of continuously doing empty\n> queries to receive the notifies. That'll drive performance into the\n> ground. What I want is for a client to be able to sleep until\n> something interesting happens.\n\nThe person who knows the most about this is:\n\n\[email protected] (Massimo Dal Zotto)\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 15 Apr 1998 16:57:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone working on asynchronous NOTIFY reception?" }, { "msg_contents": "\nI don't know how many clients you're running, but sending an empty\nquery every second or three isn't too much of a performance hit (I\nknow it still sucks though)..\n\ncould you cc copies of mail to the list? I'm sure we're very\ninterested..\n\nOn Wed, 15 April 1998, at 16:57:43, Bruce Momjian wrote:\n\n> > I'm looking at an application that involves several client processes\n> > communicating in real time via a pgsql database. 
\"Real time\" means\n> > that when one client writes something, any other clients that are\n> > interested need to know about it within a few seconds at most.\n> > The other clients can use LISTEN/NOTIFY to detect updates --- but\n> > I don't think I can accept the notion of continuously doing empty\n> > queries to receive the notifies. That'll drive performance into the\n> > ground. What I want is for a client to be able to sleep until\n> > something interesting happens.\n> \n> The person who knows the most about this is:\n> \n> \[email protected] (Massimo Dal Zotto)\n> \n> \n> -- \n> Bruce Momjian | 830 Blythe Avenue\n> [email protected] | Drexel Hill, Pennsylvania 19026\n> + If your life is a hard drive, | (610) 353-9879(w)\n> + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 15 Apr 1998 14:28:44 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone working on asynchronous NOTIFY reception?" }, { "msg_contents": "Brett McCormick <[email protected]> writes:\n> I don't know how many clients you're running, but sending an empty\n> query every second or three isn't too much of a performance hit (I\n> know it still sucks though)..\n\nWell, raw performance is only part of it. Some of the clients will\nbe accessing the database server across interstate dial-on-demand\nISDN links. Popping up the link for a minute whenever something\nhappens (which is likely to be only a few times a day) is cool.\nNailing it up 24x7 to pass a steady flow of empty queries is not cool.\n\n> could you cc copies of mail to the list? I'm sure we're very\n> interested..\n\nSure, I'll keep you posted. If anything comes of this I'll be\nsubmitting the mods, of course.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Apr 1998 17:38:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Anyone working on asynchronous NOTIFY reception? " } ]
[ { "msg_contents": "Hi everyone.\n\nI have made a patch to DBD-Pg-0.69 so that queries that contain the\ncasing operator \"::\" work with DBD::Pg v0.69. Without this patch, any\nquery with teh casting operator in it will not run. I could not wait\nfor a fix from the author for this, so I made the change myself. \nAnyways, in case anyone else is strugling with this, the following patch\nmade against DBD-Pg-0.69 seems to do the trick for me.\n\nThought I would post this up in case anyone else would like to try it.\n\nMike\ndiff -uNr DBD-Pg-0.69.orig/dbdimp.c DBD-Pg-0.69/dbdimp.c\n--- DBD-Pg-0.69.orig/dbdimp.c Fri Mar 6 15:58:50 1998\n+++ DBD-Pg-0.69/dbdimp.c Wed Apr 15 20:37:15 1998\n@@ -447,6 +447,19 @@\n *dest++ = *src++;\n continue;\n }\n+\n+ /*\n+ * The above loop will not copy \"::\" for casting. This\n+ * works around that by copying a colon if it followed by or\n+ * is preceded by another colon.\n+ * Michael schout, 4/15/1998 ([email protected])\n+ */\n+ if (*src == ':' && (*(src-1) == ':' || *(src+1) == ':'))\n+ {\n+ *dest++ = *src++;\n+ continue;\n+ }\n+\n start = dest; /* save name inc colon */\n *dest++ = *src++;\n if (*start == '?') { /* X/Open standard */\n", "msg_date": "Wed, 15 Apr 1998 20:46:01 -0500", "msg_from": "Michael J Schout <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for DBD-Pg-0.69.." } ]
[ { "msg_contents": "Attached is a list of bug reports for the HAVING clause.\n\nMy question is, \"Do we disable the HAVING clause for 6.3.2?\" The bugs\nare serious and cause crashes.\n\nI have looked at the issues, and the basic problems are that the\naggregate logic expects to be attached to an actual field in the target\nlist, and the HAVING clause does not properly handle non-aggregate\nretrictions, nor does it prevent them. COUNT(*) uses the oid of the\nfirst FROM table, so that is a problem too.\n\nI have looked at the code, but don't have time to fix it before Friday,\nand holding up the release for that would be silly. I don't think there\nis one thing wrong, but several places that have to be change to get\nthis working solidly.\n\nDo we disable it?\n\n---------------------------------------------------------------------------\n\n", "msg_date": "Wed, 15 Apr 1998 23:55:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "HAVING clause and 6.3.2 release" }, { "msg_contents": "On Wed, 15 Apr 1998, Bruce Momjian wrote:\n\n> Attached is a list of bug reports for the HAVING clause.\n> \n> My question is, \"Do we disable the HAVING clause for 6.3.2?\" The bugs\n> are serious and cause crashes.\n> \n> I have looked at the issues, and the basic problems are that the\n> aggregate logic expects to be attached to an actual field in the target\n> list, and the HAVING clause does not properly handle non-aggregate\n> retrictions, nor does it prevent them. COUNT(*) uses the oid of the\n> first FROM table, so that is a problem too.\n> \n> I have looked at the code, but don't have time to fix it before Friday,\n> and holding up the release for that would be silly. I don't think there\n> is one thing wrong, but several places that have to be change to get\n> this working solidly.\n> \n> Do we disable it?\n\n\tYes...but disabling means that it *will not* be available until\nv6.4...no v6.3.3 :)\n\n\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 16 Apr 1998 01:47:20 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HAVING clause and 6.3.2 release" }, { "msg_contents": "On Thu, 16 Apr 1998, The Hermit Hacker wrote:\n\n> On Wed, 15 Apr 1998, Bruce Momjian wrote:\n> \n> > Attached is a list of bug reports for the HAVING clause.\n> > \n> > My question is, \"Do we disable the HAVING clause for 6.3.2?\" The bugs\n> > are serious and cause crashes.\n> > \n> > I have looked at the issues, and the basic problems are that the\n> > aggregate logic expects to be attached to an actual field in the target\n> > list, and the HAVING clause does not properly handle non-aggregate\n> > retrictions, nor does it prevent them. COUNT(*) uses the oid of the\n> > first FROM table, so that is a problem too.\n> > \n> > I have looked at the code, but don't have time to fix it before Friday,\n> > and holding up the release for that would be silly. I don't think there\n> > is one thing wrong, but several places that have to be change to get\n> > this working solidly.\n> > \n> > Do we disable it?\n> \n> \tYes...but disabling means that it *will not* be available until\n> v6.4...no v6.3.3 :)\n> \n> \nWhat about including it as an optional feature by defining something like\n\n/* #define BUGGY_HAVING_CLAUSE */\n\nMarc Zuckman\[email protected]\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. 
_\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n", "msg_date": "Thu, 16 Apr 1998 09:01:11 -0400 (EDT)", "msg_from": "Marc Howard Zuckman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HAVING clause and 6.3.2 release" }, { "msg_contents": "> > My question is, \"Do we disable the HAVING clause for 6.3.2?\" The\n> > bugs are serious and cause crashes.\n> > Do we disable it?\n> Yes...but disabling means that it *will not* be available until\n> v6.4...no v6.3.3 :)\n\nHmm. What is the downside to leaving it in with caveats or \"stay away\"\nwarnings in the release notes? Since it didn't exist as a feature\nbefore, the only downside I see is somewhat increased traffic on the\nquestions list...\n\n - Tom\n", "msg_date": "Thu, 16 Apr 1998 13:47:40 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HAVING clause and 6.3.2 release" }, { "msg_contents": "On Thu, 16 Apr 1998, Thomas G. Lockhart wrote:\n\n> > > My question is, \"Do we disable the HAVING clause for 6.3.2?\" The\n> > > bugs are serious and cause crashes.\n> > > Do we disable it?\n> > Yes...but disabling means that it *will not* be available until\n> > v6.4...no v6.3.3 :)\n> \n> Hmm. What is the downside to leaving it in with caveats or \"stay away\"\n> warnings in the release notes? Since it didn't exist as a feature\n> before, the only downside I see is somewhat increased traffic on the\n> questions list...\n\n\tI liked the one suggestion about having it as a compile time\noption until its fixed...\n\n\n", "msg_date": "Thu, 16 Apr 1998 09:53:53 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HAVING clause and 6.3.2 release" }, { "msg_contents": "> \n> On Thu, 16 Apr 1998, Thomas G. 
Lockhart wrote:\n> \n> > > > My question is, \"Do we disable the HAVING clause for 6.3.2?\" The\n> > > > bugs are serious and cause crashes.\n> > > > Do we disable it?\n> > > Yes...but disabling means that it *will not* be available until\n> > > v6.4...no v6.3.3 :)\n> > \n> > Hmm. What is the downside to leaving it in with caveats or \"stay away\"\n> > warnings in the release notes? Since it didn't exist as a feature\n> > before, the only downside I see is somewhat increased traffic on the\n> > questions list...\n> \n> \tI liked the one suggestion about having it as a compile time\n> option until its fixed...\n\nHow about an elog(NOTICE,\"...\") so it runs, but they see the NOTICE\nevery time.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 16 Apr 1998 10:54:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] HAVING clause and 6.3.2 release" }, { "msg_contents": "> \n> > > My question is, \"Do we disable the HAVING clause for 6.3.2?\" The\n> > > bugs are serious and cause crashes.\n> > > Do we disable it?\n> > Yes...but disabling means that it *will not* be available until\n> > v6.4...no v6.3.3 :)\n> \n> Hmm. What is the downside to leaving it in with caveats or \"stay away\"\n> warnings in the release notes? Since it didn't exist as a feature\n> before, the only downside I see is somewhat increased traffic on the\n> questions list...\n\nWe could do a elog(NOTICE,...) and have a small patch to fix all the\nissues once we have a final fix.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Thu, 16 Apr 1998 10:56:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] HAVING clause and 6.3.2 release" }, { "msg_contents": "On Thu, 16 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > On Thu, 16 Apr 1998, Thomas G. Lockhart wrote:\n> > \n> > > > > My question is, \"Do we disable the HAVING clause for 6.3.2?\" The\n> > > > > bugs are serious and cause crashes.\n> > > > > Do we disable it?\n> > > > Yes...but disabling means that it *will not* be available until\n> > > > v6.4...no v6.3.3 :)\n> > > \n> > > Hmm. What is the downside to leaving it in with caveats or \"stay away\"\n> > > warnings in the release notes? Since it didn't exist as a feature\n> > > before, the only downside I see is somewhat increased traffic on the\n> > > questions list...\n> > \n> > \tI liked the one suggestion about having it as a compile time\n> > option until its fixed...\n> \n> How about an elog(NOTICE,\"...\") so it runs, but they see the NOTICE\n> every time.\n\n\tThat works too...but how does something like that work from within\na C program? Or Perl?\n\n\n", "msg_date": "Thu, 16 Apr 1998 11:01:42 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HAVING clause and 6.3.2 release" }, { "msg_contents": "On Wed, 15 Apr 1998, Bruce Momjian wrote:\n\n> Attached is a list of bug reports for the HAVING clause.\n> \n> My question is, \"Do we disable the HAVING clause for 6.3.2?\" The bugs\n> are serious and cause crashes.\n> \n> I have looked at the issues, and the basic problems are that the\n> aggregate logic expects to be attached to an actual field in the target\n> list, and the HAVING clause does not properly handle non-aggregate\n> retrictions, nor does it prevent them. 
COUNT(*) uses the oid of the\n> first FROM table, so that is a problem too.\n> \n> I have looked at the code, but don't have time to fix it before Friday,\n> and holding up the release for that would be silly. I don't think there\n> is one thing wrong, but several places that have to be change to get\n> this working solidly.\n> \n> Do we disable it?\n> \nDon't do that. If you disable it, we can't help you to correct bugs ?\n Jose'\n\n", "msg_date": "Thu, 16 Apr 1998 15:43:42 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HAVING clause and 6.3.2 release" }, { "msg_contents": "> > How about an elog(NOTICE,\"...\") so it runs, but they see the NOTICE\n> > every time.\n> \n> \tThat works too...but how does something like that work from within\n> a C program? Or Perl?\n\nI have disabled HAVING completely, and removed it from the features\nlist. I think we have enough bug reports on it that allowing people to\nuse it is really not going to give us any additional bug-fixing\ninformation.\n\nWe can always release a 6.3.2 patch that will enable it when we have it\nworking 100%.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 17 Apr 1998 00:12:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] HAVING clause and 6.3.2 release" } ]
[ { "msg_contents": "Is this patch planned to be included in 6.3.2 release? I do not see in\nApr13 snapshot.\n\n>Included is a patch for ecpg which seems to have some compiling\n>problems on non POSIX systems such as SunOS 4.1.x.\n>--\n>Tatsuo Ishii\n>[email protected]\n>-------------------------------------------------------------------\n>Index: postgresql-6.3.1/src/interfaces/ecpg/preproc/pgc.l\n>diff -c postgresql-6.3.1/src/interfaces/ecpg/preproc/pgc.l:1.1.1.1 postgresql-6.3.1/src/interfaces/ecpg/preproc/pgc.l:1.1.1.1.4.1\n>*** postgresql-6.3.1/src/interfaces/ecpg/preproc/pgc.l:1.1.1.1\tThu Apr 2 15:45:49 1998\n>--- postgresql-6.3.1/src/interfaces/ecpg/preproc/pgc.l\tMon Apr 6 18:15:51 1998\n>***************\n>*** 2,7 ****\n>--- 2,13 ----\n> %{\n> #include <sys/types.h>\n> #include <limits.h>\n>+ \n>+ #ifndef PATH_MAX\n>+ #include <sys/param.h>\n>+ #define PATH_MAX MAXPATHLEN\n>+ #endif\n>+ \n> #if defined(HAVE_STRING_H)\n> #include <string.h>\n> #else\n>\n", "msg_date": "Thu, 16 Apr 1998 13:20:19 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [PATCHES] ecpg patch " }, { "msg_contents": "[email protected] writes:\n> Is this patch planned to be included in 6.3.2 release? I do not see in\n> Apr13 snapshot.\n\nYes, please include it.\n\nMichael\n\nP.S.: Yes, I'm back from vacation. :-)\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 16 Apr 1998 13:19:39 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] ecpg patch" }, { "msg_contents": "> P.S.: Yes, I'm back from vacation. :-)\n\nMichael, would you have time to look at Tom Good's reported problems\nwith v6.3.1 ecpg? 
We have a few days to make changes if necessary...\n\nMarc, I'd like to make sure that we haven't broken too much in the\nMakefiles since v6.3.1. I'll try patching the docs makefile (how did we\nget _two_ versions with breakage? tsk tsk :)\n\nI have a few patches for the docs sources and html output.\n\nAlso, we should verify that distribution builds can still happen by\nusing the POSTGRESDIR env var since we've ripped out DESTDIR.\n\n - Tom\n", "msg_date": "Thu, 16 Apr 1998 13:57:14 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] ecpg patch" }, { "msg_contents": "On Thu, 16 Apr 1998, Thomas G. Lockhart wrote:\n\n> > P.S.: Yes, I'm back from vacation. :-)\n> \n> Michael, would you have time to look at Tom Good's reported problems\n> with v6.3.1 ecpg? We have a few days to make changes if necessary...\n> \n> Marc, I'd like to make sure that we haven't broken too much in the\n> Makefiles since v6.3.1. I'll try patching the docs makefile (how did we\n> get _two_ versions with breakage? tsk tsk :)\n\n\tI'm grabbing a current copy right now and will run through it all\nunder FreeBSD...will do a quick run through under Solaris tomorrow\nmorning.\n\n\tUnless any problem reports between now and 4:30, whatever is in\nthe snapshot tomorrow morning will be what is called v6.3.2 ... any\nproblems reported will offset that by *one* day...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 16 Apr 1998 22:22:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PATCHES] ecpg patch" } ]
[ { "msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\nYour name\t\t:\tJose' Soares\nYour email address\t:\[email protected] \n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Intel Pentium\n\n Operating System (example: Linux 2.0.26 ELF) \t: Linux 2.0.31 Elf\n\n PostgreSQL version (example: PostgreSQL-6.1) : PostgreSQL-snapshot april 6, 1998\n\n Compiler used (example: gcc 2.7.2)\t\t: gcc 2.7.2.1\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nI found a lock error on transactions using drop table.\nTake a look...\n\n--first user:------------------------------------------------------\nBEGIN WORK;\nprova=> SELECT * FROM tmp;\na\n-----\nfirst\nlast\n(2 rows)\n\nprova=> DROP TABLE tmp;\nDROP\nprova=> SELECT * FROM tmp;\nERROR: tmp: Table does not exist.\n\n--second user:---------------------------------------------------\n\nprova=> select * from tmp;\na\n-\n(0 rows)\n\nprova=> insert into tmp values ('new');\nINSERT 178789 1\nprova=> select * from tmp;\na\n-----\nfirst\nlast\nnew\n(3 rows)\n\n--again first user:--------------------------------------------------\n\nprova=> select * from tmp;\nERROR: tmp: Table does not exist.\nprova=> commit;\nEND\nprova=> select * from tmp;\na\n-----\nfirst\nlast\nnew\n(3 rows)\n\n", "msg_date": "Thu, 16 Apr 1998 13:18:43 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "DROP TABLE inside transactions" } ]
[ { "msg_contents": "I had some time during vacation to thing about postgresql. And here's the\nfeature I'd really like to see. This one would put us miles in front of all\nothers: recursive view definitions.\n\nJust one short example:\n\nThere is that well-known parent relation: par (parent, child). The view anc\nnow should list all ancestors:\n\ncreate view anc as\n\tselect parent as ancestor, child as person from par\n\tunion\n\tselect anc.ancestor as ancestor, par.child as person\n\tfrom anc, par\n\twhere anc.person = par.parent;\n\nWhat do you guys think of this idea? How much work would it be to implement\nit?\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 16 Apr 1998 16:29:09 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "recursive views" }, { "msg_contents": "> \n> I had some time during vacation to thing about postgresql. And here's the\n> feature I'd really like to see. This one would put us miles in front of all\n> others: recursive view definitions.\n> \n> Just one short example:\n> \n> There is that well-known parent relation: par (parent, child). The view anc\n> now should list all ancestors:\n> \n> create view anc as\n> \tselect parent as ancestor, child as person from par\n> \tunion\n> \tselect anc.ancestor as ancestor, par.child as person\n> \tfrom anc, par\n> \twhere anc.person = par.parent;\n> \n> What do you guys think of this idea? How much work would it be to implement\n> it?\n\nYou want views of UNION's. That is not hard to do.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Thu, 16 Apr 1998 13:21:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] recursive views" }, { "msg_contents": "> \n> I had some time during vacation to thing about postgresql. And here's the\n> feature I'd really like to see. This one would put us miles in front of all\n> others: recursive view definitions.\n> \n> Just one short example:\n> \n> There is that well-known parent relation: par (parent, child). The view anc\n> now should list all ancestors:\n> \n> create view anc as\n> \tselect parent as ancestor, child as person from par\n> \tunion\n> \tselect anc.ancestor as ancestor, par.child as person\n> \tfrom anc, par\n> \twhere anc.person = par.parent;\n> \n> What do you guys think of this idea? How much work would it be to implement\n> it?\n\nAh, you want views of UNION's, and want to specify the view in the\nquery. Yikes, no idea on how hard that would be.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 16 Apr 1998 13:22:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] recursive views" } ]
[ { "msg_contents": "Thomas Good wrote:\n> \n> When I pare my main() module down to simply declaring a communication \n> area (include sqlca.h) and then connecting to the db (which succeeds) \n> and subsequently running exec sql delete from $table where \n> $attribute=value; I get no stderr. But, alas, I get no record removed \n> either. Meanwhile, the same sql cmd when run from psql rm's the \n> record.\n> Anyone who can get ecpg to delete records will earn my undying \n> gratitude (or a rack of good belgian ale ;-) I am able to do data \n> retrieval no problem via ecpg and this morn am writing a usr interface \n> to do inserts and updates (we'll see how that goes...) But record \n> removal eludes me.\n> Help (Bruce! Tom! - Anybody!!?) (and thanks!)\n> Tom Good\n\nMeskes, Michael wrote:\n> \n> Thomas, could you please re-send me the original bug report. I seem to\n> have lost it under the about 3000 mails wainting for me.\n\nThomas G., can you send Michael your pared-down test case? Thanks...\n\n - Tom\n", "msg_date": "Thu, 16 Apr 1998 14:29:26 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: In the Soup]" }, { "msg_contents": "\nHmm...I don't have the original as I managed to get it going...\nmy exec sql commit; line was misplaced (my error not Michael's).\nI apologize for wasting your time...\n\nI am *very* stuck on updates that use a call to gets(). \nscanf() works fine but...\nwhen I need the usr to give me a string for a record update gets()\nbehaves oddly. The src compiles and links, connects, queries for a\nlist of records and then uses printf() to ask for usr input.\n\nAt this point, it leaps ahead of the current stanza, inputting a null\ninto the attribute in question. \n\nI have tried making the call to gets() simple:\nchar usr_buffer[81];\n...\nprintf(\"Enter blah blah blah: \");\nfflush(stdin);\ngets(usr_buffer);\nand I have tried allocating memory (and making the usr-buffer var\na pointer...) 
but this also fails.\nFinally, I tried making the call to gets() part of a separate\nusr defined function. This also compiled without error but\ndisplays the same symptoms. \n\nI raised this once (or twice ;-) but felt a bit sheepish over\nmy delete blunder so I didn't really want to push the issue.\nHaving said that, I do wonder if it really is stupidity on my\npart or something Michael could have a look at...?\n\nI will send him my code, off-list...\n\nThanks alot for your patience Tom, you're quite a guy.\n\n\n\n ----------- Sisters of Charity Medical Center ----------\n Department of Psychiatry\n ---- \n Thomas Good, System Administrator <[email protected]>\n North Richmond CMHC/Residential Services Phone: 718-354-5528\n 75 Vanderbilt Ave, Quarters 8 Fax: 718-354-5056\n Staten Island, NY 10305\n\n", "msg_date": "Thu, 16 Apr 1998 13:47:42 -0400 (EDT)", "msg_from": "Tom Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: In the Soup]" }, { "msg_contents": "Tom Good writes:\n> I am *very* stuck on updates that use a call to gets(). \n> scanf() works fine but...\n> when I need the usr to give me a string for a record update gets()\n> behaves oddly. The src compiles and links, connects, queries for a\n> list of records and then uses printf() to ask for usr input.\n\nWe've solved this one in private mail. It was no bug in ecpg. So there's no\nneed for a patch in 6.3.2.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Fri, 17 Apr 1998 11:37:53 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: In the Soup]" } ]
[ { "msg_contents": "Hi, all.\n\nI've just compiled the latest 6.3.2 snapshot on Digital Unix 3.2g using\nDEC's C compiler. The regression tests look good, so go ahead with the\nrelease (at least for this platform).\n\nHowever, there are two minor \"errors\". One of them is already known:\nconfigure fails when --with-compiler is specified. Included is a patch to\nthe configure script. I know that this is not the right file to patch,\nsince it is generated from configure.in, but we can use it as a workaround\nsince the error lies in autoconf. The patch should be re-applied every\ntime autoconf is run.\n\nThe second \"error\" is a new one. The configure script no longer asks for\nadditional include and lib directories, so, for example, it doesn't find\nmy readline libs, installed under /usr/local/include and /usr/local/lib.\nSince these are two common directories to have additional software\ninstalled into, I have added them to src/templates/alpha. Attached is a\npatch to make this change. It won't hurt even if the directories don't\nexist.\n\n-------------------------------------------------------------------\nPedro José Lobo Perea Tel: +34 1 336 78 19\nCentro de Cálculo Fax: +34 1 331 92 29\nEUIT Telecomunicación - UPM e-mail: [email protected]", "msg_date": "Thu, 16 Apr 1998 16:58:49 +0200 (MET DST)", "msg_from": "\"Pedro J. Lobo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Status of 6.3.2 snapshot on alpha/Digital Unix" }, { "msg_contents": "> The second \"error\" is a new one. The configure script no longer asks for\n> additional include and lib directories, so, for example, it doesn't find\n> my readline libs, installed under /usr/local/include and /usr/local/lib.\n> Since these are two common directories to have additional software\n> installed into, I have added them to src/templates/alpha. Attached is a\n> patch to make this change. 
It won't hurt even if the directories don't\n> exist.\n> \n\nWe have new -with-includes -with-libraries options.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 16 Apr 1998 11:18:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Status of 6.3.2 snapshot on alpha/Digital Unix" }, { "msg_contents": "On Thu, 16 Apr 1998, Pedro J. Lobo wrote:\n\n> Hi, all.\n> \n> I've just compiled the latest 6.3.2 snapshot on Digital Unix 3.2g using\n> DEC's C compiler. The regression tests look good, so go ahead with the\n> release (at least for this platform).\n> \n> However, there are two minor \"errors\". One of them is already known:\n> configure fails when --with-compiler is specified. Included is a patch to\n> the configure script. I know that this is not the right file to patch,\n> since it is generated from configure.in, but we can use it as a workaround\n> since the error lies in autoconf. The patch should be re-applied every\n> time autoconf is run.\n\n\tAny idea if a patch is possible for autoconf itself?\n\n> The second \"error\" is a new one. The configure script no longer asks for\n> additional include and lib directories, so, for example, it doesn't find\n> my readline libs, installed under /usr/local/include and /usr/local/lib.\n> Since these are two common directories to have additional software\n> installed into, I have added them to src/templates/alpha. Attached is a\n> patch to make this change. 
It won't hurt even if the directories don't\n> exist.\n\n\t./configure --help should list both a '--with-includes=' and\n'--with-libraries=' option :)\n\n\n", "msg_date": "Thu, 16 Apr 1998 12:05:59 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Status of 6.3.2 snapshot on alpha/Digital Unix" }, { "msg_contents": "On Thu, 16 Apr 1998, Pedro J. Lobo wrote:\n\n> However, there are two minor \"errors\". One of them is already known:\n> configure fails when --with-compiler is specified. Included is a patch to\n> the configure script. I know that this is not the right file to patch,\n> since it is generated from configure.in, but we can use it as a workaround\n> since the error lies in autoconf. The patch should be re-applied every\n> time autoconf is run.\n\n\tI was just thinking about it, and wasn't the solution under\nSolaris to actually define CC=cc in that OSs template file..??\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 16 Apr 1998 22:25:14 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Status of 6.3.2 snapshot on alpha/Digital Unix" } ]
[ { "msg_contents": "Mattias Kregert <[email protected]> writes:\n> Async communication between backend and frontend would be really nice.\n> I use Tcl a lot and I really miss this. It would be wonderful to have\n> libpgtcl do callbacks, so that info on-screen could be automagically\n> updated whenever something changes.\n\nYes, if anything were to be done along this line it'd also make sense\nto revise libpgtcl. I think it ought to work more like this:\n (a) the idle loop is invoked while waiting for a query response\n (so that a pg_exec statement behaves sort of like \"tkwait\");\n (b) a \"listen\" command is sent via a new pg_listen statement that\n specifies a callback command string. Subsequent notify responses\n can occur whenever a callback is possible.\nI suppose (a) had better be an option to pg_exec statements so that\nwe don't break existing Tcl code...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 1998 11:35:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Anyone working on asynchronous NOTIFY reception? " }, { "msg_contents": "> \n> Mattias Kregert <[email protected]> writes:\n> > Async communication between backend and frontend would be really nice.\n> > I use Tcl a lot and I really miss this. It would be wonderful to have\n> > libpgtcl do callbacks, so that info on-screen could be automagically\n> > updated whenever something changes.\n> \n> Yes, if anything were to be done along this line it'd also make sense\n> to revise libpgtcl. I think it ought to work more like this:\n> (a) the idle loop is invoked while waiting for a query response\n> (so that a pg_exec statement behaves sort of like \"tkwait\");\n> (b) a \"listen\" command is sent via a new pg_listen statement that\n> specifies a callback command string. 
Subsequent notify responses\n> can occur whenever a callback is possible.\n> I suppose (a) had better be an option to pg_exec statements so that\n> we don't break existing Tcl code...\n> \n> \t\t\tregards, tom lane\n\nThere is already some support for async notify in libpgtcl, it was and old\npatch of mine. It does exactly what you are thinking of, except for the\nidle loop stuff. You can setup callbacks for specific relations and then\nperiodically issue a command which checks for pending notifications and does\nthe callbacks if any is found. Note also that you can listen on any name,\nnot just for existing relations.\n\nI have a Tcl/Tk application (used concurrently by more than 30 users)\nwhich uses heavily async notifications to notify clients when some events\noccur. It uses a timer inside the application which polls the server every\nn seconds (actually 1 second) for pending notifies.\nIt works well but it is a really big bottleneck for the application and I\nhad to spend a lot of time to debug and patch the code in async.c.\nIt seems that nobody beside me has ever used this feature because I didn't\nsee any bug report on the mailing lists (and there were many).\n\nThe biggest problem is that if you have many clients listening on the same\nthing they are signaled at the same time and all of them try to access the\npg_listener table for write. The result is that you have a lot of waits on\nthe table and sometimes also deadlocks if you don't do things carefully.\n\n>From the Tcl side, a better solution would be to define a tcl event handler,\nlike the standard Tcl filehandler, which would be invoked automatically by\nthe Tk event loop or by tkwait if using pure Tcl.\n\nI have also some new patches which try to reduce the notify overhead by\navoiding unnecessary unlocks of the table. 
If you are interested I can\npost them.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto e-mail: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n\n", "msg_date": "Fri, 17 Apr 1998 12:23:27 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone working on asynchronous NOTIFY reception?" } ]
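The poll-and-callback design Massimo describes (register a callback per listen name, then periodically check for pending notifications from a timer or the Tk event loop) can be sketched in a few lines. This is an illustrative model only, not libpgtcl's actual API; the class and method names below are invented for the example, and `deliver` stands in for reading `pg_listener`.

```python
# Minimal model of a poll-based NOTIFY dispatcher: callbacks are
# registered per listen name and fired when a periodic poll finds
# pending notifications.  All names here are invented for illustration.
class NotifyDispatcher:
    def __init__(self):
        self.callbacks = {}   # listen name -> list of callables
        self.pending = []     # notifications "received" from the server

    def pg_listen(self, name, callback):
        """Register a callback, like the proposed pg_listen statement."""
        self.callbacks.setdefault(name, []).append(callback)

    def deliver(self, name):
        """Stand-in for a NOTIFY arriving from the backend."""
        self.pending.append(name)

    def poll(self):
        """Called from a timer/idle loop: run callbacks for pending notifies."""
        fired = []
        for name in self.pending:
            for cb in self.callbacks.get(name, []):
                cb(name)
                fired.append(name)
        self.pending.clear()
        return fired
```

In the real application the poll would hang off a Tk timer (or `tkwait` idle loop), which is exactly the once-per-second bottleneck the thread complains about; a file-handler-style event hook would remove the polling.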
[ { "msg_contents": "I am currently designing a database that I expect to put into use after\n6.4 is released. This makes use of inheritance, and I would like to\nask about how inheritance will relate to the handling of constraints.\n\n1. I would like to be able to say:\n\n create table job\n (\n ...\n resid char(4) not null references resource*(id),\n ...\n )\n\n to indicate that the foreign key constraint would be satisfied by the\n presence of the desired item in any class of the inheritance tree starting\n at resource. The parser does not recognise this syntax at present.\n (This is parallel to `select ... from class*', by which we can currently\n list all items in an inheritance tree.)\n\n2. Will all constraints on a class be inherited along with the column\n definitions?\n\n If constraints are inherited, there is the possibility of conflict or\n redefinition.\n\n In single inheritance, could a constraint be redefined by being restated \n in the descendent?\n\n In multiple inheritance, a conflict of column types causes an error; how\n will a conflict of constraint names be handled, if the check condition\n is different? 
(Options: error; drop the constraint; require a new\n definition of the constraint in the descendent class.)\n\n At the moment, check constraints are inherited and are silently mangled\n by prefixing the class name; this can lead to an impossible combination\n of constraints, which could be solved if redefinition were possible.\n\n Example:\n\njunk=> create table aa (id char(4) check (id > 'M'), name text);\nCREATE\njunk=> create table bb (id char(4) check (id < 'M'), qty int);\nCREATE\njunk=> create table ab (value money) inherits (aa, bb);\nCREATE\njunk=> insert into ab values ('ABCD', 5);\nERROR: ExecAppend: rejected due to CHECK constraint aa_id\njunk=> insert into ab values ('WXYZ', 5);\nERROR: ExecAppend: rejected due to CHECK constraint bb_id\n\n We could perhaps allow syntax such as:\n\n create table ab (..., constraint id check (id > 'E' and id < 'Q'))\n inherits (aa, bb)\n undefine (constraint aa_id, constraint bb_id)\n\n Is this feasible?\n\n At present, primary key definitions are not inherited. Could they be?\n (either to share the same index or have a new one for each class, at\n the designer's option.)\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n\nPGP key from public servers; key ID 32B8FAA1\n\n ========================================\n Come to me, all you who labour and are heavily laden, and I will\n give you rest. Take my yoke upon you, and learn from me; for I am\n meek and lowly in heart, and you shall find rest for your souls.\n For my yoke is easy and my burden is light. (Matthew 11: 28-30)\n\n\n", "msg_date": "Thu, 16 Apr 1998 17:55:06 +0200", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Constraints and inheritance" } ]
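The impossible combination Oliver demonstrates (inheriting both `aa_id: id > 'M'` and `bb_id: id < 'M'`) can be modeled mechanically. The sketch below just treats CHECK constraints as named predicates that a child accumulates from its parents; it is not how the backend stores constraints, and the function names are invented for illustration.

```python
# Model inherited CHECK constraints as named predicates; a child table
# inherits every parent's constraints, so conflicting checks can make
# every insert fail (as in the aa/bb/ab example above).
def make_table(name, constraints, parents=()):
    inherited = {}
    for p in parents:
        inherited.update(p["constraints"])
    inherited.update(constraints)          # child's own checks last
    return {"name": name, "constraints": inherited}

def check_insert(table, row):
    """Return the names of violated constraints (empty list = accepted)."""
    return [n for n, pred in table["constraints"].items() if not pred(row)]

aa = make_table("aa", {"aa_id": lambda r: r["id"] > "M"})
bb = make_table("bb", {"bb_id": lambda r: r["id"] < "M"})
ab = make_table("ab", {}, parents=(aa, bb))   # inherits both checks
```

Under this model no row can ever satisfy `ab`, which is why a redefinition/undefine mechanism like the proposed `constraint id check (id > 'E' and id < 'Q')` would be needed.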
[ { "msg_contents": "Just thought I'd try the cluster command. What am I doing wrong.\nReadHat 5.0\n6.3.1 rpm's\n\n[djackson@www]$ psql template1\nWelcome to the POSTGRESQL interactive sql monitor:\n Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n\n type \\? for help on slash commands\n type \\q to quit\n type \\g or terminate with semicolon to execute query\n You are currently connected to the database: template1\n\ntemplate1=> \\d\nCouldn't find any tables, sequences or indices!\ntemplate1=> \\l\ndatname |datdba|datpath \n---------+------+---------\ntemplate1| 100|template1\npostgres | 100|postgres \n(2 rows)\n\ntemplate1=> create database test;\nCREATEDB\ntemplate1=> \\connect test \nconnecting to new database: test\ntest=> create table list (k int2);\nCREATE\ntest=> insert into list values (1);\nINSERT 33769 1\ntest=> insert into list select max(k)+1;\n.\n.\n.\ntest=> select * from list;\nk\n-\n1\n2\n3\n4\n5\n6\n(6 rows)\n\ntest=> create table list2 (k1 int2 NOT NULL, k2 int2 NOT NULL);\nCREATE\ntest=> create UNIQUE INDEX l1 ON list2(k1, k2);\nCREATE\ntest=> create UNIQUE INDEX l2 ON list2(k2, k1); \nCREATE\ntest=> insert into list2 select l1.k, l2.k from list as l1, list as l2;\nINSERT 0 36\ntest=> select * from list2;\nk1|k2\n--+--\n 1| 1\n 2| 1\n 3| 1\n.\n.\n.\n 4| 6\n 5| 6\n 6| 6\n(36 rows)\n\ntest=> vacuum verbose analyze list2;\nNOTICE: Rel list2: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup\n36: Vac 0, Crash 0, UnUsed 0, MinLen 44, MaxLen 44; Re-using:\nFree/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nNOTICE: Ind l2: Pages 2; Tuples 36. Elapsed 0/0 sec.\nNOTICE: Ind l1: Pages 2; Tuples 36. Elapsed 0/0 sec.\nVACUUM\ntest=> cluster l1 on list2;\nERROR: Cannot create unique index. 
Table contains non-unique values\ntest=> cluster l2 on list2; \nPQexec() -- Request was sent to backend, but backend closed the channel\nbefore responding.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\n", "msg_date": "Thu, 16 Apr 1998 13:56:27 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bug or Short between my brain and the keyboard?" }, { "msg_contents": "Yep, its a bug. Not sure about the cause, but will look into it in the\nnext few weeks.\n\n> \n> Just thought I'd try the cluster command. What am I doing wrong.\n> ReadHat 5.0\n> 6.3.1 rpm's\n> \n> [djackson@www]$ psql template1\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: template1\n> \n> template1=> \\d\n> Couldn't find any tables, sequences or indices!\n> template1=> \\l\n> datname |datdba|datpath \n> ---------+------+---------\n> template1| 100|template1\n> postgres | 100|postgres \n> (2 rows)\n> \n> template1=> create database test;\n> CREATEDB\n> template1=> \\connect test \n> connecting to new database: test\n> test=> create table list (k int2);\n> CREATE\n> test=> insert into list values (1);\n> INSERT 33769 1\n> test=> insert into list select max(k)+1;\n> .\n> .\n> .\n> test=> select * from list;\n> k\n> -\n> 1\n> 2\n> 3\n> 4\n> 5\n> 6\n> (6 rows)\n> \n> test=> create table list2 (k1 int2 NOT NULL, k2 int2 NOT NULL);\n> CREATE\n> test=> create UNIQUE INDEX l1 ON list2(k1, k2);\n> CREATE\n> test=> create UNIQUE INDEX l2 ON list2(k2, k1); \n> CREATE\n> test=> insert into list2 select l1.k, l2.k from list as l1, list as l2;\n> INSERT 0 36\n> test=> select * from list2;\n> k1|k2\n> --+--\n> 1| 1\n> 2| 1\n> 3| 1\n> .\n> .\n> .\n> 4| 6\n> 5| 6\n> 6| 6\n> (36 
rows)\n> \n> test=> vacuum verbose analyze list2;\n> NOTICE: Rel list2: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup\n> 36: Vac 0, Crash 0, UnUsed 0, MinLen 44, MaxLen 44; Re-using:\n> Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> NOTICE: Ind l2: Pages 2; Tuples 36. Elapsed 0/0 sec.\n> NOTICE: Ind l1: Pages 2; Tuples 36. Elapsed 0/0 sec.\n> VACUUM\n> test=> cluster l1 on list2;\n> ERROR: Cannot create unique index. Table contains non-unique values\n> test=> cluster l2 on list2; \n> PQexec() -- Request was sent to backend, but backend closed the channel\n> before responding.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 23:43:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug or Short between my brain and the keyboard?" }, { "msg_contents": "Can someone comment on this error message? It certainly looks like a\nbug, but I can't figure out why he is getting these problems.\n\n---------------------------------------------------------------------------\n\n\n> \n> Just thought I'd try the cluster command. What am I doing wrong.\n> ReadHat 5.0\n> 6.3.1 rpm's\n> \n> [djackson@www]$ psql template1\n> Welcome to the POSTGRESQL interactive sql monitor:\n> Please read the file COPYRIGHT for copyright terms of POSTGRESQL\n> \n> type \\? 
for help on slash commands\n> type \\q to quit\n> type \\g or terminate with semicolon to execute query\n> You are currently connected to the database: template1\n> \n> template1=> \\d\n> Couldn't find any tables, sequences or indices!\n> template1=> \\l\n> datname |datdba|datpath \n> ---------+------+---------\n> template1| 100|template1\n> postgres | 100|postgres \n> (2 rows)\n> \n> template1=> create database test;\n> CREATEDB\n> template1=> \\connect test \n> connecting to new database: test\n> test=> create table list (k int2);\n> CREATE\n> test=> insert into list values (1);\n> INSERT 33769 1\n> test=> insert into list select max(k)+1;\n> .\n> .\n> .\n> test=> select * from list;\n> k\n> -\n> 1\n> 2\n> 3\n> 4\n> 5\n> 6\n> (6 rows)\n> \n> test=> create table list2 (k1 int2 NOT NULL, k2 int2 NOT NULL);\n> CREATE\n> test=> create UNIQUE INDEX l1 ON list2(k1, k2);\n> CREATE\n> test=> create UNIQUE INDEX l2 ON list2(k2, k1); \n> CREATE\n> test=> insert into list2 select l1.k, l2.k from list as l1, list as l2;\n> INSERT 0 36\n> test=> select * from list2;\n> k1|k2\n> --+--\n> 1| 1\n> 2| 1\n> 3| 1\n> .\n> .\n> .\n> 4| 6\n> 5| 6\n> 6| 6\n> (36 rows)\n> \n> test=> vacuum verbose analyze list2;\n> NOTICE: Rel list2: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup\n> 36: Vac 0, Crash 0, UnUsed 0, MinLen 44, MaxLen 44; Re-using:\n> Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> NOTICE: Ind l2: Pages 2; Tuples 36. Elapsed 0/0 sec.\n> NOTICE: Ind l1: Pages 2; Tuples 36. Elapsed 0/0 sec.\n> VACUUM\n> test=> cluster l1 on list2;\n> ERROR: Cannot create unique index. 
Table contains non-unique values\n> test=> cluster l2 on list2; \n> PQexec() -- Request was sent to backend, but backend closed the channel\n> before responding.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 15 Jun 1998 22:37:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug or Short between my brain and the keyboard?" } ]
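Independent of the crash being reported, what CLUSTER is supposed to do is simple: rewrite the heap in the order of one index so that index scans read it sequentially. A conceptual model (this is not the backend's implementation, just a sketch of the intended effect on the `list2` table above):

```python
# Conceptual model of CLUSTER: rewrite a table's rows in the order of
# one of its indexes, so later scans read the heap sequentially.
def cluster(rows, index_key):
    """Return a new 'heap' whose rows are sorted by the index key."""
    return sorted(rows, key=index_key)

# the list2 table from the session above: all (k1, k2) pairs of 1..6
list2 = [(k1, k2) for k1 in range(1, 7) for k2 in range(1, 7)]

by_l1 = cluster(list2, index_key=lambda r: (r[0], r[1]))  # index l1(k1, k2)
by_l2 = cluster(list2, index_key=lambda r: (r[1], r[0]))  # index l2(k2, k1)
```

Either ordering is a legal physical layout of the same 36 unique rows, which is why the "Table contains non-unique values" error looks like a bug rather than a data problem.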
[ { "msg_contents": "Here is a pair of patches that (I hope) finish the configuration\nissues with tcl/tk and make the recognition of the two packages\ncompletely parallel in organization. This should make future changes\neasier to maintain.\n\nHope to see this in 6.2.2.\n\nCheers,\nBrook\n\n===========================================================================\n--- ../INSTALL.orig\tThu Apr 9 17:00:52 1998\n+++ ../INSTALL\tThu Apr 16 13:43:20 1998\n@@ -288,6 +288,7 @@\n for header files. (Typical use will need\n --with-includes=/usr/local/include)\n \n+ --with-libs=DIRS\n --with-libraries=DIRS\n Include DIRS in list of directories searched\n for archive libraries. (Typical use will need\n===========================================================================\n--- configure.in.orig\tFri Apr 10 01:00:37 1998\n+++ configure.in\tThu Apr 16 13:42:32 1998\n@@ -185,7 +185,7 @@\n ])\n \n if test \"$LIBRARY_DIRS\"; then\n-\tfor dir in $withval; do\n+\tfor dir in $LIBRARY_DIRS; do\n \t\tif test -d \"$dir\"; then\n \t\t\tPGSQL_LDFLAGS=\"$PGSQL_LDFLAGS -L$dir\"\n \t\telse\n@@ -661,7 +661,7 @@\n \tice_save_CPPFLAGS=\"$CPPFLAGS\"\n \tice_save_LDFLAGS=\"$LDFLAGS\"\n \n-\tLIBS=\"$LIBS $X_EXTRA_LIBS\"\n+\tLIBS=\"$TCL_LIB $X_PRE_LIBS $X11_LIBS $X_EXTRA_LIBS $LIBS\"\n \tCFLAGS=\"$CFLAGS $X_CFLAGS\"\n \tCPPFLAGS=\"$CPPFLAGS $X_CFLAGS\"\n \tLDFLAGS=\"$LDFLAGS $X_LIBS\"\n@@ -670,7 +670,7 @@\n \ttk_libs=\"tk8.0 tk80 tk4.2 tk42 tk\"\n \tfor tk_lib in $tk_libs; do\n \t\tif test -z \"$TK_LIB\"; then\n-\t\t\tAC_CHECK_LIB($tk_lib, main, TK_LIB=$tk_lib,, $TCL_LIB $X_PRE_LIBS $X_LIBS $X11_LIBS)\n+\t\t\tAC_CHECK_LIB($tk_lib, main, TK_LIB=$tk_lib)\n \t\tfi\n \tdone\n \tif test -z \"$TK_LIB\"; then\n===========================================================================\n", "msg_date": "Thu, 16 Apr 1998 14:11:34 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "configuration patch (the last?)" }, { "msg_contents": "On Thu, 16 Apr 
1998, Brook Milligan wrote:\n\n> Here is a pair of patches that (I hope) finish the configuration\n> issues with tcl/tk and make the recognition of the two packages\n> completely parallel in organization. This should make future changes\n> easier to maintain.\n> \n> Hope to see this in 6.2.2.\n\n\tSorry, but it didn't make 6.2.2 :( Must have gotten stuck in some\nqueue somewhere *grin* But its in v6.3.2 :)\n\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 16 Apr 1998 22:29:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] configuration patch (the last?)" } ]
[ { "msg_contents": "\n\nPaul Lisewski wrote:\n\n> Byron,\n> Thanks very much for your prompt response. After renaming a few\n> columns (password and position), I've got my exisitng application\n> working correctly, with the following minor problems:\n>\n> {fn UCase(...) ODBC function is not supported. Are you planning\n> to support ODBC functions??\n>\n\nMan, there are a ton of scalar functions that would need to be supported:\n\nString Functions: ascii, char, concat, difference, insert, lcase, left,\nlength, locate, ltrim, repeat, replace, right, rtrim, soundex, space,\nsubstring, ucase\n\nNumeric Functions: abs, acos, asin, atan, atan2, ceiling, cos, cot, degrees,\nexp, floor, log, log10, mod, pi, power, radians, rand, round, sign, sin,\nsqrt, tan, truncate\n\nTime/Date: curdate, curtime, dayname, dayofmonth, dayofweek, dayofyear,\nhour, minute, month, monthname, now, quarter, second, timestampadd,\ntimestampdiff, week, year\n\nSystem Functions: database, ifnull, user\n\nAnd of course, the granddaddy of all functions, CONVERT().\n\nNow, some of the string functions like '{fn ucase()}' could fairly easily be\nmapped to the Postgres \"Upper\" function. But what about all the others?\nShould they be implemented in the driver or in the backend? Or do we just\ndo the easy ones?\n\n> When getting a list of DataTypes via SQLGetInfo, there are 4 SQL_VARCHAR\n> types.\n> They are:\n> bpchar, varchar, text and name. I have a utility to interogate a database\n> and create tables using the syntax from SQLGetInfo. I pick the first match\n> from the list (in this case bpchar). Could the Types be modified to\n> SQL_LONGVARCHAR for the non varchar datatypes or at least resequenced so\n> that varchar comes before bpchar\n\nActually, looking at the latest driver source code, 'text' is mapped to an\nSQL_LONGVARCHAR. Now that SQLPutData and SQLParamData are implemented,\nLongVarChar can be properly handled and I think it makes sense to map it\nthis way. 
At least on MSACCESS, it uses SQLPutData to handle\nSQL_LONGVARCHAR's.\n\nYou should be seeing only 3 (bpchar, varchar, and name). I would argue that\n'name' should probably be mapped to SQL_CHAR, since I think it is fixed at\n32 anyway. So that leaves 2 types, bpchar and varchar. Postgres handles\nboth as variable, and the driver looks up the length dynamically for the\nSQLColumns call.\n\nNow I dont believe there is an ODBC way of discriminating types based on\n\"blank padded\" strings.\n\nAny ideas anyone?\n\n\n> I also noticed that the infomation returned by SQLGetInfo for DataTypes\n> didn't have the correct prefix and suffix for date, datetime. They\n> report ' for both pre ans suffix instead of {d } and {ts and }.\n>\n\nI dont know about that one. I thought the prefix was supposed to be for the\nnative SQL, which would be a quote character (').\n\nRegards,\n\nByron\n\nP.S., please send these notes to the \"[email protected]\" so\neveryone can see them, including me. Are you subscribed to this list?\n\n", "msg_date": "Thu, 16 Apr 1998 17:50:46 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ODBC driver" }, { "msg_contents": "> > {fn UCase(...) ODBC function is not supported.
Are you planning\n> > to support ODBC functions??\n> There are a ton of scalar functions that would need to be supported:\n> Strings: ascii, char, concat, difference, insert, lcase, left,\n> length, locate, ltrim, repeat, replace, right, rtrim, soundex, space,\n> substring, ucase\n> \n> Numerics: abs, acos, asin, atan, atan2, ceiling, cos, cot, degrees,\n> exp, floor, log, log10, mod, pi, power, radians, rand, round, sign, \n> sin, sqrt, tan, truncate\n> \n> Dates: curdate, curtime, dayname, dayofmonth, dayofweek, dayofyear,\n> hour, minute, month, monthname, now, quarter, second, timestampadd,\n> timestampdiff, week, year\n> \n> System Functions: database, ifnull, user\n> \n> And of course, the granddaddy of all functions, CONVERT().\n> \n> Now, some of the string functions like '{fn ucase()}' could fairly \n> easily be mapped to the Postgres \"Upper\" function. But what about all \n> the others?\n> Should they be implemented in the driver or in the backend? Or do we \n> just do the easy ones?\n\nLet's do both. Some already map to existing functions, which mean you\nget to do this in your driver I suppose. Others, like many of the math\nroutines, should/could be in the backend.\n\nHow do you want to organize attacking these? I can help now with\nsuggesting mappings for existing functions (e.g. date_part('dow',\ndatetime) gives you \"dayofweek\") and can help with new functions in a\nmonth or two. Or, perhaps others can help with that more quickly...\n\n - Tom\n", "msg_date": "Fri, 17 Apr 1998 02:45:03 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: ODBC driver" }, { "msg_contents": "Suggested mappings would be a big help.\n\nThanks,\n\nByron\n\n\nThomas G. Lockhart wrote:\n\n> > > {fn UCase(...) ODBC function is not supported. 
Are you planning\n> > > to support ODBC functions??\n> > There are a ton of scalar functions that would need to be supported:\n> > Strings: ascii, char, concat, difference, insert, lcase, left,\n> > length, locate, ltrim, repeat, replace, right, rtrim, soundex, space,\n> > substring, ucase\n> >\n> > Numerics: abs, acos, asin, atan, atan2, ceiling, cos, cot, degrees,\n> > exp, floor, log, log10, mod, pi, power, radians, rand, round, sign,\n> > sin, sqrt, tan, truncate\n> >\n> > Dates: curdate, curtime, dayname, dayofmonth, dayofweek, dayofyear,\n> > hour, minute, month, monthname, now, quarter, second, timestampadd,\n> > timestampdiff, week, year\n> >\n> > System Functions: database, ifnull, user\n> >\n> > And of course, the granddaddy of all functions, CONVERT().\n> >\n> > Now, some of the string functions like '{fn ucase()}' could fairly\n> > easily be mapped to the Postgres \"Upper\" function. But what about all\n> > the others?\n> > Should they be implemented in the driver or in the backend? Or do we\n> > just do the easy ones?\n>\n> Let's do both. Some already map to existing functions, which mean you\n> get to do this in your driver I suppose. Others, like many of the math\n> routines, should/could be in the backend.\n>\n> How do you want to organize attacking these? I can help now with\n> suggesting mappings for existing functions (e.g. date_part('dow',\n> datetime) gives you \"dayofweek\") and can help with new functions in a\n> month or two. Or, perhaps others can help with that more quickly...\n>\n> - Tom\n\n\n\n", "msg_date": "Fri, 17 Apr 1998 10:40:34 -0400", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Re: ODBC driver" } ]
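For the "easy ones" the driver could translate the ODBC escape in the query text before sending it, e.g. `{fn ucase(name)}` onto the Postgres `upper` function. A toy version of that mapping is below; it covers only a handful of functions with a regex, and a real driver would need a proper parser (nesting, string literals) plus the whole CONVERT machinery. The mapping table is an assumption for illustration, not the driver's actual code.

```python
import re

# Map a few ODBC scalar-function escapes onto Postgres equivalents,
# e.g. {fn ucase(name)} -> upper(name).  Toy translation only: it does
# not handle nested escapes or string literals containing braces.
ODBC_TO_PG = {
    "ucase": "upper",
    "lcase": "lower",
    "ltrim": "ltrim",
    "rtrim": "rtrim",
    "abs": "abs",
}

_ESCAPE = re.compile(r"\{\s*fn\s+(\w+)\s*\(([^{}]*)\)\s*\}", re.IGNORECASE)

def translate(sql):
    """Rewrite {fn ...} escapes; raise for functions with no mapping."""
    def repl(m):
        fn, args = m.group(1).lower(), m.group(2)
        if fn not in ODBC_TO_PG:
            raise ValueError(f"ODBC function not supported: {fn}")
        return f"{ODBC_TO_PG[fn]}({args})"
    return _ESCAPE.sub(repl, sql)
```

Functions with no Postgres counterpart (soundex, timestampdiff, ...) would either raise, as here, or be candidates for new backend functions as Thomas suggests.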
[ { "msg_contents": "I don't remember if it has been mentioned.\n\nAnyhow, I think that it would be nice to change fsync'es into\nfdatasync'es (of course as an autoconf-igurable option). I don't think\nit's necessary to update all file's metadata each time a file is\nflushed.\n\nI dunno where it's implemented. But it's for sure implemented in Linux.\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Fri, 17 Apr 1998 02:22:58 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": true, "msg_subject": "fsync -> fdatasync in backend/storage/file/fd.c" }, { "msg_contents": "On Fri, 17 Apr 1998, Michal Mosiewicz wrote:\n\n> I don't remember if it has been mentioned.\n> \n> Anyhow, I think that it would be nice to change fsync'es into\n> fdatasync'es (of course as an autoconf-igurable option). I don't think\n> it's necessary to update all file's metadata each time a file is\n> flushed.\n> \n> I dunno where it's implemented. But it's for sure implemented in Linux.\n\n\tWe don't have it (FreeBSD)...what does it do? *raised eyebrow*\nAnd, how many ppl actually have fsync's enabled?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 16 Apr 1998 22:27:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync -> fdatasync in backend/storage/file/fd.c" }, { "msg_contents": "> \n> I don't remember if it has been mentioned.\n> \n> Anyhow, I think that it would be nice to change fsync'es into\n> fdatasync'es (of course as an autoconf-igurable option). I don't think\n> it's necessary to update all file's metadata each time a file is\n> flushed.\n> \n> I dunno where it's implemented. 
But it's for sure implemented in Linux.\n> \n\nWe have a way of keeping proper consistency with a system sync() every\n30 seconds. See the archive. I think it is on the short list.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 16 Apr 1998 21:38:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync -> fdatasync in backend/storage/file/fd.c" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> We don't have it (FreeBSD)...what does it do? *raised eyebrow*\n> And, how many ppl actually have fsync's enabled?\n\nWhen you fsync a file it usually costs at least two write operations,\none to write the data and one to update modification/access date, etc.\nfdatasync synces only data area of the file without it's\nmetainformation.\n\nIt's not a Linux-only feature. In fact, I'm really not sure if it's\nalready implemented in Linux (the man page I've got states that in\n2.0.23 it was a mere fsync alias) It is a part of POSIX1b standard.\n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Fri, 17 Apr 1998 03:53:03 +0200", "msg_from": "Michal Mosiewicz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] fsync -> fdatasync in backend/storage/file/fd.c" }, { "msg_contents": "On Fri, 17 Apr 1998, Michal Mosiewicz wrote:\n\n> The Hermit Hacker wrote:\n> \n> > We don't have it (FreeBSD)...what does it do? 
*raised eyebrow*\n> > And, how many ppl actually have fsync's enabled?\n> \n> When you fsync a file it usually costs at least two write operations,\n> one to write the data and one to update modification/access date, etc.\n> fdatasync synces only data area of the file without it's\n> metainformation.\n\n\tSimilar to us (FreeBSD) mounting our file systems 'noatime', so,\nya, we have it...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 16 Apr 1998 23:24:08 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync -> fdatasync in backend/storage/file/fd.c" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Fri, 17 Apr 1998, Michal Mosiewicz wrote:\n> \n> > I don't remember if it has been mentioned.\n> > \n> > Anyhow, I think that it would be nice to change fsync'es into\n> > fdatasync'es (of course as an autoconf-igurable option). I don't think\n> > it's necessary to update all file's metadata each time a file is\n> > flushed.\n> > \n> > I dunno where it's implemented. But it's for sure implemented in Linux.\n> \n> \tWe don't have it (FreeBSD)...what does it do? *raised eyebrow*\n> And, how many ppl actually have fsync's enabled?\n\nIt's a POSIX thing. fsync will sync the data and the metadata, but\nfdatasync only syncs the data. So in the case of a crash, the inode\nmight not have the right date, etc. This can speed things up, but I\nwouldn't venture a guess as to how much.\n\nOcie\n", "msg_date": "Thu, 16 Apr 1998 19:27:21 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync -> fdatasync in backend/storage/file/fd.c" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> \n> We don't have it (FreeBSD)...what does it do? 
*raised eyebrow*\n> And, how many ppl actually have fsync's enabled?\n\nI always have fsync enabled.\n\n/* m */\n", "msg_date": "Sat, 18 Apr 1998 16:18:27 +0200", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync -> fdatasync in backend/storage/file/fd.c" }, { "msg_contents": "On Sat, 18 Apr 1998, Mattias Kregert wrote:\n\n> The Hermit Hacker wrote:\n> \n> > \n> > We don't have it (FreeBSD)...what does it do? *raised eyebrow*\n> > And, how many ppl actually have fsync's enabled?\n> \n> I always have fsync enabled.\n\n\tWhy? IMHO, the only use for this is where the system you are\nrunning it on is suspect, and you fear it crashing alot...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 18 Apr 1998 14:57:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync -> fdatasync in backend/storage/file/fd.c" }, { "msg_contents": "The Hermit Hacker wrote:\n>\n> > I always have fsync enabled.\n> \n> Why? IMHO, the only use for this is where the system you are\n> running it on is suspect, and you fear it crashing alot...\n> \n\nI started using the -F option to speed thing up, but then I had one\npowerfailure which totally trashed postgresql. I could not recover\nanything and had to restore from previous day's backup, loosing a\nwhole day of work. Then I started using fsync again.\n\nPerhaps some emergency rescue utility would be useful in those cases\nwhen some vital files are trashed. 
A utility which could go thru all\nfiles and try to fix things missing in system catalogs and so on,\nfix all obvious errors, recreate indices, remove duplicates, add\nmissing pieces...\n\n/* m */\n\n", "msg_date": "Sun, 19 Apr 1998 02:46:18 +0200", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fsync -> fdatasync in backend/storage/file/fd.c" } ]
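The difference the thread turns on is visible from user code: fsync forces both the data and the inode metadata out, while POSIX.1b fdatasync forces only the data (plus whatever metadata is needed to read it back), saving the second write. A sketch of the proposed configure-time fallback, using Python's wrappers over the same syscalls on a POSIX system (the helper name is invented for the example):

```python
import os
import tempfile

# Prefer fdatasync (POSIX.1b): it can skip the extra write that fsync
# needs to push mtime/atime updates to the inode.  Fall back to fsync
# where fdatasync is missing -- the autoconf-style choice proposed above.
pg_fsync = getattr(os, "fdatasync", os.fsync)

def durable_append(path, data):
    """Append bytes and force them to stable storage before returning."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
    try:
        os.write(fd, data)
        pg_fsync(fd)          # data is on disk when this returns
    finally:
        os.close(fd)
```

With the fallback in place the code behaves identically on systems (like the 2.0.23 Linux mentioned above) where fdatasync is a mere fsync alias.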
[ { "msg_contents": "testing the (postgresql 6.3.1 included) jdbc ImageViewer example.\nIt works. No errors.\nBut in the pgsql log I fond these (actually many more):\n----------------------------------------------------------------------------\n-----------\nApr 17 10:08:02 digital logger: NOTICE: DateStyle is Postgres with US\n(NonEuropean) conventions\nApr 17 10:08:07 digital logger: NOTICE: buffer leak [65] detected in\nBufferPoolCheckLeak()\nApr 17 10:08:07 digital logger: NOTICE: LockRelease: locktable lookup\nfailed, no lock\nApr 17 10:08:07 digital logger: NOTICE: buffer leak [75] detected in\nBufferPoolCheckLeak()\nApr 17 10:08:08 digital logger: NOTICE: LockRelease: locktable lookup\nfailed, no lock\nApr 17 10:08:47 digital logger: NOTICE: buffer leak [65] detected in\nBufferPoolCheckLeak()\n----------------------------------------------------------------------------\n-----------\nwhat do they mean ?\n\nClaudiu\n\n", "msg_date": "Fri, 17 Apr 1998 10:16:19 +0300", "msg_from": "\"SC Altex Impex SRL\" <[email protected]>", "msg_from_op": true, "msg_subject": "lock failed and buffer leak" }, { "msg_contents": "On Fri, 17 Apr 1998, SC Altex Impex SRL wrote:\n\n> testing the (postgresql 6.3.1 included) jdbc ImageViewer example.\n> It works.
No errors.\n> But in the pgsql log I fond these (actually many more):\n> ----------------------------------------------------------------------------\n> -----------\n> Apr 17 10:08:02 digital logger: NOTICE: DateStyle is Postgres with US\n> (NonEuropean) conventions\n\nThis one is caused by the driver finding out what datestyle is in use.\nThis is normal.\n\n> Apr 17 10:08:07 digital logger: NOTICE: buffer leak [65] detected in\n> BufferPoolCheckLeak()\n> Apr 17 10:08:07 digital logger: NOTICE: LockRelease: locktable lookup\n> failed, no lock\n> Apr 17 10:08:07 digital logger: NOTICE: buffer leak [75] detected in\n> BufferPoolCheckLeak()\n> Apr 17 10:08:08 digital logger: NOTICE: LockRelease: locktable lookup\n> failed, no lock\n> Apr 17 10:08:47 digital logger: NOTICE: buffer leak [65] detected in\n> BufferPoolCheckLeak()\n\nThese are caused by the large object api in the backend. I'm not sure\nwhere these are caused by, but when I was fixing that part of the backend\n(to get it working for JDBC), I couldn't see it.\n\n[Hackers: any ideas?]\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sat, 18 Apr 1998 11:37:07 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] lock failed and buffer leak" } ]
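The "buffer leak" NOTICE means a shared buffer was still pinned when the end-of-transaction sweep ran. The invariant itself is easy to model; the sketch below is an illustration of the pin-count bookkeeping, not the backend's actual buffer-manager data structures.

```python
# Model of pin-count leak detection: every buffer pinned during a
# transaction must be unpinned before commit, or the end-of-transaction
# sweep reports it (like BufferPoolCheckLeak's NOTICE) and resets it.
class BufferPool:
    def __init__(self, nbuffers):
        self.refcount = [0] * nbuffers

    def pin(self, buf):
        self.refcount[buf] += 1

    def unpin(self, buf):
        assert self.refcount[buf] > 0, "unpin of unpinned buffer"
        self.refcount[buf] -= 1

    def check_leak(self):
        """Return ids of buffers still pinned; run at transaction end."""
        leaks = [i for i, rc in enumerate(self.refcount) if rc > 0]
        for i in leaks:              # mirror the backend: report and reset
            print(f"NOTICE: buffer leak [{i}] detected")
            self.refcount[i] = 0
        return leaks
```

A code path that pins a buffer and returns without unpinning (as the large-object path here apparently does) shows up exactly as the repeated NOTICEs in Claudiu's log.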
[ { "msg_contents": "Running the regression test on pgsql 6.3 returned no diffs.\nRunning the regression test on pgsql 6.3.1 returned *lots* of diffs !\nSo I wonder, should I stick to 6.3 or accept 6.3.1 ?\n\nClaudiu\n\n", "msg_date": "Fri, 17 Apr 1998 11:47:01 +0300", "msg_from": "\"SC Altex Impex SRL\" <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql 6.3 vs 6.3.1" } ]
[ { "msg_contents": "Is this really a bug? I haven't seen any (commercial) system supporting\nthis kind of transaction recovery. Once you drop a table the data is\nlost, no matter if you rollback or not. \n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tJose' Soares Da Silva [SMTP:[email protected]]\n> Sent:\tFriday, April 17, 1998 4:30 PM\n> To:\[email protected]; [email protected]\n> Cc:\[email protected]\n> Subject:\t[HACKERS] drop table inside transactions\n> \n> ======================================================================\n> ======\n> POSTGRESQL BUG REPORT TEMPLATE\n> ======================================================================\n> ======\n> \n> Your name\t\t:\tJose' Soares\n> Your email address\t:\[email protected] \n> \n> \n> System Configuration\n> ---------------------\n> Architecture (example: Intel Pentium) \t: Intel Pentium\n> \n> Operating System (example: Linux 2.0.26 ELF) \t: Linux 2.0.31\n> Elf\n> \n> PostgreSQL version (example: PostgreSQL-6.1) : PostgreSQL-snapshot\n> april 6, 1998\n> \n> Compiler used (example: gcc 2.7.2)\t\t: gcc 2.7.2.1\n> \n> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> There's another bug on transactions. 
If one drop a table inside a\n> transaction\n> and then change his mind and rollback work, the table structure is\n> restored\n> but data are lost.\n> Take a look...\n> \n> prova=> begin work;\n> BEGIN\n> prova=> lock table a;\n> DELETE 0\n> prova=> select * from a;\n> a\n> ---\n> 1\n> 13\n> 134\n> (3 rows)\n> \n> prova=> drop table a;\n> DROP\n> prova=> select * from a;\n> ERROR: a: Table does not exist.\n> prova=> rollback;\n> ABORT\n> prova=> select * from a;\n> a\n> -\n> (0 rows)\n> Jose'\n> \n", "msg_date": "Fri, 17 Apr 1998 14:45:00 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] drop table inside transactions" }, { "msg_contents": "> \n> Is this really a bug? I haven't seen any (commercial) system supporting\n> this kind of transaction recovery. Once you drop a table the data is\n> lost, no matter if you rollback or not. \n> \n> Michael\n> \n\nMeta-data changes, like drop table, are not roll-back-able. I knew this\nwas a reported problem, but I don't think it is required by standard, so\nit is not on the TODO list.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 17 Apr 1998 10:48:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] drop table inside transactions" }, { "msg_contents": "On Fri, 17 Apr 1998, Meskes, Michael wrote:\n\n> Is this really a bug? I haven't seen any (commercial) system supporting\n> this kind of transaction recovery. Once you drop a table the data is\n> lost, no matter if you rollback or not. \n> \n> Michael\nMaybe you are right Michael, but there's another point; the table wasn't\nremoved, it is still there, only data are cancelled.\nIt's more, like a DELETE FROM ... not a DROP TABLE... 
\nand, if another user inserts data into this dropped table,\nthe table returns with all data.\n(Refer to my first bug-report on this matter),\nand more; some times ROLLBACK restores both data and table structure. ;-)\n> \n> > prova=> drop table a;\n> > DROP\n> > prova=> select * from a;\n> > ERROR: a: Table does not exist.\n> > prova=> rollback;\n> > ABORT\n> > prova=> select * from a;\n> > a\n> > -\n> > (0 rows)\n> > Jose'\n\n", "msg_date": "Fri, 17 Apr 1998 17:05:08 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] drop table inside transactions" }, { "msg_contents": "Meskes, Michael wrote:\n> \n> Is this really a bug? I haven't seen any (commercial) system supporting\n> this kind of transaction recovery. Once you drop a table the data is\n> lost, no matter if you rollback or not. \n> \n> Michael\n\nI tend to agree. Sybase will not even honor a drop table request\ninside a transaction:\n\n1> begin tran\n2> go\n1> drop table foo\n2> go\nMsg 2762, Level 16, State 4:\nLine 1:\nThe 'DROP TABLE' command is not allowed within a multi-statement transaction in\nthe 'ociedb' database.\n1>\n\nWe _could_ do something like check a \"deleted\" flag in the relation\nand postpone the actual delete until the transaction is committed, but\nat least in my experience, changing table structure is usually best\nleft to human admins as opposed to applications. Rows change but the\nbasic table structure stays the same until the application and schema\nare changed.\n\nOcie\n", "msg_date": "Fri, 17 Apr 1998 10:44:43 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] drop table inside transactions" }, { "msg_contents": "On Fri, 17 Apr 1998, Meskes, Michael wrote:\n\n> Is this really a bug? I haven't seen any (commercial) system supporting\n> this kind of transaction recovery. Once you drop a table the data is\n> lost, no matter if you rollback or not. 
\n> \nSOLID does it, take a look:\n\nSOLID SQL Editor (teletype) v.02.20.0007\n(C) Copyright Solid Information Technology Ltd 1993-1997\nExecute SQL statements terminated by a semicolon.\nExit by giving command: exit;\nConnected to default server.\nselect * from cities;\nCODE CITY\n---- ----\nSFO SAN FRANCISCO\nSTL ST. LOUIS\nSJC SAN JOSE\n3 rows fetched.\n\ndrop table cities;\nCommand completed succesfully, 0 rows affected.\n\ndrop table cities;\nSOLID Table Error 13011: Table CITIES does not exist\n\nrollback work;\nCommand completed succesfully, 0 rows affected.\n\nselect * from cities;\nCODE CITY\n---- ----\nSFO SAN FRANCISCO\nSTL ST. LOUIS\nSJC SAN JOSE\n3 rows fetched.\n Jose'\n\n", "msg_date": "Mon, 20 Apr 1998 11:18:06 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] drop table inside transactions" }, { "msg_contents": "On Fri, 17 Apr 1998 [email protected] wrote:\n\n> Meskes, Michael wrote:\n> > \n> > Is this really a bug? I haven't seen any (commercial) system supporting\n> > this kind of transaction recovery. Once you drop a table the data is\n> > lost, no matter if you rollback or not. \n\nSOLID restore a dropped table inside a transaction.\n\n> > \n> > Michael\n> \n> I tend to agree. Sybase will not even honor a drop table request\n> inside a transaction:\n> \n> 1> begin tran\n> 2> go\n> 1> drop table foo\n> 2> go\n> Msg 2762, Level 16, State 4:\n> Line 1:\n> The 'DROP TABLE' command is not allowed within a multi-statement transaction in\n> the 'ociedb' database.\n> 1>\n> \n> We _could_ do something like check a \"deleted\" flag in the relation\n> and postpone the actual delete until the transaction is committed, but\n> at least in my experience, changing table structure is usually best\n> left to human admins as opposed to applications. 
Rows change but the\n> basic table structure stays the same until the application and schema\n> are changed.\n> \nWhat about temporary tables ?\nWe don't have CREATE TEMPORARY TABLE statement\nthus users need to create\nand drop tmp tables inside transactions. \n Jose'\n\n", "msg_date": "Mon, 20 Apr 1998 12:00:08 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] drop table inside transactions" }, { "msg_contents": ">\n> On Fri, 17 Apr 1998, Meskes, Michael wrote:\n>\n> > Is this really a bug? I haven't seen any (commercial) system supporting\n> > this kind of transaction recovery. Once you drop a table the data is\n> > lost, no matter if you rollback or not.\n> >\n> > Michael\n> Maybe you are right Michael, but there's another point; the table wasn't\n> removed, it is still there, only data are cancelled.\n> It's more, like a DELETE FROM ... not a DROP TABLE...\n> and, if another user inserts data into this dropped table,\n> the table returns with all data.\n> (Refer to my first bug-report on this matter),\n> and more; some times ROLLBACK restores both data and table structure. ;-)\n\n    Partially right. The tables data file was removed at DROP\n    TABLE. On the ROLLBACK, the pg_class and pg_type entries got\n    restored and the storage manager created a new (empty) data\n    file on the SELECT command after the ROLLBACK.\n\n    Maybe we could setup an internal list of files to be deleted\n    on the next transaction commit, so the files remain intact\n    after ROLLBACK.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 30 Apr 1998 11:44:56 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] drop table inside transactions" }, { "msg_contents": "On Thu, 30 Apr 1998, Jan Wieck wrote:\n\n> >\n> > On Fri, 17 Apr 1998, Meskes, Michael wrote:\n> >\n> > > Is this really a bug? I haven't seen any (commercial) system supporting\n> > > this kind of transaction recovery. Once you drop a table the data is\n> > > lost, no matter if you rollback or not.\n> > >\n> > > Michael\n> > Maybe you are right Michael, but there's another point; the table wasn't\n> > removed, it is still there, only data are cancelled.\n> > It's more, like a DELETE FROM ... not a DROP TABLE...\n> > and, if another user inserts data into this dropped table,\n> > the table returns with all data.\n> > (Refer to my first bug-report on this matter),\n> > and more; some times ROLLBACK restores both data and table structure. ;-)\n> \n> Partially right. The tables data file was removed at DROP\n> TABLE. On the ROLLBACK, the pg_class and pg_type entries got\n> restored and the storage manager created a new (empty) data\n> file on the SELECT command after the ROLLBACK.\n> \n> Maybe we could setup an internal list of files to be deleted\n> on the next transaction commit, so the files remain intact\n> after ROLLBACK.\n\nGreat!\n\nRemember that we have the same problem with CREATE DATABASE\nin case of ROLLBACK will be removed references from \"pg_database\"\nbut directory $PGDATA/databasename will not be removed.\n Jose'\n\n", "msg_date": "Thu, 30 Apr 1998 15:25:37 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] drop table inside transactions" } ]
[ { "msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\nYour name\t\t:\tJose' Soares\nYour email address\t:\[email protected] \n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Intel Pentium\n\n Operating System (example: Linux 2.0.26 ELF) \t: Linux 2.0.31 Elf\n\n PostgreSQL version (example: PostgreSQL-6.1) : PostgreSQL-snapshot april 6, 1998\n\n Compiler used (example: gcc 2.7.2)\t\t: gcc 2.7.2.1\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nThere's another bug on transactions. If one drop a table inside a transaction\nand then change his mind and rollback work, the table structure is restored\nbut data are lost.\nTake a look...\n\nprova=> begin work;\nBEGIN\nprova=> lock table a;\nDELETE 0\nprova=> select * from a;\n a\n---\n 1\n 13\n134\n(3 rows)\n\nprova=> drop table a;\nDROP\nprova=> select * from a;\nERROR: a: Table does not exist.\nprova=> rollback;\nABORT\nprova=> select * from a;\na\n-\n(0 rows)\n Jose'\n\n", "msg_date": "Fri, 17 Apr 1998 14:29:37 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "drop table inside transactions" } ]
[ { "msg_contents": "> The biggest problem is that if you have many clients listening on the same\n> thing they are signaled at the same time and all of them try to access the\n> pg_listener table for write. The result is that you have a lot of waits on\n> the table and sometimes also deadlocks if you don't do things carefully.\n\nRight, I recall seeing some things about that in the mailing list\narchives (from you, no doubt?). I had the impression that async.c\nhad been changed to handle this better as of the current release.\nIs there still a problem?\n\n(Fortunately, I don't expect a *lot* of clients waiting on the same\ntable, but deadlock would still be very bad news...)\n\n> From the Tcl side, a better solution would be to define a tcl event handler,\n> like the standard Tcl filehandler, which would be invoked automatically by\n> the Tk event loop or by tkwait if using pure Tcl.\n\nI agree.\n\nI don't have an immediate need for Tcl-based clients, so I was just\ngoing to revise libpg and libpg++. Do you want to redo libpgtcl?\nI'd probably get to that eventually, but splitting the work sounds\nbetter :-).\n\nI'll post something later today about what the extensions to the\nlibpg API should look like.\n\n> I have also some new patches which try to reduce the notify overhead by\n> avoiding unnecessary unlocks of the table. If you are interested I can\n> post them.\n\nPlease do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Apr 1998 10:46:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Anyone working on asynchronous NOTIFY reception? " }, { "msg_contents": "> \n> > The biggest problem is that if you have many clients listening on the same\n> > thing they are signaled at the same time and all of them try to access the\n> > pg_listener table for write. 
The result is that you have a lot of waits on\n> > the table and sometimes also deadlocks if you don't do things carefully.\n> \n> Right, I recall seeing some things about that in the mailing list\n> archives (from you, no doubt?). I had the impression that async.c\n> had been changed to handle this better as of the current release.\n> Is there still a problem?\n> \n> (Fortunately, I don't expect a *lot* of clients waiting on the same\n> table, but deadlock would still be very bad news...)\n> \n> > From the Tcl side, a better solution would be to define a tcl event handler,\n> > like the standard Tcl filehandler, which would be invoked automatically by\n> > the Tk event loop or by tkwait if using pure Tcl.\n> \n> I agree.\n> \n> I don't have an immediate need for Tcl-based clients, so I was just\n> going to revise libpg and libpg++. Do you want to redo libpgtcl?\n> I'd probably get to that eventually, but splitting the work sounds\n> better :-).\n\nNot now, I am too busy.\n\n> I'll post something later today about what the extensions to the\n> libpg API should look like.\n> \n> > I have also some new patches which try to reduce the notify overhead by\n> > avoiding unnecessary unlocks of the table. If you are interested I can\n> > post them.\n> \n> Please do.\n> \n> \t\t\tregards, tom lane\n\nThis is the patch against 6.2.1p7. I haven't the the time to port it to 6.3.1.\nThe idea is to notify the backends while we have a write lock on the table\nbefore doing the first CommitTransactionCommand. Otherwise if we must also\nnotify our frontend we almost certainly get the lock again only after all the\nother backends have processed the notify and this may take a lot of time.\n\nNote however that there is a little problem by releasing the lock before the\nend of transaction: you may get duplicate records in pg_listener if more\nbackends are notifying the same relation at the same time. 
I don't know why\nthis happens and hadn't time to investigate, so I wrote a quick hack in\nAsync_NotifyFrontEnd_Aux() to avoid the problem (search for \"notifyHack\").\n\nThis is what I found in my pg_listener:\nmytable | 627| 0\nmytable | 627| 0\nmytable | 627| 0\n\nAnd this is the patch for 6.2.1p7:\n\n*** async.c.orig\tTue Jan 27 17:06:42 1998\n--- async.c\tThu Mar 19 01:09:49 1998\n***************\n*** 22,30 ****\n *\t\t notification (we are notifying something that we are listening),\n *\t\t signal the corresponding frontend over the comm channel using the\n *\t\t out-of-band channel.\n! *\t 2.b For all other listening processes, we send kill(2) to wake up\n! *\t\t the listening backend.\n! * 3. Upon receiving a kill(2) signal from another backend process notifying\n *\t that one of the relation that we are listening is being notified,\n *\t we can be in either of two following states:\n *\t 3.a We are sleeping, wake up and signal our frontend.\n--- 22,30 ----\n *\t\t notification (we are notifying something that we are listening),\n *\t\t signal the corresponding frontend over the comm channel using the\n *\t\t out-of-band channel.\n! *\t 2.b For all other listening processes, we send a SIGUSR2 signal\n! *\t\t to wake up the listening backend.\n! * 3. 
Upon receiving a SIGUSR2 signal from another backend process notifying\n *\t that one of the relation that we are listening is being notified,\n *\t we can be in either of two following states:\n *\t 3.a We are sleeping, wake up and signal our frontend.\n***************\n*** 85,99 ****\n #include <port-protos.h>\t\t/* for strdup() */\n \n #include <storage/lmgr.h>\n \n static int\tnotifyFrontEndPending = 0;\n static int\tnotifyIssued = 0;\n static Dllist *pendingNotifies = NULL;\n \n- \n static int\tAsyncExistsPendingNotify(char *);\n static void ClearPendingNotify(void);\n static void Async_NotifyFrontEnd(void);\n void Async_Unlisten(char *relname, int pid);\n static void Async_UnlistenOnExit(int code, char *relname);\n \n--- 85,105 ----\n #include <port-protos.h>\t\t/* for strdup() */\n \n #include <storage/lmgr.h>\n+ #include <utils/trace.h>\n+ \n+ #define notifyUnlock pg_options[OPT_NOTIFYUNLOCK]\n+ #define notifyHack pg_options[OPT_NOTIFYHACK]\n+ \n+ GlobalMemory notifyContext = NULL;\n \n static int\tnotifyFrontEndPending = 0;\n static int\tnotifyIssued = 0;\n static Dllist *pendingNotifies = NULL;\n \n static int\tAsyncExistsPendingNotify(char *);\n static void ClearPendingNotify(void);\n static void Async_NotifyFrontEnd(void);\n+ static void Async_NotifyFrontEnd_Aux(void);\n void Async_Unlisten(char *relname, int pid);\n static void Async_UnlistenOnExit(int code, char *relname);\n \n***************\n*** 121,145 ****\n {\n \textern TransactionState CurrentTransactionState;\n \n \tif ((CurrentTransactionState->state == TRANS_DEFAULT) &&\n \t\t(CurrentTransactionState->blockState == TRANS_DEFAULT))\n \t{\n! \n! #ifdef ASYNC_DEBUG\n! \t\telog(DEBUG, \"Waking up sleeping backend process\");\n! #endif\n \t\tAsync_NotifyFrontEnd();\n- \n \t}\n \telse\n \t{\n! #ifdef ASYNC_DEBUG\n! \t\telog(DEBUG, \"Process is in the middle of another transaction, state = %d, block state = %d\",\n! \t\t\t CurrentTransactionState->state,\n! 
\t\t\t CurrentTransactionState->blockState);\n! #endif\n \t\tnotifyFrontEndPending = 1;\n \t}\n }\n \n /*\n--- 127,152 ----\n {\n \textern TransactionState CurrentTransactionState;\n \n+ \tTPRINTF(TRACE_NOTIFY, \"Async_NotifyHandler\");\n+ \n \tif ((CurrentTransactionState->state == TRANS_DEFAULT) &&\n \t\t(CurrentTransactionState->blockState == TRANS_DEFAULT))\n \t{\n! \t\tTPRINTF(TRACE_NOTIFY, \"Waking up sleeping backend process\");\n \t\tAsync_NotifyFrontEnd();\n \t}\n \telse\n \t{\n! \t\tTPRINTF(TRACE_NOTIFY,\n! \t\t\t\t\"Process is in the middle of another transaction, \"\n! \t\t\t\t\"state = %d, block state = %d\",\n! \t\t\t\tCurrentTransactionState->state,\n! \t\t\t\tCurrentTransactionState->blockState);\n \t\tnotifyFrontEndPending = 1;\n+ \t\tTPRINTF(TRACE_NOTIFY, \"Async_NotifyHandler: notify frontend pending\");\n \t}\n+ \n+ \tTPRINTF(TRACE_NOTIFY, \"Async_NotifyHandler done\");\n }\n \n /*\n***************\n*** 184,192 ****\n \n \tchar\t *notifyName;\n \n! #ifdef ASYNC_DEBUG\n! \telog(DEBUG, \"Async_Notify: %s\", relname);\n! #endif\n \n \tif (!pendingNotifies)\n \t\tpendingNotifies = DLNewList();\n--- 191,197 ----\n \n \tchar\t *notifyName;\n \n! \tTPRINTF(TRACE_NOTIFY, \"Async_Notify: %s\", relname);\n \n \tif (!pendingNotifies)\n \t\tpendingNotifies = DLNewList();\n***************\n*** 224,234 ****\n \t\t\theap_replace(lRel, &lTuple->t_ctid, rTuple);\n \t\t}\n \t\tReleaseBuffer(b);\n \t}\n \theap_endscan(sRel);\n! \tRelationUnsetLockForWrite(lRel);\n \theap_close(lRel);\n! \tnotifyIssued = 1;\n }\n \n /*\n--- 229,249 ----\n \t\t\theap_replace(lRel, &lTuple->t_ctid, rTuple);\n \t\t}\n \t\tReleaseBuffer(b);\n+ \t\tnotifyIssued = 1;\n \t}\n \theap_endscan(sRel);\n! \n! \t/*\n! \t * Note: if we unset the lock or we could get multiple tuples\n! \t * with same oid if other backends notify the same relation.\n! \t */\n! \tif (notifyUnlock) {\n! \t\tRelationUnsetLockForWrite(lRel);\n! \t}\n! \n \theap_close(lRel);\n! \n! 
\tTPRINTF(TRACE_NOTIFY, \"Async_Notify: done %s\", relname);\n }\n \n /*\n***************\n*** 278,286 ****\n \t\t{\t\t\t\t\t\t/* 'notify <relname>' issued by us */\n \t\t\tnotifyIssued = 0;\n \t\t\tStartTransactionCommand();\n! #ifdef ASYNC_DEBUG\n! \t\t\telog(DEBUG, \"Async_NotifyAtCommit.\");\n! #endif\n \t\t\tScanKeyEntryInitialize(&key, 0,\n \t\t\t\t\t\t\t\t Anum_pg_listener_notify,\n \t\t\t\t\t\t\t\t Integer32EqualRegProcedure,\n--- 293,299 ----\n \t\t{\t\t\t\t\t\t/* 'notify <relname>' issued by us */\n \t\t\tnotifyIssued = 0;\n \t\t\tStartTransactionCommand();\n! \t\t\tTPRINTF(TRACE_NOTIFY, \"Async_NotifyAtCommit\");\n \t\t\tScanKeyEntryInitialize(&key, 0,\n \t\t\t\t\t\t\t\t Anum_pg_listener_notify,\n \t\t\t\t\t\t\t\t Integer32EqualRegProcedure,\n***************\n*** 303,318 ****\n \n \t\t\t\t\tif (ourpid == DatumGetInt32(d))\n \t\t\t\t\t{\n- #ifdef ASYNC_DEBUG\n- \t\t\t\t\t\telog(DEBUG, \"Notifying self, setting notifyFronEndPending to 1\");\n- #endif\n \t\t\t\t\t\tnotifyFrontEndPending = 1;\n \t\t\t\t\t}\n \t\t\t\t\telse\n \t\t\t\t\t{\n! #ifdef ASYNC_DEBUG\n! \t\t\t\t\t\telog(DEBUG, \"Notifying others\");\n! #endif\n #ifdef HAVE_KILL\n \t\t\t\t\t\tif (kill(DatumGetInt32(d), SIGUSR2) < 0)\n \t\t\t\t\t\t{\n--- 316,330 ----\n \n \t\t\t\t\tif (ourpid == DatumGetInt32(d))\n \t\t\t\t\t{\n \t\t\t\t\t\tnotifyFrontEndPending = 1;\n+ \t\t\t\t\t\tTPRINTF(TRACE_NOTIFY,\n+ \t\t\t\t\t\t\t\t\"Async_NotifyAtCommit notifying self\");\n \t\t\t\t\t}\n \t\t\t\t\telse\n \t\t\t\t\t{\n! \t\t\t\t\t\tTPRINTF(TRACE_NOTIFY,\n! \t\t\t\t\t\t\t\t\"Async_NotifyAtCommit notifying %d\",\n! \t\t\t\t\t\t\t\tDatumGetInt32(d));\n #ifdef HAVE_KILL\n \t\t\t\t\t\tif (kill(DatumGetInt32(d), SIGUSR2) < 0)\n \t\t\t\t\t\t{\n***************\n*** 327,344 ****\n \t\t\t\tReleaseBuffer(b);\n \t\t\t}\n \t\t\theap_endscan(sRel);\n- \t\t\tRelationUnsetLockForWrite(lRel);\n \t\t\theap_close(lRel);\n- \n- \t\t\tCommitTransactionCommand();\n \t\t\tClearPendingNotify();\n- \t\t}\n \n! 
\t\tif (notifyFrontEndPending)\n! \t\t{\t\t\t\t\t\t/* we need to notify the frontend of all\n! \t\t\t\t\t\t\t\t * pending notifies. */\n! \t\t\tnotifyFrontEndPending = 1;\n! \t\t\tAsync_NotifyFrontEnd();\n \t\t}\n \t}\n }\n--- 339,361 ----\n \t\t\t\tReleaseBuffer(b);\n \t\t\t}\n \t\t\theap_endscan(sRel);\n \t\t\theap_close(lRel);\n \t\t\tClearPendingNotify();\n \n! \t\t\tif (notifyFrontEndPending)\n! \t\t\t{\n! \t\t\t\t/* Notify the frontend inside the current transaction! */\n! \t\t\t\tAsync_NotifyFrontEnd_Aux();\n! \t\t\t}\n! \n! \t\t\tTPRINTF(TRACE_NOTIFY, \"Async_NotifyAtCommit done\");\n! \t\t\tCommitTransactionCommand();\n! \t\t} else {\n! \t\t\t/* Notify the frontend of pending notifies from other backends. */\n! \t\t\tif (notifyFrontEndPending)\n! \t\t\t{\n! \t\t\t\tAsync_NotifyFrontEnd();\n! \t\t\t}\n \t\t}\n \t}\n }\n***************\n*** 422,430 ****\n \tchar\t *relnamei;\n \tTupleDesc\ttupDesc;\n \n! #ifdef ASYNC_DEBUG\n! \telog(DEBUG, \"Async_Listen: %s\", relname);\n! #endif\n \tfor (i = 0; i < Natts_pg_listener; i++)\n \t{\n \t\tnulls[i] = ' ';\n--- 439,445 ----\n \tchar\t *relnamei;\n \tTupleDesc\ttupDesc;\n \n! \tTPRINTF(TRACE_NOTIFY, \"Async_Listen: %s\", relname);\n \tfor (i = 0; i < Natts_pg_listener; i++)\n \t{\n \t\tnulls[i] = ' ';\n***************\n*** 457,462 ****\n--- 472,480 ----\n \t\t\t}\n \t\t}\n \t\tReleaseBuffer(b);\n+ \t\tif (alreadyListener) {\n+ \t\t\tbreak;\n+ \t\t}\n \t}\n \theap_endscan(s);\n \n***************\n*** 464,485 ****\n \t{\n \t\telog(NOTICE, \"Async_Listen: We are already listening on %s\",\n \t\t\t relname);\n \t\treturn;\n \t}\n \n \ttupDesc = lDesc->rd_att;\n! \ttup = heap_formtuple(tupDesc,\n! \t\t\t\t\t\t values,\n! 
\t\t\t\t\t\t nulls);\n \theap_insert(lDesc, tup);\n- \n \tpfree(tup);\n \n- \t/*\n- \t * if (alreadyListener) { elog(NOTICE,\"Async_Listen: already one\n- \t * listener on %s (possibly dead)\",relname); }\n- \t */\n- \n \tRelationUnsetLockForWrite(lDesc);\n \theap_close(lDesc);\n \n--- 482,497 ----\n \t{\n \t\telog(NOTICE, \"Async_Listen: We are already listening on %s\",\n \t\t\t relname);\n+ \t\tRelationUnsetLockForWrite(lDesc);\n+ \t\theap_close(lDesc);\n \t\treturn;\n \t}\n \n \ttupDesc = lDesc->rd_att;\n! \ttup = heap_formtuple(tupDesc, values, nulls);\n \theap_insert(lDesc, tup);\n \tpfree(tup);\n \n \tRelationUnsetLockForWrite(lDesc);\n \theap_close(lDesc);\n \n***************\n*** 519,534 ****\n \tlTuple = SearchSysCacheTuple(LISTENREL, PointerGetDatum(relname),\n \t\t\t\t\t\t\t\t Int32GetDatum(pid),\n \t\t\t\t\t\t\t\t 0, 0);\n- \tlDesc = heap_openr(ListenerRelationName);\n- \tRelationSetLockForWrite(lDesc);\n- \n \tif (lTuple != NULL)\n \t{\n \t\theap_delete(lDesc, &lTuple->t_ctid);\n- \t}\n \n! \tRelationUnsetLockForWrite(lDesc);\n! \theap_close(lDesc);\n }\n \n static void\n--- 531,545 ----\n \tlTuple = SearchSysCacheTuple(LISTENREL, PointerGetDatum(relname),\n \t\t\t\t\t\t\t\t Int32GetDatum(pid),\n \t\t\t\t\t\t\t\t 0, 0);\n \tif (lTuple != NULL)\n \t{\n+ \t\tlDesc = heap_openr(ListenerRelationName);\n+ \t\tRelationSetLockForWrite(lDesc);\n \t\theap_delete(lDesc, &lTuple->t_ctid);\n \n! \t\tRelationUnsetLockForWrite(lDesc);\n! \t\theap_close(lDesc);\n! 
\t}\n }\n \n static void\n***************\n*** 560,570 ****\n *\n * --------------------------------------------------------------\n */\n- GlobalMemory notifyContext = NULL;\n- \n static void\n Async_NotifyFrontEnd()\n {\n \textern CommandDest whereToSendOutput;\n \tHeapTuple\tlTuple,\n \t\t\t\trTuple;\n--- 571,595 ----\n *\n * --------------------------------------------------------------\n */\n static void\n Async_NotifyFrontEnd()\n {\n+ \tStartTransactionCommand();\n+ \tAsync_NotifyFrontEnd_Aux();\n+ \tCommitTransactionCommand();\n+ }\n+ \n+ /*\n+ * --------------------------------------------------------------\n+ * Async_NotifyFrontEnd_Aux --\n+ *\n+ *\t\tLike Async_NotifyFrontEnd but MUST be called inside a transaction.\n+ *\n+ * --------------------------------------------------------------\n+ */\n+ static void\n+ Async_NotifyFrontEnd_Aux()\n+ {\n \textern CommandDest whereToSendOutput;\n \tHeapTuple\tlTuple,\n \t\t\t\trTuple;\n***************\n*** 580,592 ****\n \tint\t\t\tourpid;\n \tbool\t\tisnull;\n \n! \tnotifyFrontEndPending = 0;\n \n! #ifdef ASYNC_DEBUG\n! \telog(DEBUG, \"Async_NotifyFrontEnd: notifying front end.\");\n! #endif\n \n! \tStartTransactionCommand();\n \tourpid = getpid();\n \tScanKeyEntryInitialize(&key[0], 0,\n \t\t\t\t\t\t Anum_pg_listener_notify,\n--- 605,616 ----\n \tint\t\t\tourpid;\n \tbool\t\tisnull;\n \n! \tchar\t\t*hack[32];\n! \tint\t\t\ti, hack_count = 0;\n \n! \tnotifyFrontEndPending = 0;\n \n! 
\tTPRINTF(TRACE_NOTIFY, \"Async_NotifyFrontEnd\");\n \tourpid = getpid();\n \tScanKeyEntryInitialize(&key[0], 0,\n \t\t\t\t\t\t Anum_pg_listener_notify,\n***************\n*** 611,620 ****\n--- 635,664 ----\n \t{\n \t\td = heap_getattr(lTuple, b, Anum_pg_listener_relname,\n \t\t\t\t\t\t tdesc, &isnull);\n+ \n+ \t\t/* Hack to delete duplicate tuples (possible if notifyUnlock is set) */\n+ \t\tif (notifyHack) {\n+ \t\t\tfor (i=0; i<hack_count; i++) {\n+ \t\t\t\tif (strcmp(DatumGetName(d)->data, hack[i]) == 0) {\n+ \t\t\t\t\tTPRINTF(TRACE_NOTIFY,\n+ \t\t\t\t\t\t\t\"Async_NotifyFrontEnd duplicate %s\",\n+ \t\t\t\t\t\t\tDatumGetName(d)->data);\n+ \t\t\t\t\theap_delete(lRel, &lTuple->t_ctid);\n+ \t\t\t\t\tgoto release_buffer;\n+ \t\t\t\t}\n+ \t\t\t}\n+ \t\t\tif (hack_count < 32) {\n+ \t\t\t\thack[hack_count++] = pstrdup(DatumGetName(d)->data);\n+ \t\t\t}\n+ \t\t}\n+ \n \t\trTuple = heap_modifytuple(lTuple, b, lRel, value, nulls, repl);\n \t\theap_replace(lRel, &lTuple->t_ctid, rTuple);\n \n \t\t/* notifying the front end */\n+ \t\tTPRINTF(TRACE_NOTIFY,\n+ \t\t\t\t\"Async_NotifyFrontEnd notifying %s\",\n+ \t\t\t\tDatumGetName(d)->data);\n \n \t\tif (whereToSendOutput == Remote)\n \t\t{\n***************\n*** 625,635 ****\n \t\t}\n \t\telse\n \t\t{\n! \t\t\telog(NOTICE, \"Async_NotifyFrontEnd: no asynchronous notification to frontend on interactive sessions\");\n \t\t}\n \t\tReleaseBuffer(b);\n \t}\n! \tCommitTransactionCommand();\n }\n \n static int\n--- 669,686 ----\n \t\t}\n \t\telse\n \t\t{\n! \t\t\telog(NOTICE,\n! \t\t\t\t \"Async_NotifyFrontEnd: no asynchronous notification \"\n! \t\t\t\t \"to frontend on interactive sessions\");\n \t\t}\n+ \n+ \trelease_buffer:\n \t\tReleaseBuffer(b);\n \t}\n! \theap_endscan(sRel);\n! \theap_close(lRel);\n! \tRelationUnsetLockForWrite(lRel);\n! 
\tTPRINTF(TRACE_NOTIFY, \"Async_NotifyFrontEnd done\");\n }\n \n static int\n\n\nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto e-mail: [email protected] |\n| Via Marconi, 141 phone: ++39-461-534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Wed, 22 Apr 1998 22:36:40 +0200 (MET DST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Anyone working on asynchronous NOTIFY reception?" } ]
[ { "msg_contents": "Here's what I propose to do with libpq to make it more useful for\nreal-time applications. Any comments or better ideas?\n\nThe point of these changes is first to allow frontend applications to\nreceive NOTIFY responses without having to generate dummy queries,\nand second to allow a frontend to perform other work while awaiting\nthe result of a query.\n\nWe can't break existing code for this, so the behavior of PQexec()\ncan't change. Instead, I propose new functions to add to the API.\nInternally, PQexec will be reimplemented in terms of these new\nfunctions, but old apps won't notice any difference.\n\nThe new functions are:\n\n\tint PQexecAsync (PGconn *conn, const char *query);\n\nSubmits a query without waiting for the result. Returns TRUE if the\nquery has been successfully dispatched, otherwise FALSE (in the FALSE\ncase, an error message is left in conn->errorMessage).\n\n\ttypedef enum {\n\t\tPGASYNC_IDLE,\n\t\tPGASYNC_BUSY,\n\t\tPGASYNC_DONE\n\t} PGAsyncStatusType;\n\n\tPGAsyncStatusType PQasyncStatus (PGconn *conn);\n\nIndicates the current status of an asynchronous query:\n\tPGASYNC_IDLE: nothing doing\n\tPGASYNC_BUSY: async query in progress\n\tPGASYNC_DONE: query done, can retrieve result with PQasyncResult\nWhen the state is PGASYNC_DONE, calling PQasyncResult will reset the state\nto PGASYNC_IDLE. A new query can only be submitted in the IDLE state.\n\n\tPGresult* PQasyncResult (PGconn *conn);\n\nIf the state is PGASYNC_DONE and the query was successful, a PGresult\nblock is returned (which the caller must eventually free). In all other\ncases, NULL is returned and a suitable error message is left in\nconn->errorMessage. Also, if the state is PGASYNC_DONE then it is\nreset to PGASYNC_IDLE.\n\n\tvoid PQconsumeInput (PGconn *conn);\n\nThis can be called at any time to check for and process new input from\nthe backend. 
It returns no status indication, but after calling it\nthe application can inspect PQasyncStatus() and/or PQnotifies()\nto see if a query was completed or a NOTIFY message arrived.\n\n\tint PQsocket (PGconn *conn);\n\nReturns the Unix file descriptor for the socket connection to the\nbackend, or -1 if there is no open connection. This is a violation of\nmodularity, of course, but there is no alternative: an application using\nthis facility needs to be able to use select() to wait for input from\neither the backend or any other input streams it may have. To use\nselect() the underlying socket must be made visible.\n\n\tPGnotify *PQnotifies (PGconn *conn);\n\nThis function doesn't need to change; we just observe that notifications\nmay become available as a side effect of executing either PQexec() or\nPQconsumeInput().\n\n\nThe general assumption is that the application's main loop will use\nselect() to wait for input. If select() indicates that input is\npending from the backend, then the app will call PQconsumeInput,\nfollowed by checking PQasyncStatus() and/or PQnotifies().\n\nI expect a lot of people would build \"partially async\" applications that\nstill do all the queries through PQexec(), but detect notifies\nasynchronously via select/PQconsumeInput/PQnotifies. This compromise\nwould allow notifies to be detected without issuing null queries,\nwithout complicating the basic logic of issuing a series of queries.\n\nThe same functionality should be added to libpg++.\n\n\nSome issues to be resolved:\n\n1. The above API assumes that only one query can be outstanding at a\ntime (per connection). Is there any prospect that the backends will\never be able to handle multiple concurrent queries? If so, we should\ndesign the API so that PQexecAsync returns some kind of \"active query\"\nobject that's separate from the connection object. 
Then PQasyncStatus\nand PQasyncResult would apply to these objects individually (probably\nthey should act a little differently than given above, too).\n\n2. Any comments about the naming conventions I used? The existing code\nseems a tad inconsistent; what is considered the right practice as to\ncapitalization etc?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Apr 1998 13:05:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal for async support in libpq" }, { "msg_contents": "> \n> Here's what I propose to do with libpq to make it more useful for\n> real-time applications. Any comments or better ideas?\n> \n> The point of these changes is first to allow frontend applications to\n> receive NOTIFY responses without having to generate dummy queries,\n> and second to allow a frontend to perform other work while awaiting\n> the result of a query.\n> \n> We can't break existing code for this, so the behavior of PQexec()\n> can't change. Instead, I propose new functions to add to the API.\n> Internally, PQexec will be reimplemented in terms of these new\n> functions, but old apps won't notice any difference.\n\nThis all looks good. Another thing we really need is to be able to\ncancel queries. This would be a big win, and looks like it could fit\ninto the scheme here.\n\nIdeally, I would like to control-c in psql, and have the query cancel,\ninstead of exiting from psql.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 17 Apr 1998 14:00:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> This all looks good. Another thing we really need is to be able to\n> cancel queries. 
This would be a big win, and looks like it could fit\n> into the scheme here.\n\nI thought about proposing a PQcancelAsync that would cancel the active\nquery-in-progress. But that would require support on the backend side,\nand I am far from competent to make it happen. (libpq is simple enough\nthat I'm not afraid to rewrite it, but making major mods to the backend\nis another story. I just got here this week...)\n\nIf anyone who does know what they're doing is willing to make the\nnecessary backend mods, I'm all for it. The libpq side would be\neasy enough.\n\nHow would such cancellation interact with transactions, btw? Would\nyou expect it to roll back only the current command, or abort the\nwhole transaction? We'd also have to consider corner cases, like\nwhen the backend has already finished the query by the time it gets\nthe cancel request.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Apr 1998 15:51:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > This all looks good. Another thing we really need it to be able to\n> > cancel queries. This would be a big win, and looks like it could fit\n> > into the scheme here.\n> \n> I thought about proposing a PQcancelAsync that would cancel the active\n> query-in-progress. But that would require support on the backend side,\n> and I am far from competent to make it happen. (libpq is simple enough\n> that I'm not afraid to rewrite it, but making major mods to the backend\n> is another story. I just got here this week...)\n> \n> If anyone who does know what they're doing is willing to make the\n> necessary backend mods, I'm all for it. The libpq side would be\n> easy enough.\n> \n> How would such cancellation interact with transactions, btw? Would\n> you expect it to roll back only the current command, or abort the\n> whole transaction? 
We'd also have to consider corner cases, like\n> when the backend has already finished the query by the time it gets\n> the cancel request.\n\nIt is pretty easy, just an elog(ERROR) would do it. The issue is\nallowing the backend to see the request. We can put some checks in\ntcop/postgres.c as it moves from module to module, and something in the\nexecutor to check for the cancel, and do an elog(ERROR). It would be\nnice if it arrived as out-of-band data, so we could check for input\nquickly without having to actually process it if it is not a cancel\nnotification. \n\nThe out-of-band data will send a SIGURG signal to the backend, and we\ncan set a global variable, and check the variable at various places.\n\nTo do all this, we need to be able to send a query, and not have it\nblock, and it seems you are giving us this capability.\n\nYou supply the indication to the backend, and I will see that the\nbackend processes it properly.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 17 Apr 1998 16:26:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> You supply the indication to the backend, and I will see that the\n> backend processes it properly.\n\nYou're on ;-)\n\nSignaling the cancel request via OOB sounds reasonable, as long as\nnothing else is using it and all the systems we care about support it.\n(I see a couple of routines to support OOB data in\nsrc/backend/libpq/pqcomm.c, but they don't seem to be called from\nanywhere. Vestiges of an old protocol, perhaps?)\n\nI still need to understand better what the backend will send back\nin response to a cancel request, especially if it's idle by the\ntime the request arrives. 
Will that result in an asynchronous error\nresponse of some sort? Do I need to make said response visible to\nthe frontend application? (Probably not ... it will have already\ndiscovered that the query completed normally.)\n\nHow should cancellation interact with copy in/out?\n\nThese are mostly documentation issues, rather than stuff that directly\naffects code in libpq, but we ought to nail it down.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Apr 1998 16:47:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq " }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > You supply the indication to the backend, and I will see that the\n> > backend processes it properly.\n> \n> You're on ;-)\n> \n> Signaling the cancel request via OOB sounds reasonable, as long as\n> nothing else is using it and all the systems we care about support it.\n> (I see a couple of routines to support OOB data in\n> src/backend/libpq/pqcomm.c, but they don't seem to be called from\n> anywhere. Vestiges of an old protocol, perhaps?)\n\nProbably. There is a document on the libpq protocol somewhere. I\nassume you have that already. It is pgsql/docs/programmer.ps.gz, around\npage 118.\n\n> \n> I still need to understand better what the backend will send back\n> in response to a cancel request, especially if it's idle by the\n> time the request arrives. Will that result in an asynchronous error\n> response of some sort? Do I need to make said response visible to\n> the frontend application? (Probably not ... it will have already\n> discovered that the query completed normally.)\n\nNot sure the backend has to signal that it received the cancel request. \nDoes it? 
It could just return a NULL result, that I think is caused by\nelog(ERROR) anyway, and we can put in some nice fancy text like 'query\naborted'.\n\n\n> \n> How should cancellation interact with copy in/out?\n\nNot sure on that one. May not be possible or desirable, but we could\nput something in commands/copy.c to check for cancel request.\n\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Fri, 17 Apr 1998 17:01:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > This all looks good. Another thing we really need it to be able to\n> > cancel queries. This would be a big win, and looks like it could fit\n> > into the scheme here.\n> \n> I thought about proposing a PQcancelAsync that would cancel the active\n> query-in-progress. But that would require support on the backend side,\n> and I am far from competent to make it happen. (libpq is simple enough\n> that I'm not afraid to rewrite it, but making major mods to the backend\n> is another story. I just got here this week...)\n\nIn backend/libpq/pqcomm.c, I see pg_sendoob() which sends out-of-band\ndata FROM the backend TO the client, but it is not called from anywhere.\n\nThis could be a method of signaling that a notification was pending, and\nsending out-of-band data FROm the client TO the backend could be used\nfor cancelling a query.\n\nout-of-band data causes a convenient signal to the process on the other\nend, which can easily be used to handle these cases.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Fri, 17 Apr 1998 17:01:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "> \n> Bruce Momjian <[email protected]> writes:\n> > This all looks good. Another thing we really need is to be able to\n> > cancel queries. This would be a big win, and looks like it could fit\n> > into the scheme here.\n> \n> I thought about proposing a PQcancelAsync that would cancel the active\n> query-in-progress. But that would require support on the backend side,\n> and I am far from competent to make it happen. (libpq is simple enough\n> that I'm not afraid to rewrite it, but making major mods to the backend\n> is another story. I just got here this week...)\n\nIn backend/libpq/pqcomm.c, I see pg_sendoob() which sends out-of-band\ndata FROM the backend TO the client, but it is not called from anywhere.\n\nThis could be a method of signaling that a notification was pending, and\nsending out-of-band data FROM the client TO the backend could be used\nfor cancelling a query.\n\nout-of-band data causes a convenient signal to the process on the other\nend, which can easily be used to handle these cases.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 18 Apr 1998 01:13:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "While I understand the desire to implement async support with the\nminimum of fuss, SQL3 provides the ASYNC, TEST and WAIT statements. \nThis would be a more \"standard\" solution, but obviously requires\nimplementation in the backend.\n\nPhil\n", "msg_date": "Sat, 18 Apr 1998 09:41:31 +0000", "msg_from": "Phil Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "> In backend/libpq/pqcomm.c, I see pg_sendoob() which sends out-of-band\n> data FROM the backend TO the client, but it is not called from anywhere.\n> \n> This could be a method of signaling that a notification was pending, and\n> sending out-of-band data FROM the client TO the backend could be used\n> for cancelling a query.\n> \n> out-of-band data causes a convenient signal to the process on the other\n> end, which can easily be used to handle these cases.\n\nWasn't the problem with OOB data that java doesn't support this? I \nremember that OOB data has come up before on this list a long time ago, \nand that at that time some java bloke (peter?) 
started to complain :)\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Sat, 18 Apr 1998 12:10:47 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "On Sat, 18 Apr 1998, Bruce Momjian wrote:\n\n> In backend/libpq/pqcomm.c, I see pg_sendoob() which sends out-of-band\n> data FROM the backend TO the client, but it is not called from anywhere.\n> \n> This could be a method of signaling that a notification was pending, and\n> sending out-of-band data FROm the client TO the backend could be used\n> for cancelling a query.\n> \n> out-of-band data causes a convenient signal to the process on the other\n> end, which can easily be used to handle these cases.\n\nJust a quick question: If you have an OOB packet sent to the backend, how\nwould we handle the case where a row is being sent to the backend, but the\nOOB packet comes in the middle of it?\n\nIt may sound like a silly question, but I'm thinking if a client is on the\nend of a slow network connection, then the packet containing the row could\nbecome fragmented, and the OOB packet could get in the way.\n\nAnyhow, I'm trying to find out how to implement OOB in Java. I know it's\nthere, as I've seen it in the past. 
Just can't find it at the moment.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sat, 18 Apr 1998 11:34:25 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "On Sat, 18 Apr 1998, Maarten Boekhold wrote:\n\n> Wasn't the problem with OOB data that java doesn't support this? I \n> remember that OOB data has come up before on this list a long time ago, \n> and that at that time some java bloke (peter?) started to complain :)\n\nI said at the time, that I wasn't certain that Java did or didn't support\nit. Since then, I have noticed references to it, but I've lost them since.\n\nI'm delving into the docs as I type, looking for this.\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sat, 18 Apr 1998 11:40:45 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "A combined reply to several comments:\n\nPhil Thompson <[email protected]> writes:\n> While I understand the desire to implement async support with the\n> minimum of fuss, SQL3 provides the ASYNC, TEST and WAIT statements. \n> This would be a more \"standard\" solution,\n\n... but it doesn't solve the problem. My concern is first to get rid\nof the need to issue dummy queries to detect NOTIFY responses. 
This is\nclearly not an SQL language issue, only a matter of poor design of libpq.\nThe business about supporting async queries is just a nice bonus as far\nas I'm concerned.\n\nI don't believe I would want to use the SQL3 approach even if we had it.\nI don't know SQL3, but if I make the obvious guess about how these are\nsupposed to work, then you need a round trip to the server to issue an\nasync query, and another round trip every time you want to see if it's\ndone. (Plus one to fetch the results?) Now as far as I'm concerned,\nthe point of these changes is to *decrease* communication loads and\nunproductive server cycles. Not increase them. I want a scheme that\ndoesn't require the client to create busywork for itself, the server,\nand the network.\n\n\nPeter T Mount <[email protected]> writes:\n> Just a quick question: If you have an OOB packet sent to the backend, how\n> would we handle the case where a row is being sent to the backend, but the\n> OOB packet comes in the middle of it?\n\nThis is the same copy in/out issue I asked about yesterday, no?\nI think what Bruce had in mind was that the backend would check for\nan OOB cancel once per row, or something like that. This needs to\nbe documented, but it seems reasonable...\n\n\nBruce Momjian <[email protected]> writes:\n> In backend/libpq/pqcomm.c, I see pg_sendoob() which sends out-of-band\n> data FROM the backend TO the client, but it is not called from anywhere.\n> This could be a method of signaling that a notification was pending, and\n> sending out-of-band data FROm the client TO the backend could be used\n> for cancelling a query.\n\nI don't see any real need to issue outgoing notifications as OOB data.\nIf the client needed to interrupt current processing to handle a notify,\nthen maybe it'd be useful, but how likely is that? 
Keep in mind that\nthe backend currently delays notify reports until end of transaction.\nI don't think we really want to change that...\n\nThis brings up another issue that I meant to ask about before: exactly\nwhat is the relation between interfaces/libpq and backend/libpq? It\nsorta looks like they were once the same code. Why were they allowed to\nbecome different, and is there any point in trying to unify them again?\n\n\nMaarten Boekhold wrote:\n> Wasn't the problem with OOB data that java doesn't support this?\n\nI was a little concerned about that too, but as long as we only use it\nfor noncritical functionality like query cancellation, I don't see that\nit's a killer if some environments don't have it. But if the server\nissued outgoing notifies via OOB, then we'd have a serious loss of\nfunctionality in any environment without OOB.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 18 Apr 1998 12:12:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq " }, { "msg_contents": "> \n> > In backend/libpq/pqcomm.c, I see pg_sendoob() which sends out-of-band\n> > data FROM the backend TO the client, but it is not called from anywhere.\n> > \n> > This could be a method of signaling that a notification was pending, and\n> > sending out-of-band data FROm the client TO the backend could be used\n> > for cancelling a query.\n> > \n> > out-of-band data causes a convenient signal to the process on the other\n> > end, which can easily be used to handle these cases.\n> \n> Wasn't the problem with OOB data that java doesn't support this? I \n> remember that OOB data has come up before on this list a long time ago, \n> and that at that time some java bloke (peter?) 
started to complain :)\n\nI sure don't remember this topic, or anyone complaining about OOB.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 18 Apr 1998 21:16:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "> Just a quick question: If you have an OOB packet sent to the backend, how\n> would we handle the case where a row is being sent to the backend, but the\n> OOB packet comes in the middle of it?\n> \n> It may sound like a silly question, but I'm thinking if a client is on the\n> end of a slow network connection, then the packet containing the row could\n> become fragmented, and the OOB packet could get in the way.\n> \n> Anyhow, I'm trying to find out how to implement OOB in Java. I know it's\n> there, as I've seen it in the past. Just can't find it at the moment.\n> \n\nBecause it is TCP/IP, the packets are re-assembled, so you can't get the\nOOB inside a normal packet. It is not like UDP. Second, the OOB data\ndoes not arrive in the normal data stream, but must be read by\nspecifying the MSG_OOB flag to the recv() system call. One issue raised\nby Stevens' \"Unix Network Programming\" (p. 333) is that the OOB\nsignal (SIGURG) can arrive before the data is ready to be read.\n\nI have Stevens' book, and it will probably be required to get this\nworking properly.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Sat, 18 Apr 1998 21:44:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > In backend/libpq/pqcomm.c, I see pg_sendoob() which sends out-of-band\n> > data FROM the backend TO the client, but it is not called from anywhere.\n> > This could be a method of signaling that a notification was pending, and\n> > sending out-of-band data FROm the client TO the backend could be used\n> > for cancelling a query.\n> \n> I don't see any real need to issue outgoing notifications as OOB data.\n> If the client needed to interrupt current processing to handle a notify,\n> then maybe it'd be useful, but how likely is that? Keep in mind that\n> the backend currently delays notify reports until end of transaction.\n> I don't think we really want to change that...\n\nWell, if you are trying to prevent from sending queries through libpq to\nsee if you have any notifications, how will you get notification without\nan OOB-generated signal? The notification would have to come through a\npacket from the backend, and I thought you didn't want to have to deal\nwith that?\n\n> \n> This brings up another issue that I meant to ask about before: exactly\n> what is the relation between interfaces/libpq and backend/libpq? It\n> sorta looks like they were once the same code. Why were they allowed to\n> become different, and is there any point in trying to unify them again?\n\ninterfaces/libpq is the client side, and backend/libpq is the server\nside. One sends queries, the other passes them to the backend\ninternals. 
There has been some effort to merge code that is common to\nboth, but they definitely don't do the same thing.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 18 Apr 1998 22:12:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I don't see any real need to issue outgoing notifications as OOB data.\n\n> Well, if you are trying to prevent from sending queries through libpq to\n> see if you have any notifications, how will you get notification without\n> an OOB-generated signal? The notification would have to come through a\n> packet from the backend, and I thought you didn't want to have to deal\n> with that?\n\nNo, I have no problem with getting a regular packet from the backend\nwhen the notify condition occurs. What I don't like is creating excess\nnetwork traffic above and beyond the notification packet --- especially\nhaving to \"poll\" continuously to see whether the condition has\noccurred. But using select() to wait for something to happen does not\ninduce network traffic.\n\nThe only advantage of sending outgoing notifications as OOB is the fact\nthat a SIGURG signal gets delivered to the recipient, which could be\nused to trigger abandonment of some current operation. But I have a\nhard time perceiving where a client would want that, as opposed to\ndetecting the notify after it completes whatever it's currently doing.\n\nSending cancellation requests inbound to the server is exactly what OOB\nis for, because there you must interrupt current processing to get the\ndesired result. 
Outbound notify signals are a different thing IMHO.\nAn SQL NOTIFY is typically going to trigger new processing in the\nclient, not cancel an operation in progress.\n\nThere are positive reasons *not* to force applications to handle\nnotifies as OOB data, primarily having to do with portability and risk\nof breaking things. For example, consider a frontend app that already\ndeals with OOB/SIGURG on a different input channel. If libpq takes over\nSIGURG signal handling, we break the app. If not, we probably still\nbreak the app, because its signal handling logic is likely expecting\nSIGURG only from the other channel.\n\nIn short, inbound OOB to the server is OK because we have control of\neverything that will be affected. Outbound OOB is not OK because\nwe don't.\n\n> One issue raised\n> by Stevens' \"Unix Network Programming\"(p. 333) is that the OOB\n> signal(SIGURG) can arrive before the data is ready to be read.\n\nRight. One advantage of using OOB only for cancel is that the SIGURG\nsignal itself is the interesting event; you don't really *need* to get\nthe OOB data to know what to do. You can read and discard the OOB data\nat any convenient point, perhaps just before trying to read normal data\nfrom the client channel.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Apr 1998 14:02:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq " }, { "msg_contents": "\nJust a thought. Tom is after some scheme to receive notification messages,\nout side the normal network stream, then why not add a second network\nconnection for this.\n\nWe could have in libpq, some new calls, to open, handle requests, and to\nclose the connection. 
It's then up to the client to handle these, so\nexisting clients will not be broken.\n\nThis would be a doddle to do in Java, and shouldn't be too difficult for\nlibpq, and libpgtcl (call backs are almost as simple to do in tcl as they\nare in Java).\n\nJust a couple of thoughts...\n\n-- \nPeter T Mount [email protected] or [email protected]\nMain Homepage: http://www.demon.co.uk/finder (moving soon to www.retep.org.uk)\n************ Someday I may rebuild this signature completely ;-) ************\nWork Homepage: http://www.maidstone.gov.uk Work EMail: [email protected]\n\n", "msg_date": "Sun, 19 Apr 1998 22:44:33 +0100 (BST)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> >> I don't see any real need to issue outgoing notifications as OOB data.\n> \n> > Well, if you are trying to prevent from sending queries through libpq to\n> > see if you have any notifications, how will you get notification without\n> > an OOB-generated signal? The notification would have to come through a\n> > packet from the backend, and I thought you didn't want to have to deal\n> > with that?\n> \n> No, I have no problem with getting a regular packet from the backend\n> when the notify condition occurs. What I don't like is creating excess\n> network traffic above and beyond the notification packet --- especially\n> not having to \"poll\" continuously to see whether the condition has\n> occurred. But using select() to wait for something to happen does not\n> induce network traffic.\n> \n> The only advantage of sending outgoing notifications as OOB is the fact\n> that a SIGURG signal gets delivered to the recipient, which could be\n> used to trigger abandonment of some current operation. 
But I have a\n> hard time perceiving where a client would want that, as opposed to\n> detecting the notify after it completes whatever it's currently doing.\n> \n> Sending cancellation requests inbound to the server is exactly what OOB\n> is for, because there you must interrupt current processing to get the\n> desired result. Outbound notify signals are a different thing IMHO.\n> An SQL NOTIFY is typically going to trigger new processing in the\n> client, not cancel an operation in progress.\n> \n> There are positive reasons *not* to force applications to handle\n> notifies as OOB data, primarily having to do with portability and risk\n> of breaking things. For example, consider a frontend app that already\n> deals with OOB/SIGURG on a different input channel. If libpq takes over\n\nWhen the Postgresql library installs its signal handler for SIGURG, it\ncan find out if one was already in place. If so, it can check to see\nif the SIGURG is for that other handler and the postgres handler can\ncall the other handler.\n\nOcie\n", "msg_date": "Sun, 19 Apr 1998 15:07:20 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "Peter T Mount <[email protected]> writes:\n> Just a thought. Tom is after some scheme to receive notification messages,\n> out side the normal network stream, then why not add a second network\n> connection for this.\n\n*I* certainly am not after that, and I see no reason to create a second\nnetwork connection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Apr 1998 19:49:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq " }, { "msg_contents": "[email protected] writes:\n> When the Postgresql library installs its signal handler for SIGURG, it\n> can find out if one was already in place. 
If so, it can check to see\n> if the SIGURG is for that other handler and the postgres handler can\n> call the other handler.\n\nCool ... but what makes you think that you get to go second? The app\ncould install or remove its SIGURG handler at any time.\n\nAlso, how would you tell whether the SIGURG was \"for that other\nhandler\"? As Bruce pointed out, the signal may be delivered before any\nOOB data is actually available to be read; therefore there is no way for\nthe signal handler to be sure whether the SIGURG came off the postgres\nsocket or some other one.\n\nBasically, the Unix interface to OOB data is too brain-damaged to\nbe useful with more than one source of OOB data :-(. We can usefully\nuse it in the backend, because we can just declare that that's all the\nbackend will ever use OOB input for. But I don't think we can make\nthe same choice for frontend applications.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Apr 1998 19:57:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq " }, { "msg_contents": "> \n> No, I have no problem with getting a regular packet from the backend\n> when the notify condition occurs. What I don't like is creating excess\n> network traffic above and beyond the notification packet --- especially\n> not having to \"poll\" continuously to see whether the condition has\n> occurred. But using select() to wait for something to happen does not\n> induce network traffic.\n\nGot it. I guess I suspected that you would not necessarily be in a\nselect() call at all times. 
If you are waiting for user input, or using\nlibpq and your app is waiting for some keyboard input, you really are\nnot hanging waiting for input from the backend, are you?\n\n> \n> The only advantage of sending outgoing notifications as OOB is the fact\n> that a SIGURG signal gets delivered to the recipient, which could be\n> used to trigger abandonment of some current operation. But I have a\n> hard time perceiving where a client would want that, as opposed to\n> detecting the notify after it completes whatever it's currently doing.\n> \n> Sending cancellation requests inbound to the server is exactly what OOB\n> is for, because there you must interrupt current processing to get the\n> desired result. Outbound notify signals are a different thing IMHO.\n> An SQL NOTIFY is typically going to trigger new processing in the\n> client, not cancel an operation in progress.\n> \n> There are positive reasons *not* to force applications to handle\n> notifies as OOB data, primarily having to do with portability and risk\n> of breaking things. For example, consider a frontend app that already\n> deals with OOB/SIGURG on a different input channel. If libpq takes over\n> SIGURG signal handling, we break the app. If not, we probably still\n> break the app, because its signal handling logic is likely expecting\n> SIGURG only from the other channel.\n> \n> In short, inbound OOB to the server is OK because we have control of\n> everything that will be affected. Outbound OOB is not OK because\n> we don't.\n> \n> > One issue raised\n> > by Stevens' \"Unix Network Programming\"(p. 333) is that the OOB\n> > signal(SIGURG) can arrive before the data is ready to be read.\n> \n> Right. One advantage of using OOB only for cancel is that the SIGURG\n> signal itself is the interesting event; you don't really *need* to get\n> the OOB data to know what to do. 
You can read and discard the OOB data\n> at any convenient point, perhaps just before trying to read normal data\n> from the client channel.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 20 Apr 1998 13:58:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" }, { "msg_contents": "On Fri, 17 Apr 1998, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > You supply the indication to the backend, and I will see that the\n> > backend processes it properly.\n> \n> You're on ;-)\n> \n> Signaling the cancel request via OOB sounds reasonable, as long as\n> nothing else is using it and all the systems we care about support it.\n\n SSH doesn't have OOB. You can't send an OOB via SSH encrypted channel. \n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger [email protected]\n\n", "msg_date": "Tue, 21 Apr 1998 00:59:55 -0400 (EDT)", "msg_from": "Jan Vicherek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for async support in libpq " }, { "msg_contents": "Jan Vicherek <[email protected]> writes:\n> On Fri, 17 Apr 1998, Tom Lane wrote:\n>> Signaling the cancel request via OOB sounds reasonable, as long as\n>> nothing else is using it and all the systems we care about support it.\n\n> SSH doesn't have OOB. You can't send an OOB via SSH encrypted channel. \n\nI was afraid we'd run into something like that.\n\nWell, here's how I see it: cancelling requests in progress is an\noptional capability. 
If it doesn't work on certain communication\nchannels I can live with that. I would rather see that than see the\nbackend slowed down by checking for cancel requests sent as normal\ndata (without OOB).\n\nA client app will actually have no way to tell whether a cancel request\nhas any effect; if the comm channel drops OOB requests on the floor,\nit will look the same as if the cancel didn't get to the server until\nafter the server had finished the query. So this shouldn't really\ncause any software to fail anyway.\n\nOTOH, I would not like to see NOTIFY broken when using an SSH channel,\nso this is another reason not to try to use OOB for the outbound\ndirection.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Apr 1998 12:22:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Proposal for async support in libpq " }, { "msg_contents": "> \n> On Fri, 17 Apr 1998, Tom Lane wrote:\n> \n> > Bruce Momjian <[email protected]> writes:\n> > > You supply the indication to the backend, and I will see that the\n> > > backend processes it properly.\n> > \n> > You're on ;-)\n> > \n> > Signaling the cancel request via OOB sounds reasonable, as long as\n> > nothing else is using it and all the systems we care about support it.\n> \n> SSH doesn't have OOB. You can't send an OOB via SSH encrypted channel. \n\nI have trouble buying that. SSH is just the socket filter. Perhaps the\nOOB data is not encrypted like the normal data?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Tue, 21 Apr 1998 12:34:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for async support in libpq" }, { "msg_contents": "\n for one, it is not encrypted,\n for two, the receiving end doesn't listen for OOB (most likely because\nsending side doesn't encrypt it)\n The sending side doesn't encrypt it because OOB concept (flushing unsent\ndata) is incompatible with simple *single-stream* encryption. flushing\nbreaks the decryption of the data -- it corrupts the stream, so it becomes\nunencryptable.\n\n Jan\n\nOn Tue, 21 Apr 1998, Bruce Momjian wrote:\n\n> > On Fri, 17 Apr 1998, Tom Lane wrote:\n> > \n> > > Bruce Momjian <[email protected]> writes:\n> > > > You supply the indication to the backend, and I will see that the\n> > > > backend processes it properly.\n> > > \n> > > You're on ;-)\n> > > \n> > > Signaling the cancel request via OOB sounds reasonable, as long as\n> > > nothing else is using it and all the systems we care about support it.\n> > \n> > SSH doesn't have OOB. You can't send an OOB via SSH encrypted channel. \n> \n> I have trouble buying that. SSH is just the socket filter. Perhaps the\n> OOB data is not encrypted like the normal data?\n\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger [email protected]\n\n", "msg_date": "Tue, 21 Apr 1998 15:27:04 -0400 (EDT)", "msg_from": "Jan Vicherek <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for async support in libpq" }, { "msg_contents": "Tom Lane wrote:\n> \n> Jan Vicherek <[email protected]> writes:\n> > SSH doesn't have OOB. 
You can't send an OOB via SSH encrypted channel.\n> \n> I was afraid we'd run into something like that.\n> \n> Well, here's how I see it: cancelling requests in progress is an\n> optional capability. If it doesn't work on certain communication\n> channels I can live with that. I would rather see that than see the\n> backend slowed down by checking for cancel requests sent as normal\n> data (without OOB).\n> \n> A client app will actually have no way to tell whether a cancel request\n> has any effect; if the comm channel drops OOB requests on the floor,\n> it will look the same as if the cancel didn't get to the server until\n> after the server had finished the query. So this shouldn't really\n> cause any software to fail anyway.\n> \n> OTOH, I would not like to see NOTIFY broken when using an SSH channel,\n> so this is another reason not to try to use OOB for the outbound\n> direction.\n> \n\nWe could use some kind of threaded model,\nwith the main thread running the current execution path\nand a minimal thread just \"select:ing on the socket\".\nA 2 process model would be most portable,\na pthread solution would be cleaner.\n\nNOTIFY/CANCEL could be an option for modern \n(in this case, pthreading) systems only.\n\n\tregards,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Wed, 22 Apr 1998 16:29:49 +0200", "msg_from": "\"Göran Thyni\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Proposal for async support in libpq" }, { "msg_contents": "> > Bruce Momjian <[email protected]> writes:\n> > > This all looks good. Another thing we really need it to be able to\n> > > cancel queries. This would be a big win, and looks like it could fit\n> > > into the scheme here.\n> > \n> > I thought about proposing a PQcancelAsync that would cancel the active\n> > query-in-progress. But that would require support on the backend side,\n> > and I am far from competent to make it happen. 
(libpq is simple enough\n> > that I'm not afraid to rewrite it, but making major mods to the backend\n> > is another story. I just got here this week...)\n> > \n> > If anyone who does know what they're doing is willing to make the\n> > necessary backend mods, I'm all for it. The libpq side would be\n> > easy enough.\n> > \n> > How would such cancellation interact with transactions, btw? Would\n> > you expect it to roll back only the current command, or abort the\n> > whole transaction? We'd also have to consider corner cases, like\n> > when the backend has already finished the query by the time it gets\n> > the cancel request.\n> \n> It is pretty easy, just an elog(ERROR) would do it. The issue is\n> allowing the backend to see the request. We can put some checks in\n> tcop/postgres.c as it moves from module to module, and something in the\n> executor to check for the cancel, and do an elog(ERROR). It would be\n> nice if it arrived as out-of-band data, so we could check for input\n> quickly without having to actually process it if it is not a cancel\n> notification. \n> \n> The out-of-band data will send a SIGURG signal to the backend, and we\n> can set a global variable, and check the variable at various places.\n> \n> To do all this, we need to be able to send a query, and not have it\n> block, and it seems you are giving us this capability.\n> \n> You supply the indication to the backend, and I will see that the\n> backend processes it properly.\n\nOld news I know, but I was saving it to follow up and then ...\n\nI agree completely with Bruces proposal for handling this in the back-end.\nI have recently done something very similar for another database product.\n\nThe important points are:\n\n - the signal handler can only set a single global variable. No other action\n is to be done in the handler.\n\n - the variable is to be tested only at well defined times where the recovery\n from an elog() can be handled correctly. 
It is nice if this test is\n   \"frequent, but not too frequent\". At scan begin time is fairly good, and\n   for large scans perhaps every few pages. Every row is too often. When\n   stepping to a new plan node is also good.\n\n - There should be a further global flag to disable recognition of the\n   cancel. This is used for example during an in-progress elog() -> cleanup\n   sequence. The cleanup code is not really reentrant so an elog() in the\n   middle of an elog is likely to leave an inconsistent state.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n\"Of course, someone who knows more about this will correct me if I'm wrong,\n and someone who knows less will correct me if I'm right.\"\n --David Palmer ([email protected])\n", "msg_date": "Sat, 16 May 1998 00:09:05 -0700 (PDT)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposal for async support in libpq" } ]
[ { "msg_contents": "\n\tCould some one tell me please what symbols the Dec UNIX compiler\ndefines on the Alpha? Bascially run src/tools/ccsym of the pgsql source\ntree, and send me the output. I need this to sort out the differences\nbetween how to setup defines for Dec Alpha code sections and Linux/Alpha\ncode sections. Thanks!\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n", "msg_date": "Sat, 18 Apr 1998 14:14:35 -0500 (CDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Symbols defined on Dec Alpha...." } ]
[ { "msg_contents": "I wonder why it sometimes restores the data. Maybe because it's still in\nthe same area of the disk/file?\n\nMichael\n--\nDr. Michael Meskes, Projekt-Manager | topystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Use Debian GNU/Linux! | Tel: (+49) 2405/4670-44\n\n> ----------\n> From: \tJose' Soares Da Silva[SMTP:[email protected]]\n> Sent: \tFreitag, 17. April 1998 19:05\n> To: \tMeskes, Michael\n> Cc: \[email protected]; [email protected];\n> [email protected]\n> Subject: \tRE: [HACKERS] drop table inside transactions\n> \n> On Fri, 17 Apr 1998, Meskes, Michael wrote:\n> \n> > Is this really a bug? I haven't seen any (commercial) system\n> supporting\n> > this kind of transaction recovery. Once you drop a table the data is\n> > lost, no matter if you rollback or not. \n> > \n> > Michael\n> Maybe you are right Michael, but there's another point; the table\n> wasn't\n> removed, it is still there, only data are cancelled.\n> It's more, like a DELETE FROM ... not a DROP TABLE... \n> and, if another user inserts data into this dropped table,\n> the table returns with all data.\n> (Refer to my first bug-report on this matter),\n> and more; some times ROLLBACK restores both data and table structure.\n> ;-)\n> \n", "msg_date": "Sat, 18 Apr 1998 22:04:15 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] drop table inside transactions" } ]
[ { "msg_contents": "Hi.\nI was wondering if you used any leak finding tools in your code? There\nseems to be a few mallocs that don't get freed, and a few internal buffer\noverruns.\nI don't know if it will work for you, but I'd suggest trying out a proggie\ncalled memcheck. I use it for wireless terminals (DOS based) and it works\ngreat. When the proggie shuts down, it tells me the location and size of\nevery structure that wasn't freed. Also it will tell me during runtime\nwhen I overwrite something that I shouldn't have. Just a suggestion.\n\nIf I had more time I would dig in and see if I couldn't do some work on\nsome code here.\n\n-Mike\n\n\n\n", "msg_date": "Sun, 19 Apr 1998 10:12:43 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Leaks?" }, { "msg_contents": "We are releasing a 6.3.2 fix, perhaps today, that will fix this.\n> \n> Hi.\n> I was wondering if you used any leak finding tools in your code? There\n> seems to be a few mallocs that don't get freed, and a few internal buffer\n> overruns.\n> I don't know if it will work for you, but I'd suggest trying out a proggie\n> called memcheck. I use it for wireless terminals (DOS based) and it works\n> great. When the proggie shuts down, it tells me the location and size of\n> every structure that wasn't freed. Also it will tell me during runtime\n> when I overwrite something that I shouldn't have. Just a suggestion.\n> \n> If I had more time I would dig in and see if I couldn't do some work on\n> some code here.\n> \n> -Mike\n> \n> \n> \n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 19 Apr 1998 10:35:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Leaks?" 
}, { "msg_contents": "On Sun, 19 Apr 1998, Michael Richards wrote:\n\n> Hi.\n> I was wondering if you used any leak finding tools in your code? There\n\n\tSeveral times over the past two years, there have been people that\nhave popped up with tools like Purify and ElectricFence, providing\n\"insight\" into our code as far as memory leaks have been concerned...with\nthe complexity of the code, I don't believe it's ever *really* possible to\nnever have one or two, but, if/when reported, we try and get it fixed as\nquickly as possible...\n\n\t...with that in mind, point us to where the leak is, and we'll try\nand fix it :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 19 Apr 1998 13:15:40 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Leaks?" }, { "msg_contents": "On Sun, 19 Apr 1998, The Hermit Hacker wrote:\n\n> > I was wondering if you used any leak finding tools in your code? There\n> \n> \tSeveral times over the past two years, there have been people that\n> have popped up with tools like Purify and ElectricFence, providing\n> \t...with that in mind, point us to where the leak is, and we'll try\n> and fix it :)\nWell, I wasn't convinced until I started using Memcheck either, but I\nfound a logfile that spit stuff like\nWarning 387 bytes allocated in test.c(133) not freed\nor \nWarning BUFFER overrun at memcpy(testbuf) in file test1.c(26)\n\nwas really useful. If I ever find the time to dig in the source, I was\nthinking of linking in memcheck and seeing what it could tell me. No sense\nin my doing that if someone was already doing it to find the leaks...\n\n-Mike\n\n", "msg_date": "Sun, 19 Apr 1998 19:05:04 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Leaks?" } ]
[ { "msg_contents": "I am running a FreeBSD 2.2.5 system that requires sfio to be installed. I\nam also running FastCGI for faster CGI access as well as maintaining open\ndatabase connections.\nCompiling craps out, but the fix is quite simple. You might want to\ninclude something in the config script. Here is what I changed.\n\nin Makefile.global\n213c213\n< LDFLAGS= -L/usr/local/lib -lcrypt -lcompat -lln -lm -lreadline -ltermcap\n-lcurses -lsfio\n---\n> LDFLAGS= -L/usr/local/lib -lcrypt -lcompat -lln -lm -lreadline -ltermcap\n-lcurses\n\n(I added a -lsfio)\n\nIn interfaces/ecpg/preproc/Makefile\nHere are my changes:\n23c23\n< # Rule that really does something.\n---\n> # Rule that really do something.\n25c25\n< $(CC) -o ecpg y.tab.o pgc.o type.o ecpg.o ../lib/typename.o\n$(LEXLIB) -L /usr/local/lib -lsfio\n---\n> $(CC) -o ecpg y.tab.o pgc.o type.o ecpg.o ../lib/typename.o\n$(LEXLIB)\n\n\nI think it would be nice to implement these changes 'cause lots of folks\nare installing FastCGI. My problem was that I installed PostgreSQL first\nso it was already compiled when the Safe Fast IO (sfio) libs were\ninstalled. When I went up to 6.3 it caused me lots of grief...\n\nMaybe someone can come up with a nifty way of detecting if libsfio.a is\naround and make these makefiles work if so...\n\nthanks\n-Mike\n\n", "msg_date": "Sun, 19 Apr 1998 10:28:01 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Compiling with sfio installed" }, { "msg_contents": "On Sun, 19 Apr 1998, Michael Richards wrote:\n\n> Maybe someone can come up with a nifty way of detecting if libsfio.a is\n> around and make these makefiles work if so...\n\n\tAlready done...v6.3.2 is being released today...(I'm just building\nup the tar file now)\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 19 Apr 1998 13:11:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Compiling with sfio installed" } ]
[ { "msg_contents": "Hi\n\nI am sorry, I can't write English well.\nI want to know something.\nMay I ask you a question?\n\nMy system is a Linux box running Redhat 5.0 (Pentium 133MHz, 80Mbyte RAM, 120Mbyte swap) and\nRedhat 5.0 (Pentium 160MHz, 16Mbyte RAM, 60Mbyte swap).\n\nI installed PostgreSQL 6.3.1.\nI also installed the int8 support, because my data is greater than 5000000000 (signed).\nI created a table like below:\n\ncreate table lr(\nrec_cd char(2),\nactl_sect char(4),\nmagam_ymd char(6),\nactl_hq char(2),\nactl_dept char(4),\naip_cd char(1),\ncd char(2),\nbj char(2),\nval int8[][][],\nprimary key(rec_cd,actl_sect,magam_ymd,aip_cd,cd,bj)\n);\n\nArray val has [6][6][6] dimensions.\nCount(lr) is 33000 now, and after some time it will be more than 1000000 records.\n\nI test this query:\nselect lr.val[1][6][6] from lr where\n rec_cd='1' and actl_sect='9' and magam_ymd='199802' and aip_cd='1' \n and cd='MM' and bj='BT';\n\nbut the result takes 30 seconds. Too slow!!!\nWhy? I am blue....\n\nHelp me please.\n\n\n", "msg_date": "Mon, 20 Apr 1998 08:37:31 KST", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Help me please." } ]
[ { "msg_contents": "Hi, all\n\nI have some problems with transactions and locks...\nI have 5 questions about them...\n \n1. NOTICE: (transaction aborted): queries ignored until END\n   *ABORT STATE*\n   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n   I would like to know what the above message means.\n   - Seems that transactions abort at the smallest syntax error ?\n   - Seems that all work done until this point is lost. \n   - Am I right?\n   - If yes, what can I do to go on?\n     Seems that I can do nothing but COMMIT or ROLLBACK.\n   - Seems that COMMIT has the same effect as ROLLBACK,\n     because all changes are lost anyway.\n   - If that's true, why is it necessary to do COMMIT or ROLLBACK?\n   - And what about locks?\n     Are all locks released before COMMIT/ROLLBACK?\n \n2. LOCKED FOR EVER AND EVER...\n   ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n   If user2 tries to SELECT a table locked by user1, user2 falls into a trap:\n   he can do nothing to free himself from this trap,\n   and he must wait until user1 ends his work.\n\n   - Is there a way to avoid this trap? \n     I think it's useless to lock tables even for readonly operations.\n   - user2 shouldn't be able to UPDATE tables but he should be able \n     to SELECT tables.\n   - ...or at least user2 should have the possibility to choose whether \n     he wants to wait forever for a table to become available or to retry later\n     to see if the table was unlocked.\n     A message like:\n     \"TABLE <tablename> IS LOCKED BY USER <username> PLEASE, TRY LATER\"\n     would be preferable.\n   - In case this interests someone:\n     Oracle's LOCK TABLE <tablename> IN EXCLUSIVE MODE\n     allows read access to locked tables.\n \n3. DROP TABLE <tablename> or DELETE FROM <tablename> ?\n   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n   Dropping a table inside a transaction and then rolling back the work\n   is equivalent to DELETE FROM <tablename>; the table's structure will\n   be restored but the data will be lost:\n \n   postgres=> begin;\n   BEGIN\n \n   postgres=> select * from cities;\n   code|city\n   ----+-------------\n   SFO |SAN FRANCISCO\n   STL |ST. LOUIS\n   (2 rows)\n \n   postgres=> drop table cities;\n   DROP\n \n   postgres=> rollback;\n   ABORT\n \n   postgres=> select * from cities;\n   code|city\n   ----+----\n   (0 rows)\n \n4. MIRACLE: THE DROPPED TABLE IS RETURNED.\n   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n   User <manuel> drops a table inside a transaction while \n   user <jose> is trying to query the same table;\n   user <manuel> changes his mind and rolls back the work;\n   at this point user <jose> sees the result of his query, \n   and inserts a new row into the table.\n   At this point the dropped table returns with all its data.\n \n   --user: manuel-------------------------------------------------\n \n   postgres=> select current_user;\n   getpgusername\n   -------------\n   manuel\n \n   postgres=> begin;\n   BEGIN\n \n   postgres=> select * from cities;\n   code|city\n   ----+-------------\n   SFO |SAN FRANCISCO\n   STL |ST. LOUIS\n   (2 rows)\n \n   postgres=> lock cities;\n   DELETE 0\n \n   --user jose--------------------------------------------------\n \n   postgres=> select current_user;\n   getpgusername\n   -------------\n   jose\n \n   postgres=> select * from cities;\n   --jose was caught in a trap, waiting for ever and ever ...\n \n   --user manuel again---------------------------------------------\n \n   postgres=> drop table cities;\n   DROP\n \n   postgres=> rollback;\n   ABORT\n \n   --user jose again---------------------------------------------\n   -- (finally jose is \"free\" and has his query result):\n \n   code|city\n   ----+-------------\n   SFO |SAN FRANCISCO\n   STL |ST. LOUIS\n   (2 rows)\n \n   -- and now, jose decides to append a new row to the table...\n   postgres=> insert into cities values ('SJC','SAN JOSE');\n   INSERT 460390 1\n \n   --and user manuel queries the table... ------------------------------------\n   -- et voila'... the table and all its data are returned...\n   postgres=> select * from cities;\n   code|city\n   ----+-------------\n   SFO |SAN FRANCISCO\n   STL |ST. LOUIS\n   SJC |SAN JOSE\n   (3 rows)\n \n5. LOCK AT ROW LEVEL\n   ^^^^^^^^^^^^^^^^^\n   Massimo Dal Zotto has done very useful work with locks at row level\n   (refer to .../contrib/userlock) and it would be interesting to implement\n   these functions as SQL statements.\n \n   --to lock a row(s)...\n   SELECT user_write_lock_oid(OID), oid, *\n   FROM cities\n   WHERE city LIKE 'SAN%';\n \n   user_write_lock_oid| oid|code|city\n   -------------------+------+----+-------------\n   1|460388|SFO |SAN FRANCISCO\n   1|460390|SJC |SAN JOSE\n\n   --if the result of \"user_write_lock_oid\" is 1, then the row(s) are available\n   --and you can update them...\n \n   --to unlock the row(s)...\n   SELECT user_write_unlock_oid(OID)\n   FROM cities\n   WHERE oid = 460388 OR oid = 460390;\n \n   - In case this interests someone, Oracle uses a similar way of locking rows,\n     take a look...\n\n   SELECT ROWID, *\n   FROM cities\n   WHERE city LIKE 'SAN%'\n   FOR UPDATE OF city;\n \n   ROWID\tCODE\tCITY\n   __________________________________\n   460388\tSFO \tSAN FRANCISCO\n   460390\tSJC \tSAN JOSE\n\n   --if the row(s) are available then you can update them...\n\n   UPDATE cities\n   SET city = INITCAP(city)\n   WHERE rowid = 460388 OR rowid = 460390;\n\n   --to unlock the row(s)...\n   COMMIT\n\n   Oracle uses ROWIDs to lock rows; we also have OIDs...\n   How difficult is it to implement?\n\n Ciao, Jose'\n\n", "msg_date": "Mon, 20 Apr 1998 10:08:45 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "errors on transactions and locks ?" } ]
[ { "msg_contents": "v6.3.2 980419 builds OK on FreeBSD v3.0 -snap-980222\nNo config problems, works just fine.\nThanks for a very fine version of postgreSQL v6.3.2\n\njim\[email protected]\n", "msg_date": "Mon, 20 Apr 1998 11:37:44 EDT", "msg_from": "Kapsaris <[email protected]>", "msg_from_op": true, "msg_subject": "v6.3.2 build on FreeBSD" } ]
[ { "msg_contents": "\t>> Meskes, Michael wrote:\n\t>> > \n\t>> > Is this really a bug? I haven't seen any (commercial) system\nsupporting\n\t>> > this kind of transaction recovery. Once you drop a table the\ndata is\n\t>> > lost, no matter if you rollback or not. \n\t>\n\t>SOLID restores a dropped table with rollback.\n\n\tSame with Informix.\n", "msg_date": "Mon, 20 Apr 1998 14:06:17 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] drop table inside transactions" }, { "msg_contents": "> \n> \t>> Meskes, Michael wrote:\n> \t>> > \n> \t>> > Is this really a bug? I haven't seen any (commercial) system\n> supporting\n> \t>> > this kind of transaction recovery. Once you drop a table the\n> data is\n> \t>> > lost, no matter if you rollback or not. \n> \t>\n> \t>SOLID restores a dropped table with rollback.\n> \n> \tSame with Informix.\n> \n> \n\nAdded to TODO list:\n\n\t* Allow table creation/destruction to be rolled back\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 20 Apr 1998 14:33:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] drop table inside transactions" } ]
[ { "msg_contents": "When do we start submitting new features for 6.4? I finally can compile my\nsources again. Thus ecpg 2.0 is not that far away.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Mon, 20 Apr 1998 14:38:34 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "New features" }, { "msg_contents": "On Mon, 20 Apr 1998, Michael Meskes wrote:\n\n> When do we start submitting new features for 6.4? I finally can compile my\n> sources again. Thus ecpg 2.0 is not that far away.\n\n\tYesterday :)\n\n\n", "msg_date": "Mon, 20 Apr 1998 11:52:11 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New features" }, { "msg_contents": "Any time. 6.3.* is closed.\n\n> \n> When do we start submitting new features for 6.4? I finally can compile my\n> sources again. Thus ecpg 2.0 is not that far away.\n> \n> Michael\n> -- \n> Dr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\n> [email protected] | Europark A2, Adenauerstr. 20\n> [email protected] | 52146 Wuerselen\n> Go SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\n> Use Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n> \n> \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 20 Apr 1998 14:33:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New features" } ]
[ { "msg_contents": "gram.y says:\n\nopt_indirection: ...\n | '[' a_expr ']' opt_indirection\n | '[' a_expr ':' a_expr ']' opt_indirection\n\t\t...\n\nIMO a_expr is exactly where I have to enter C variable support. That is, I\nadd a new case to a_expr named cinputvariable which among others might have\nthe following form:\n\ncinputvariable: /* empty */\n\t\t...\n\t\t| ':' name ':' name\n\t\t...\n\nWith the first name being the variable, the second being the name of the\nindicator variable. As you might expect this results in a shift/reduce\nconflict since there is no way to decide whether the second name is the\nindicator variable or a column name.\n\nAny idea how to solve this?\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Mon, 20 Apr 1998 15:21:51 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "shift/reduce problem with ecpg" }, { "msg_contents": "> gram.y says:\n> \n> opt_indirection: ...\n> | '[' a_expr ']' opt_indirection\n> | '[' a_expr ':' a_expr ']' opt_indirection\n> ...\n> \n> IMO a_expr is exactly where I have to enter C variable support.\n> As you might expect this results in a shift/reduce\n> conflict since there is no way to decide whether the second name is \n> the indicator variable or a column name.\n> \n> Any idea how to solve this?\n\nYes. If you really want to allow zero, one, or two colons, and only that\nnumber, then you can explicitly define those cases and separate them out\nfrom the a_expr syntax except as an argument. 
Look in gram.y for\n\"b_expr\" which accomplishes a similar thing for the BETWEEN operator.\nFor that case, the AND usage was ambiguous since it can be used for\nboolean expressions and is also used with the BETWEEN operator.\n\nYour biggest problem is probably the case with one colon, since it could\nbe either an indicator variable or the second value in a range. You\nmight want to require three or four colons when using indicator\nvariables in this context. Or, as I did with the \"b_expr\" and \"AND\"\nboolean expressions, you can require parens around the\nvariable/indicator pair. e.g.\n\n xxx [ name : name ] -- this is a range\n xxx [ (name : name) ] -- this is an indicator variable\n\n - Tom\n", "msg_date": "Tue, 21 Apr 1998 05:42:39 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] shift/reduce problem with ecpg" }, { "msg_contents": "Thomas G. Lockhart writes:\n> Yes. If you really want to allow zero, one, or two colons, and only that\n> number, then you can explicitly define those cases and separate them out\n> from the a_expr syntax except as an argument. Look in gram.y for\n> \"b_expr\" which accomplishes a similar thing for the BETWEEN operator.\n> For that case, the AND usage was ambiguous since it can be used for\n> boolean expressions and is also used with the BETWEEN operator.\n\nThanks Thomas. I've created a c_expr to be used in opt_indirection. Since\nindicators don't make sense in this position only variables may be entered\nhere. \n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! 
| Fax: (+49) 2405/4670-10\n", "msg_date": "Tue, 21 Apr 1998 12:34:25 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] shift/reduce problem with ecpg" } ]
[ { "msg_contents": "Is it desireable to be able to set PGSQL variables to values of C variables\nlike this?\n\nexec sql set foo to :bar;\n\nNot difficult to implement. But how is the reverse done?\n\nexec sql show foo into :bar\n\n???\n\nMichael\n\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Mon, 20 Apr 1998 15:41:40 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Feature question" } ]
[ { "msg_contents": "I'm building 6.3.2 as I write, and I've noticed one small glitch so\nfar. The configure script tells me that Perl support will be disabled\nbecause PostgreSQL has not been previously installed, which I felt was\na bit mean of it, seeing as I've got 6.3.1 installed. It turns out\nthat the test for a previous installation is too simplistic. It looks\nlike this in the configure script:\n\nif test \"$USE_PERL\" = \"true\"; then\n\tif test ! -x $prefix/bin/postgres; then\n\t\techo \"configure: warning: perl support disabled; postgres not previously installed\" 1>&2\n\t\tUSE_PERL=\n\tfi\nfi\n\nNotice that it only tests for $prefix -- unlike this example of a\nprevious test, this one for config.site:\n\nif test -z \"$CONFIG_SITE\"; then\n if test \"x$prefix\" != xNONE; then\n CONFIG_SITE=\"$prefix/share/config.site $prefix/etc/config.site\"\n else\n CONFIG_SITE=\"$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site\"\n fi\nfi\n\nThe Perl test should probably be modified to work the same way. For\nnow, I've just run configure with \"--prefix=/usr/local/pgsql\", and\nthat (not surprisingly) seems to work fine.\n\n\"Further bulletins as events warrant.\" (Watterson: Calvin & Hobbes)\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "20 Apr 1998 20:00:14 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": true, "msg_subject": "6.3.2 configure glitch" }, { "msg_contents": "> \n> I'm building 6.3.2 as I write, and I've noticed one small glitch so\n> far. The configure script tells me that Perl support will be disabled\n> because PostgreSQL has not been previously installed, which I felt was\n> a bit mean of it, seeing as I've got 6.3.1 installed. It turns out\n> that the test for a previous installation is too simplistic. It looks\n> like this in the configure script:\n> \n> if test \"$USE_PERL\" = \"true\"; then\n> \tif test ! 
-x $prefix/bin/postgres; then\n> \t\techo \"configure: warning: perl support disabled; postgres not previously installed\" 1>&2\n> \t\tUSE_PERL=\n> \tfi\n> fi\n> \n> Notice that it only tests for $prefix -- unlike this example of a\n> previous test, this one for config.site:\n> \n> if test -z \"$CONFIG_SITE\"; then\n> if test \"x$prefix\" != xNONE; then\n> CONFIG_SITE=\"$prefix/share/config.site $prefix/etc/config.site\"\n> else\n> CONFIG_SITE=\"$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site\"\n> fi\n> fi\n> \n> The Perl test should probably be modified to work the same way. For\n> now, I've just run configure with \"--prefix=/usr/local/pgsql\", and\n> that (not surprisingly) seems to work fine.\n\nHere is the patch to configure.in. Hope this is what you suggested.\n\n---------------------------------------------------------------------------\n\n\n*** ./configure.in.orig\tSun Apr 26 23:49:14 1998\n--- ./configure.in\tSun Apr 26 23:51:41 1998\n***************\n*** 252,260 ****\n \n dnl Verify that postgres is already installed\n dnl per instructions for perl interface installation\n! if test \"$USE_PERL\" = \"true\"; then\n! \tif test ! -x $prefix/bin/postgres; then\n! \t\tAC_MSG_WARN(perl support disabled; postgres not previously installed)\n \t\tUSE_PERL=\n \tfi\n fi\n--- 252,261 ----\n \n dnl Verify that postgres is already installed\n dnl per instructions for perl interface installation\n! if test \"$USE_PERL\" = \"true\"\n! then\n! \tif test ! -x \"$prefix\"/bin/postgres -a ! -x \"$ac_default_prefix\"/bin/postgres\n! \tthen\tAC_MSG_WARN(perl support disabled; postgres not previously installed)\n \t\tUSE_PERL=\n \tfi\n fi\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. 
| (610) 853-3000(h)\n", "msg_date": "Sun, 26 Apr 1998 23:53:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.3.2 configure glitch" } ]
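The patch Bruce posted in the thread above tests both `$prefix` and `$ac_default_prefix` before disabling Perl support. As a rough illustration of that fallback logic only — the function name and argument convention here are invented for the sketch, not taken from the actual configure script:

```shell
# Sketch of the "previously installed?" check discussed above: honor an
# explicit --prefix, but fall back to configure's built-in default prefix
# when none was given or the explicit one has no postgres binary.
# Illustrative helper, not real configure code.
check_previous_install() {
    prefix="$1"            # value of --prefix, or "NONE" if unset
    default_prefix="$2"    # configure's default, e.g. /usr/local/pgsql

    if [ "$prefix" != "NONE" ] && [ -x "$prefix/bin/postgres" ]; then
        echo yes
    elif [ -x "$default_prefix/bin/postgres" ]; then
        echo yes
    else
        echo no
    fi
}
```

Only when neither candidate location contains an executable `postgres` would the warning fire and `USE_PERL` be cleared.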
[ { "msg_contents": "> \n> ./configure in 6.3.1 used to propose to me a template configuration file\n> (for me it was linux-elf).\n> \n> 6.3.2 ./configure did not. He choosed linux by default. I forced it with\n> --with-template=linux-elf but in Makefile.global didn't appear\n> LINUX_ELF=true.\n> \n> I had to enter and modify by hand Makefile in interfaces/libpq and\n> libpgtcl introducing LINUX_ELF=true in order to obtain the shared\n> libraries. Now PgAccess works.\n> \n> I still believe that is a configuration/detection problem.\n\nYes, I asked earlier why LINUX_ELF was not being defined, and no Linux\nuser offered an answer. Solutions, folks? When our own pgaccess guy\ncan't get libpgtcl to compile, we have a problem. Do we need a patch?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 20 Apr 1998 14:42:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Configuration problems in PostgreSQL 6.3.2 on\n\tLinux-ELF" }, { "msg_contents": "I just re-posted my patch that fixes this problem of to the pgaccess \nmaintainer. It removes all reference to LINUX_ELF from the makefiles so \nsolving the problem. However to be consistent with the other ports \nmaybe it should be defined instead. I just assumed (always a very bad \nthing to do!) 
that whoever partially removed the LINUX_ELF logic new \nwhat they were doing and completed the job.\n\nSee pgsql/src/interfaces/libpgtcl/Makefile.in to see what I am talking \nabout.\n\nAlvin\n\n\n", "msg_date": "Mon, 20 Apr 1998 21:21:04 +0200", "msg_from": "Alvin van Raalte <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in\n\tPostgreSQL 6.3.2 on Linux-ELF" }, { "msg_contents": "> Yes, I asked earlier why LINUX_ELF was not being defined, and no Linux\n> user offered an answer. Solutions, folks? When our own pgaccess guy\n> can't get libpgtcl to compile, we have a problem. Do we need a patch?\n\nUh, I think this is a question for Marc. What would he expect to be\ndefined for a platform? I'm pretty sure LINUX_ELF is supposed to be\nreplaces with, for example, \"defined(_GCC_) && defined(linux)\" or\nsomething to that effect.\n\nMarc?\n\n - Tom\n", "msg_date": "Tue, 21 Apr 1998 03:46:32 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQL\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "On Tue, 21 Apr 1998, Thomas G. Lockhart wrote:\n\n> > Yes, I asked earlier why LINUX_ELF was not being defined, and no Linux\n> > user offered an answer. Solutions, folks? When our own pgaccess guy\n> > can't get libpgtcl to compile, we have a problem. Do we need a patch?\n> \n> Uh, I think this is a question for Marc. What would he expect to be\n> defined for a platform? I'm pretty sure LINUX_ELF is supposed to be\n> replaces with, for example, \"defined(_GCC_) && defined(linux)\" or\n> something to that effect.\n> \n> Marc?\n\n\tI sort of ignored this one, being a Linux problem :( Constantin,\nwhat sort of error message(s) are you seeing and where? I'll be more\nattentive this time, promise :)\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 21 Apr 1998 01:23:22 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQL\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 21 Apr 1998, Thomas G. Lockhart wrote:\n> \n> > > Yes, I asked earlier why LINUX_ELF was not being defined, and no Linux\n> > > user offered an answer. Solutions, folks? When our own pgaccess guy\n> > > can't get libpgtcl to compile, we have a problem. Do we need a patch?\n> >\n> > Uh, I think this is a question for Marc. What would he expect to be\n> > defined for a platform? I'm pretty sure LINUX_ELF is supposed to be\n> > replaces with, for example, \"defined(_GCC_) && defined(linux)\" or\n> > something to that effect.\n> >\n> > Marc?\n> \n> I sort of ignored this one, being a Linux problem :( Constantin,\n> what sort of error message(s) are you seeing and where? I'll be more\n> attentive this time, promise :)\n\nSo. I tried to compile PostgreSQL from scratch, as I usual do with every\nversion.\n$ cd /usr/src/postgresql-6.3.2\n$ cd src\n$ ./configure\n\nAt this point, it shows a lot of configuration files and usually asked\nme if {linux-elf} it's ok for me.\nThis time, he didn't do so. He start running and checking all sort of\nprograms and libraries and finally ended.\n\nCompiling all (gmake all) I noticed that in src/interfaces/libpgtcl\nthere isn't a libpgtcl.so library and in src/interfaces/libpq there\nisn't libpq.so.\n\nI succeeded getting that libraries editing by hand the Makefile in those\ntwo directories and introducing a new line LINUX_ELF=true, then make\nclean and make again. I copied libpgtcl.so and libpq.so in my /lib\ndirectory and PgAccess work now. 
But for someone who did not know how to\ndo that, it could be quit embarassing.\n\nI think that ./configure does not succeed in guessing that my system is\nlinux-elf type.\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Tue, 21 Apr 1998 12:33:54 +0300", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQL\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "On Tue, 21 Apr 1998, The Hermit Hacker wrote:\n\n> > But for someone who did not know how to\n> > do that, it could be quit embarassing.\n> \n> \tThoughts on how this might be fixed? :(\n\nIt's not that embarassing - it's simply not repaired...so I don't yet\nhave libpg.so, libpgtcl.so or libecpg.so...can you (Constantin) post the\n*simplest* workaround? ;-)\n\nThanks alot,\nTom\n\n ----------- Sisters of Charity Medical Center ----------\n Department of Psychiatry\n ---- \n Thomas Good, System Administrator <[email protected]>\n North Richmond CMHC/Residential Services Phone: 718-354-5528\n 75 Vanderbilt Ave, Quarters 8 Fax: 718-354-5056\n Staten Island, NY 10305\n\n", "msg_date": "Tue, 21 Apr 1998 07:19:03 -0400 (EDT)", "msg_from": "Tom Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQL\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "On Tue, 21 Apr 1998, Constantin Teodorescu wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > On Tue, 21 Apr 1998, Thomas G. Lockhart wrote:\n> > \n> > > > Yes, I asked earlier why LINUX_ELF was not being defined, and no Linux\n> > > > user offered an answer. Solutions, folks? When our own pgaccess guy\n> > > > can't get libpgtcl to compile, we have a problem. Do we need a patch?\n> > >\n> > > Uh, I think this is a question for Marc. What would he expect to be\n> > > defined for a platform? 
I'm pretty sure LINUX_ELF is supposed to be\n> > > replaces with, for example, \"defined(_GCC_) && defined(linux)\" or\n> > > something to that effect.\n> > >\n> > > Marc?\n> > \n> > I sort of ignored this one, being a Linux problem :( Constantin,\n> > what sort of error message(s) are you seeing and where? I'll be more\n> > attentive this time, promise :)\n> \n> So. I tried to compile PostgreSQL from scratch, as I usual do with every\n> version.\n> $ cd /usr/src/postgresql-6.3.2\n> $ cd src\n> $ ./configure\n> \n> At this point, it shows a lot of configuration files and usually asked\n> me if {linux-elf} it's ok for me.\n> This time, he didn't do so. He start running and checking all sort of\n> programs and libraries and finally ended.\n\n\tI removed the \"question\" phase, since there was already the\n--with-template= feature in configure...it will try to determine and use\nwhat it feels is appropriate based on a 'uname -s', which doesn't take\ninto consideration different versions of an OS...\n\n> Compiling all (gmake all) I noticed that in src/interfaces/libpgtcl\n> there isn't a libpgtcl.so library and in src/interfaces/libpq there\n> isn't libpq.so.\n> \n> I succeeded getting that libraries editing by hand the Makefile in those\n> two directories and introducing a new line LINUX_ELF=true, then make\n> clean and make again. I copied libpgtcl.so and libpq.so in my /lib\n> directory and PgAccess work now. But for someone who did not know how to\n> do that, it could be quit embarassing.\n\n\tThoughts on how this might be fixed? 
:(\n\n\n", "msg_date": "Tue, 21 Apr 1998 08:00:59 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQL\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "Tom Good wrote:\n> \n> It's not that embarassing - it's simply not repaired...so I don't yet\n> have libpg.so, libpgtcl.so or libecpg.so...can you (Constantin) post the\n> *simplest* workaround? ;-)\n\nOk. I'm afraid that my 'workaround' is too simple. I'm not specialised\nin Makefiles :-(\n\nAfter ./configure , just edit Makefile from src/interfaces/libpq and\nsrc/interfaces/libpgtcl and insert a new line :\n\nLINUX_ELF=true\n\njust make clean ; make after that.\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Tue, 21 Apr 1998 15:23:46 +0300", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQL\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "On Tue, 21 Apr 1998, Constantin Teodorescu wrote:\n\n> > can you (Constantin) post the *simplest* workaround? ;-)\n> \n> Ok. I'm afraid that my 'workaround' is too simple. 
I'm not specialised\n> in Makefiles :-(\n> \n> After ./configure , just edit Makefile from src/interfaces/libpq and\n> src/interfaces/libpgtcl and insert a new line :\n> \n> LINUX_ELF=true\n\nBelated thanks, Constantin...simple works by me...I have my libraries\nnow - thanks.\n\nTom\n\n ----------- Sisters of Charity Medical Center ----------\n Department of Psychiatry\n ---- \n Thomas Good, System Administrator <[email protected]>\n North Richmond CMHC/Residential Services Phone: 718-354-5528\n 75 Vanderbilt Ave, Quarters 8 Fax: 718-354-5056\n Staten Island, NY 10305\n\n", "msg_date": "Wed, 22 Apr 1998 10:20:02 -0400 (EDT)", "msg_from": "Tom Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQL\n\t6.3.2 on Linux-ELF" } ]
[ { "msg_contents": "Developers of Postgresql, I like to say 'Thank you' for providing\n6.3.2.\n\nI just finished compiling 6.3.2 on my MIPS SVR4 platform and the\nregressions tests are currently running and I must say, it never have\nbeen looking better.\n\nI had major problems with 6.3 and 6.3.1. Apparently some bugs, which\npeople have been reporting, manifested themselfs diffently on my\nplatform. Therefore giving no indication, how they might be fixed.\n\nNow, I can think about moving from 6.2.1 to 6.3.2.\n\nThanx again.\n\n-- \nMfG/Regards\n\n /==== Siemens Nixdorf Informationssysteme AG\n / Ridderbusch / , Dep.: OEC XS QM4\n / /./ Heinz Nixdorf Ring\n /=== /,== ,===/ /,==, // 33106 Paderborn, Germany\n / // / / // / / \\ Tel.: (49) 05251-8-15211\n/ / `==/\\ / / / \\ Email: [email protected]\n\nSince I have taken all the Gates out of my computer, it finally works!!\n", "msg_date": "Mon, 20 Apr 1998 22:26:41 +0200 (MEST)", "msg_from": "Frank Ridderbusch <[email protected]>", "msg_from_op": true, "msg_subject": "Hip, Hip Hurray!! You did it with 6.3.2" } ]
[ { "msg_contents": "For some reason my system isn't compiling the shared libraries anymore. I\ntracked this down to the symbol LINUX_ELF being undefined. However, I have\nno idea where this should be defined. Since I run quite a lot of beta-test\nsoftware like gcc 2.8.1 this may be a system problem. Or should this be\ndefined inside the PostgreSQL tree?\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Tue, 21 Apr 1998 10:41:23 +0200 ()", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "LINUX_ELF" }, { "msg_contents": "On Tue, 21 Apr 1998, Michael Meskes wrote:\n\n> For some reason my system isn't compiling the shared libraries anymore. I\n> tracked this down to the symbol LINUX_ELF being undefined. However, I have\n> no idea where this should be defined. Since I run quite a lot of beta-test\n> software like gcc 2.8.1 this may be a system problem. Or should this be\n> defined inside the PostgreSQL tree?\n\n\tI'm not quite sure, but any references to LINUX_ELF should have\nbeen removed and replaced by compiler defined values...its possibly some\nwere overlooked?\n\n\n", "msg_date": "Tue, 21 Apr 1998 07:54:58 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LINUX_ELF" }, { "msg_contents": "> \n> On Tue, 21 Apr 1998, Michael Meskes wrote:\n> \n> > For some reason my system isn't compiling the shared libraries anymore. I\n> > tracked this down to the symbol LINUX_ELF being undefined. However, I have\n> > no idea where this should be defined. Since I run quite a lot of beta-test\n> > software like gcc 2.8.1 this may be a system problem. 
Or should this be\n> > defined inside the PostgreSQL tree?\n> \n> \tI'm not quite sure, but any references to LINUX_ELF should have\n> been removed and replaced by compiler defined values...its possibly some\n> were overlooked?\n\nYes. I think the patch from Christian removed the symbol, but not\neverwhere. When I saw the problem report on it, I saw the problem, but\nno one else made any mention of it when I asked so I though LINUX_ELF\nwas perhaps defined by the linux compiler.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 21 Apr 1998 12:14:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LINUX_ELF" } ]
[ { "msg_contents": "I'm using PostgreSQL 6.3.2.\n\nAs reported in some messages ago PostgreSQL has problem with\n\"... where some_field in (select ...\" type subqueries.\nOne of the solutions was to create indecies.\nI created two indecies for character(9) fields key and newkey:\ncreate index key_i on bik (key);\ncreate index newkey_i on bik (newkey);\nrun two quiery explain:\n\nbik=> explain select * from bik where key in (select newkey from bik where\nbik=\n'044531864');\nNOTICE: Apr 21 14:15:41:QUERY PLAN:\n\nSeq Scan on bik (cost=770.92 size=1373 width=113)\n SubPlan\n -> Seq Scan on bik (cost=770.92 size=1 width=12)\n\nEXPLAIN\nbik=> explain select * from bik where key = (select newkey from bik where\nbik='\n044531864');\nNOTICE: Apr 21 14:16:01:QUERY PLAN:\n\nIndex Scan on bik (cost=2.05 size=1 width=113)\n InitPlan\n -> Seq Scan on bik (cost=770.92 size=1 width=12)\n\nEXPLAIN\n\nWhen I run first query it hang for a long time, at least 10 minutes\n(I interrupted it) while second one completed in 1 second.\nTable bik has about 13000 rows and 2.6M size.\nIt seems the problem is that in first queiry plan is \"Seq Scan\" while\nin second is \"Index Scan\". How it can be fixed ?\n\nwith best regards,\nIgor Sysoev\n\n", "msg_date": "Tue, 21 Apr 1998 14:45:15 +0400", "msg_from": "\"Igor Sysoev\" <[email protected]>", "msg_from_op": true, "msg_subject": "subselect and optimizer" }, { "msg_contents": "Igor Sysoev wrote:\n> \n> I'm using PostgreSQL 6.3.2.\n> \n> As reported in some messages ago PostgreSQL has problem with\n> \"... 
where some_field in (select ...\" type subqueries.\n> One of the solutions was to create indecies.\n> I created two indecies for character(9) fields key and newkey:\n> create index key_i on bik (key);\n> create index newkey_i on bik (newkey);\n> run two quiery explain:\n> \n> bik=> explain select * from bik where key in (select newkey from bik where\n> bik='044531864');\n> NOTICE: Apr 21 14:15:41:QUERY PLAN:\n> \n> Seq Scan on bik (cost=770.92 size=1373 width=113)\n> SubPlan\n> -> Seq Scan on bik (cost=770.92 size=1 width=12)\n ^^^\nThis is very strange. Index Scan should be used here.\nI'll try to discover this...\n\nBTW, IN is slow (currently :) - try to create 2-key index on bik (bik, newkey) \nand rewrite your query as\n\nselect * from bik b1 where EXISTS (select newkey from bik where\nbik = '....' and b1.key = newkey)\n\nAnd let's know... (Note, that index on (newkey, bik) may be more useful\nthan on (bik, newkey) - it depends on your data).\n\nVadim\n", "msg_date": "Wed, 22 Apr 1998 14:00:44 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect and optimizer" } ]
[ { "msg_contents": "How about removing LINUX_ELF from all Makefile.in's?\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:[email protected]]\n> Sent:\tTuesday, April 21, 1998 2:01 PM\n> To:\tConstantin Teodorescu\n> Cc:\tThomas G. Lockhart; Bruce Momjian; PostgreSQL-development\n> Subject:\tRe: [HACKERS] Re: [QUESTIONS] Configuration problems in\n> PostgreSQL 6.3.2 on Linux-ELF\n> \n> On Tue, 21 Apr 1998, Constantin Teodorescu wrote:\n> \n> > The Hermit Hacker wrote:\n> > > \n> > > On Tue, 21 Apr 1998, Thomas G. Lockhart wrote:\n> > > \n> > > > > Yes, I asked earlier why LINUX_ELF was not being defined, and\n> no Linux\n> > > > > user offered an answer. Solutions, folks? When our own\n> pgaccess guy\n> > > > > can't get libpgtcl to compile, we have a problem. Do we need\n> a patch?\n> > > >\n> > > > Uh, I think this is a question for Marc. What would he expect to\n> be\n> > > > defined for a platform? I'm pretty sure LINUX_ELF is supposed to\n> be\n> > > > replaces with, for example, \"defined(_GCC_) && defined(linux)\"\n> or\n> > > > something to that effect.\n> > > >\n> > > > Marc?\n> > > \n> > > I sort of ignored this one, being a Linux problem :(\n> Constantin,\n> > > what sort of error message(s) are you seeing and where? I'll be\n> more\n> > > attentive this time, promise :)\n> > \n> > So. I tried to compile PostgreSQL from scratch, as I usual do with\n> every\n> > version.\n> > $ cd /usr/src/postgresql-6.3.2\n> > $ cd src\n> > $ ./configure\n> > \n> > At this point, it shows a lot of configuration files and usually\n> asked\n> > me if {linux-elf} it's ok for me.\n> > This time, he didn't do so. 
He start running and checking all sort\n> of\n> > programs and libraries and finally ended.\n> \n> \tI removed the \"question\" phase, since there was already the\n> --with-template= feature in configure...it will try to determine and\n> use\n> what it feels is appropriate based on a 'uname -s', which doesn't take\n> into consideration different versions of an OS...\n> \n> > Compiling all (gmake all) I noticed that in src/interfaces/libpgtcl\n> > there isn't a libpgtcl.so library and in src/interfaces/libpq there\n> > isn't libpq.so.\n> > \n> > I succeeded getting that libraries editing by hand the Makefile in\n> those\n> > two directories and introducing a new line LINUX_ELF=true, then make\n> > clean and make again. I copied libpgtcl.so and libpq.so in my /lib\n> > directory and PgAccess work now. But for someone who did not know\n> how to\n> > do that, it could be quit embarassing.\n> \n> \tThoughts on how this might be fixed? :(\n> \n> \n", "msg_date": "Tue, 21 Apr 1998 14:08:36 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQ\n\tL 6.3.2 on Linux-ELF" }, { "msg_contents": "Meskes, Michael wrote:\n> \n> How about removing LINUX_ELF from all Makefile.in's?\n> \n\nRemoving ??? Why removing it ???\n\nYou mean that removing the ifdef's containing LINUX_ELF and making 'by\ndefault' the shared libraries all the time.\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Tue, 21 Apr 1998 15:19:41 +0300", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQ L\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "On Tue, 21 Apr 1998, Constantin Teodorescu wrote:\n\n> Meskes, Michael wrote:\n> > \n> > How about removing LINUX_ELF from all Makefile.in's?\n> > \n> \n> Removing ??? 
Why removing it ???\n> \n> You mean that removing the ifdef's containing LINUX_ELF and making 'by\n> default' the shared libraries all the time.\n\n\tThere has to be a better method of determining this...doesn't\nthere? Is there a test we can add to configure to auto-determine an 'ELF'\nsystem? Then just change the makefile so that it gets rid of the\nLINUX_ELF \"stuff\" with something that configure sets?\n\n\tThere, try that. I have it so that if 'with-template' is\nlinux_elf, it sets LINUX_ELF to yes in both Makefile's...let me know if\nthat does it for you...\n\n\n", "msg_date": "Tue, 21 Apr 1998 08:39:16 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQ L\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> There has to be a better method of determining this...doesn't\n> there? Is there a test we can add to configure to auto-determine an 'ELF'\n> system? Then just change the makefile so that it gets rid of the\n> LINUX_ELF \"stuff\" with something that configure sets?\n> \n> There, try that. 
I have it so that if 'with-template' is\n> linux_elf, it sets LINUX_ELF to yes in both Makefile's...let me know if\n> that does it for you...\n\nI have experimented right now.\n\nI ran ./configure --with-template=linux-elf\nIt says that it will use template/linux-elf configuration file but I\nlooked into Makefiels in libpq and libpgtcl directories and none of them\ncontains LINUX_ELF=true\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Tue, 21 Apr 1998 15:51:29 +0300", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQ L\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "On Tue, 21 Apr 1998, Constantin Teodorescu wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > There has to be a better method of determining this...doesn't\n> > there? Is there a test we can add to configure to auto-determine an 'ELF'\n> > system? Then just change the makefile so that it gets rid of the\n> > LINUX_ELF \"stuff\" with something that configure sets?\n> > \n> > There, try that. I have it so that if 'with-template' is\n> > linux_elf, it sets LINUX_ELF to yes in both Makefile's...let me know if\n> > that does it for you...\n> \n> I have experimented right now.\n> \n> I ran ./configure --with-template=linux-elf\n> It says that it will use template/linux-elf configuration file but I\n> looked into Makefiels in libpq and libpgtcl directories and none of them\n> contains LINUX_ELF=true\n\n\tThis is with just new CVSup'd code? 
*raised eyebrow* \n\n\tThere should be (interfaces/libpq/Makefile.in):\n\n# Shared library stuff\nshlib := \ninstall-shlib-dep :=\nifeq ($(PORTNAME), linux)\n LINUX_ELF=@LINUX_ELF@\n ifdef LINUX_ELF\n install-shlib-dep := install-shlib\n shlib := libpq.so.$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n LDFLAGS_SL = -shared -soname libpq.so.$(SO_MAJOR_VERSION)\n CFLAGS += $(CFLAGS_SL)\n endif\nendif\n\n\n\n", "msg_date": "Tue, 21 Apr 1998 08:52:48 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQ L\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "On Tue, 21 Apr 1998, The Hermit Hacker wrote:\n\n> On Tue, 21 Apr 1998, Constantin Teodorescu wrote:\n> \n> > Meskes, Michael wrote:\n> > > \n> > > How about removing LINUX_ELF from all Makefile.in's?\n> > > \n> > \n> > Removing ??? Why removing it ???\n> > \n> > You mean that removing the ifdef's containing LINUX_ELF and making 'by\n> > default' the shared libraries all the time.\n> \n> \tThere has to be a better method of determining this...doesn't\n> there? Is there a test we can add to configure to auto-determine an 'ELF'\n> system? 
Then just change the makefile so that it gets rid of the\n> LINUX_ELF \"stuff\" with something that configure sets?\n\nfile /lib/libc$(DLSUFFIX) | grep ELF\n\nMaarten\n\n_____________________________________________________________________________\n| TU Delft, The Netherlands, Faculty of Information Technology and Systems |\n| Department of Electrical Engineering |\n| Computer Architecture and Digital Technique section |\n| [email protected] |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Tue, 21 Apr 1998 15:49:46 +0200 (MET DST)", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQ L\n\t6.3.2 on Linux-ELF" }, { "msg_contents": "> \tThere has to be a better method of determining this...doesn't\n> there? Is there a test we can add to configure to auto-determine an 'ELF'\n> system? Then just change the makefile so that it gets rid of the\n> LINUX_ELF \"stuff\" with something that configure sets?\n> \n> \tThere, try that. I have it so that if 'with-template' is\n> linux_elf, it sets LINUX_ELF to yes in both Makefile's...let me know if\n> that does it for you...\n\nI believe no one is running non-elf Linux. Am I correct?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 21 Apr 1998 12:19:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQ L\n\t6.3.2 on Linux-ELF" } ]
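Maarten's `file /lib/libc$(DLSUFFIX) | grep ELF` suggestion in the thread above can also be done without relying on `file`'s output format, by reading the four ELF magic bytes (0x7f 'E' 'L' 'F') directly. A minimal sketch — the helper name is an assumption and this only illustrates the detection idea, not proposed configure code:

```shell
# Report (via exit status) whether a binary is an ELF object by comparing
# its first four bytes against the ELF magic number 0x7f 'E' 'L' 'F'.
# Illustrative helper for an autodetection test, not actual configure code.
is_elf() {
    [ "$(head -c 4 "$1")" = "$(printf '\177ELF')" ]
}
```

A configure-style probe could then run something like `is_elf /lib/libc.so` (path assumed) and set the ELF flag from the result instead of guessing from `uname -s`.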
[ { "msg_contents": "They are still in all those Makefile.in's that created shared libraries.\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:[email protected]]\n> Sent:\tTuesday, April 21, 1998 1:55 PM\n> To:\tMichael Meskes\n> Cc:\tPostgreSQL Hacker\n> Subject:\tRe: [HACKERS] LINUX_ELF\n> \n> On Tue, 21 Apr 1998, Michael Meskes wrote:\n> \n> > For some reason my system isn't compiling the shared libraries\n> anymore. I\n> > tracked this down to the symbol LINUX_ELF being undefined. However,\n> I have\n> > no idea where this should be defined. Since I run quite a lot of\n> beta-test\n> > software like gcc 2.8.1 this may be a system problem. Or should this\n> be\n> > defined inside the PostgreSQL tree?\n> \n> \tI'm not quite sure, but any references to LINUX_ELF should have\n> been removed and replaced by compiler defined values...its possibly\n> some\n> were overlooked?\n> \n", "msg_date": "Tue, 21 Apr 1998 14:09:17 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] LINUX_ELF" } ]
[ { "msg_contents": "Yes, that was what I meant to say. Just remove the ifdef's. But then\nthere may be some AOUT systems left.\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tConstantin Teodorescu [SMTP:[email protected]]\n> Sent:\tTuesday, April 21, 1998 2:20 PM\n> To:\tMeskes, Michael\n> Cc:\t'The Hermit Hacker'; Thomas G. Lockhart; Bruce Momjian;\n> PostgreSQL-development\n> Subject:\tRe: [HACKERS] Re: [QUESTIONS] Configuration problems in\n> PostgreSQ \tL 6.3.2 on Linux-ELF\n> \n> Meskes, Michael wrote:\n> > \n> > How about removing LINUX_ELF from all Makefile.in's?\n> > \n> \n> Removing ??? Why removing it ???\n> \n> You mean that removing the ifdef's containing LINUX_ELF and making 'by\n> default' the shared libraries all the time.\n> \n> -- \n> Constantin Teodorescu\n> FLEX Consulting Braila, ROMANIA\n", "msg_date": "Tue, 21 Apr 1998 14:35:10 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQ\n\tL 6.3.2 on Linux-ELF" } ]
[ { "msg_contents": "Yes, it is. But Marc, you made a small mistake. In configure you have to\ncheck for template/linux-elf not linux-elf:\n\n*** configure.orig Tue Apr 21 14:47:20 1998\n--- configure Tue Apr 21 14:48:06 1998\n***************\n*** 670,676 ****\n \n echo \"$ac_t\"\"$TEMPLATE\" 1>&6\n \n! if test \"$TEMPLATE\" = \"linux-elf\"; then\n LINUX_ELF=yes\n else\n LINUX_ELF=no\n--- 670,676 ----\n \n echo \"$ac_t\"\"$TEMPLATE\" 1>&6\n \n! if test \"$TEMPLATE\" = \"template/linux-elf\"; then\n LINUX_ELF=yes\n else\n LINUX_ELF=no\n\nAlso ecpg/lib/Makefile.in is not updated. \n\nHow about we make linux-elf the default, rename the template files and\ncheck for linux-aout? That way, I wouldn't have to worry about my own\nlinux-elf-debug template, or the linux-elf-sparc template.\n\nMichael\n\n--\nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:[email protected]]\n> Sent:\tTuesday, April 21, 1998 2:53 PM\n> To:\tConstantin Teodorescu\n> Cc:\tMeskes, Michael; Thomas G. Lockhart; Bruce Momjian;\n> PostgreSQL-development\n> Subject:\tRe: [HACKERS] Re: [QUESTIONS] Configuration problems in\n> PostgreSQ L 6.3.2 on Linux-ELF\n> \n> On Tue, 21 Apr 1998, Constantin Teodorescu wrote:\n> \n> > The Hermit Hacker wrote:\n> > > \n> > > There has to be a better method of determining\n> this...doesn't\n> > > there? Is there a test we can add to configure to auto-determine\n> an 'ELF'\n> > > system? Then just change the makefile so that it gets rid of the\n> > > LINUX_ELF \"stuff\" with something that configure sets?\n> > > \n> > > There, try that. 
I have it so that if 'with-template' is\n> > > linux_elf, it sets LINUX_ELF to yes in both Makefile's...let me\n> know if\n> > > that does it for you...\n> > \n> > I have experimented right now.\n> > \n> > I ran ./configure --with-template=linux-elf\n> > It says that it will use template/linux-elf configuration file but I\n> > looked into Makefiels in libpq and libpgtcl directories and none of\n> them\n> > contains LINUX_ELF=true\n> \n> \tThis is with just new CVSup'd code? *raised eyebrow* \n> \n> \tThere should be (interfaces/libpq/Makefile.in):\n> \n> # Shared library stuff\n> shlib := \n> install-shlib-dep :=\n> ifeq ($(PORTNAME), linux)\n> LINUX_ELF=@LINUX_ELF@\n> ifdef LINUX_ELF\n> install-shlib-dep := install-shlib\n> shlib := libpq.so.$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n> LDFLAGS_SL = -shared -soname libpq.so.$(SO_MAJOR_VERSION)\n> CFLAGS += $(CFLAGS_SL)\n> endif\n> endif\n> \n> \n", "msg_date": "Tue, 21 Apr 1998 14:51:00 +0200", "msg_from": "\"Meskes, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: [QUESTIONS] Configuration problems in PostgreSQ\n\tL 6.3.2 on Linux-ELF" } ]
[ { "msg_contents": "On Mon, 20 Apr 1998, Bruce Momjian wrote:\n\n> > \n> > \t>> Meskes, Michael wrote:\n> > \t>> > \n> > \t>> > Is this really a bug? I haven't seen any (commercial) system\n> > supporting\n> > \t>> > this kind of transaction recovery. Once you drop a table the\n> > data is\n> > \t>> > lost, no matter if you rollback or not. \n> > \t>\n> > \t>SOLID restores a dropped table with rollback.\n> > \n> > \tSame with Informix.\n> > \n> > \n> \n> Added to TODO list:\n> \n> \t* Allow table creation/destruction to be rolled back\n\nTable creation already works. We have some problems with drop table and alter\ntable also. \n Jose'\n\n", "msg_date": "Tue, 21 Apr 1998 15:13:44 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] drop table inside transactions" }, { "msg_contents": "> > Added to TODO list:\n> > \n> > \t* Allow table creation/destruction to be rolled back\n> \n> Table creation already works. We have some problems with drop table and alter\n> table also. \n\nTODO updated.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 21 Apr 1998 12:24:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] drop table inside transactions" } ]
[ { "msg_contents": "With the old configure, where it prompted you, configure properly\nguessed I was bsdi-3.0. Now it tries:\n\n\tchecking setting template to... template/bsd/os\n\t\n\ttemplate/bsd/os does not exist\n\t\n\tAvailable Templates (set using --with-template):\n\nWhy can't it use the same guessing it used to do? Marc?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Tue, 21 Apr 1998 12:50:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "configure guessing platform" }, { "msg_contents": "On Tue, 21 Apr 1998, Bruce Momjian wrote:\n\n> With the old configure, where it prompted you, configure properly\n> guessed I was bsdi-3.0. Now it tries:\n> \n> \tchecking setting template to... template/bsd/os\n> \t\n> \ttemplate/bsd/os does not exist\n> \t\n> \tAvailable Templates (set using --with-template):\n> \n> Why can't it use the same guessing it used to do? Marc?\n\n\tCause I oops'd...I forgot all about the .similar file :( Will fix\nthat tonight...\n\n\n", "msg_date": "Tue, 21 Apr 1998 13:00:35 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] configure guessing platform" } ]
[ { "msg_contents": "I have had problems with LINUX_ELF and/or PORTNAME *not* being defined\nand shared libraries *not* being compiled. I would like to make the\nfollowing observations, based on the Makefiles generated for a linux\nsystem, with elf, and with glibc v2.06.\n\nThe configure command line was:\n\n[/usr/src/postgresql-6.3.2/src]$ ./configure --host=i586-pc-linux\n--prefix=/usr/local/pgsql --enable-locale --with-tcl --with-perl\n--with-x --enable-hba\n\nUsing the following command, executed from the src directory:\n\n[/usr/src/postgresql-6.3.2/src]$ grep -l \"\\(PORTNAME\\)\\|\\(LINUX_ELF\\)\"\n`find -regex \".*/.*Makefile\\(.custom\\|.global\\)?\"`\n\nI found that the only Makefiles that mention the PORTNAME or LINUX_ELF\ndefines were the following:\n\n./interfaces/ecpg/lib/Makefile\n./interfaces/libpgtcl/Makefile\n./interfaces/libpq/Makefile\n./interfaces/libpq++/Makefile\n./Makefile.custom\n\nSince I created Makefile.custom myself, the other four Makefiles are the\nones to be investigated.\n\nOf the four Makefiles, three of them (libpgtcl, ecpg, libpq) used the\nfollowing lines to determine whether a shared library should be build.\n(under Linux)\n\nifeq ($(PORTNAME), linux)\n ifdef LINUX_ELF\n install-shlib-dep := install-shlib\n shlib := libpq.so.$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n LDFLAGS_SL = -shared -soname libpq.so.$(SO_MAJOR_VERSION)\n CFLAGS += $(CFLAGS_SL)\n endif\nendif\n\nwhile the other Makefile (libpq++), had the following lines:\n\nifeq ($(PORTNAME), linux)\n INSTALL-SHLIB-DEP := install-shlib\n SHLIB := libpq++.so.1\n LDFLAGS_SL = -shared -soname $(SHLIB)\n CFLAGS += $(CFLAGS_SL)\nendif\n\nThe key thing to note is that all four Makefiles require PORTNAME to be\nset to 'linux' to compile a shared library, and three of them,\n(libpgtcl, ecpg, libpq), also requre LINUX_ELF to be defined, whereas\nthe other Makefile (libpq++) does not check for this.\n\nIn addition, the following define appeared in only three out of the 
four\nMakefiles, the same three that checked for LINUX_ELF to be defined,\n(libpq, libpgtcl, ecpg). (Coincidence? I think not.)\n\nPORTNAME=linux\n\nIn the Makefile for libpq++, PORTNAME was never defined.\n\nSo what this boils down to is, compiling out of the box, libpgtcl.so,\nlibecpg.so, and libpq.so, will *not* be made because LINUX_ELF is *not*\ndefined anywhere. However, libpq++.so will *not* be made because\nPORTNAME is *not* defined or equal to 'linux'.\n\nAs a quick fix, create the file Makefile.custom in the src directory,\n(place where Makefile.global lives), and put the following two lines in\nit.\n\nLINUX_ELF=true\nPORTNAME='linux'\n\nSince Makefile.custom is included by Makefile.global, and thus by every\nMakefile in the tree, the interface Makefiles will be happy and make\ntheir shared libraries.\n\nIn the future, I would suggest the these two defines be placed into\nMakefile.global by the configure program (autoconf). Furthermore, the\nMakefiles for libpgtcl, libecpg, and libpq should *not* define PORTNAME,\nrather they should rely on PORTNAME being defined in Makefile.global.\nThe Makefile for libpq++ should to be changed to check for LINUX_ELF\nbeing defined in addition to checking PORTNAME='linux'.\n\nThe checks for LINUX_ELF are probably a hold over to the days when\npeople compiled PostGreSQL on a.out linux boxes with funky, gnarly\nshared library support. Perhaps people still do, and if they do,\nLINUX_ELF should stay.\n\nI have not played around with autoconf, so I don't know off hand what\nwould have to be changed to fix this, although I'll take a look if no\none else wants to.\n\nAnyways, if you made it here, I hope you've found something entertaining\nand useful. :)\n\n- Kris\n\nPS - I don't know if the other ports have similar Makefile problems, I\nhave noticed that for BSD libecpg, libpgtcl, and libpq are built shared,\nbut not libpq++.\n", "msg_date": "Tue, 21 Apr 1998 17:09:00 +0000", "msg_from": "\"Kristofer A. E. 
Peterson\" <[email protected]>", "msg_from_op": true, "msg_subject": "PORTNAME and LINUX_ELF defines in interface Makefiles" }, { "msg_contents": "On Tue, 21 Apr 1998, Kristofer A. E. Peterson wrote:\n\n> I have had problems with LINUX_ELF and/or PORTNAME *not* being defined\n\n\tPORTNAME was removed months ago and shouldn't be used except for\nin *very* few specialized places...\n\n\tLINUX_ELF has been fixed in v6.3.2, using the patch available at\nftp://ftp.postgresql.org/pub/patches/linux_elf.patch-980421.gz, which has\nbeen confirmed by Constantin...\n\n\n", "msg_date": "Tue, 21 Apr 1998 13:14:22 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PORTNAME and LINUX_ELF defines in interface Makefiles" }, { "msg_contents": "I realize that this post is long, so I'll sum it up. My original post\nwas not meant as a request for help, I have gotten used to kludging the\nMakefiles for 6.2.1, 6.3.0, 6.3.1, and now 6.3.2. I want to help clean\nup the Makefiles/configure so linux users don't have to kludge the\nMakefiles to get shared library versions of libpgtcl, libpq, libpq++,\nand libecpg. I just want to offer a perspective on the building process\nfrom outside point of view.\n\nBy the way, postgresql-6.3.2 is a *great* program, good work guys! I\nusually do not bother posting bugs, fixes, etc to projects I am not\ninvolved in, but I hope these comments are useful, and help make\npostgresql-6.3.2 a better package.\n\nThe Hermit Hacker wrote:\n> \n> On Tue, 21 Apr 1998, Kristofer A. E. 
Peterson wrote:\n> \n> > I have had problems with LINUX_ELF and/or PORTNAME *not* being defined\n> \n> PORTNAME was removed months ago and shouldn't be used except for\n> in *very* few specialized places...\n> \n> LINUX_ELF has been fixed in v6.3.2, using the patch available at\n> ftp://ftp.postgresql.org/pub/patches/linux_elf.patch-980421.gz, which has\n> been confirmed by Constantin...\n\nThat file is *not* a patch, that is a *kludge*. Of course, it is a step\nin the right direction. This is what it did to\npostgresql-6.3.2/src/configure (line 661):\n\n------\n # Check whether --with-template or --without-template was given.\n if test \"${with_template+set}\" = set; then\n withval=\"$with_template\"\n TEMPLATE=template/$withval\n else\n TEMPLATE=template/`uname -s | tr A-Z a-z`\n fi\n*\n echo \"$ac_t\"\"$TEMPLATE\" 1>&6\n>\n>if test \"$TEMPLATE\" = \"linux-elf\"; then\n> LINUX_ELF=yes\n>else\n> LINUX_ELF=no\n>fi\n>\n\n export TEMPLATE\n------\n\nWhy bother checking what $TEMPLATE is set too? It will *never* equal\n'linux-elf', if anything it would be set to 'template/linux-elf`.\nSecondly, $TEMPLATE will only equal 'linux-elf' if you specify it with\nthe --with-template option, since \n\n'uname -s | tr A-Z a-z` will be 'linux', regardless of elf-capability.\n\nOne way of check for elf capability is to see if ld supports the\nelf_i386 \nemulation, although there might be better ways of doing this. At least\nwith this, configure will figure out you have an elf system, and set\nLINUX_ELF=yes, not =no.\n\n------\n # Check whether --with-template or --without-template was given.\n if test \"${with_template+set}\" = set; then\n withval=\"$with_template\"\n TEMPLATE=template/$withval\n else\n TEMPLATE=template/`uname -s | tr A-Z a-z`\n fi\n\n*if test \"$TEMPLATE\" = \"template/linux\"; then\n* ld -V | grep -i \"elf\" >/dev/null 2>/dev/null\n* if test $? 
-eq 0; then\n* TEMPLATE=${TEMPLATE}-elf\n*\t LINUX_ELF=yes\n*\t else\n* \t LINUX_ELF=no\n* fi\n*fi\n*\n echo \"$ac_t\"\"$TEMPLATE\" 1>&6\n\n export TEMPLATE\n------\n\nAnd here is the patched\npostgresql-6.3.2/src/interfaces/libpgtcl/Makefile, after applying\nlinux_elf.patch-980421.gz, but before hand changing configure as above.\n\n------\nifeq ($(PORTNAME), linux)\n LINUX_ELF=no\n ifdef LINUX_ELF\n install-shlib-dep := install-shlib\n shlib := libpgtcl.so.1\n CFLAGS += $(CFLAGS_SL)\n LDFLAGS_SL = -shared\n endif\nendif\n------\n\nLINUX_ELF=no, make a shared library? Does it matter if LINUX_ELF is yes\nor no? If we should make a shared library if LINUX_ELF=yes, and yes\nonly, then this piece should be:\n\n------\nifeq ($(PORTNAME), linux)\n LINUX_ELF=no\n ifeq ($(LINUX_ELF), yes)\n install-shlib-dep := install-shlib\n shlib := libpgtcl.so.1\n CFLAGS += $(CFLAGS_SL)\n LDFLAGS_SL = -shared\n endif\nendif\n------\n\nFurthermore, the original patch did *not* correct, (or even attempt to\ncorrect) src/interfaces/libecpg/lib/Makefile(.in), which right now will\nnot compile a shared library due to LINUX_ELF not being compiled.\n\nAs for libpq++, there isn't even a Makefile.in. However, by just copying\npostgresql-6.3.2/src/interfaces/libpq++/Makefile to Makefile.in and\npatching it like this:\n\n------\n20a21,22\n> PORTNAME=@PORTNAME@\n>\n46,49c48,54\n< INSTALL-SHLIB-DEP := install-shlib\n< SHLIB := libpq++.so.1\n< LDFLAGS_SL = -G -z text -shared -soname $(SHLIB)\n< CFLAGS += $(CFLAGS_SL)\n---\n> LINUX_ELF=@LINUX_ELF@\n> ifeq ($(LINUX_ELF), yes)\n> INSTALL-SHLIB-DEP := install-shlib\n> SHLIB := libpq++.so.1\n> LDFLAGS_SL = -G -z text -shared -soname $(SHLIB)\n> CFLAGS += $(CFLAGS_SL)\n> endif\n------\n\n\"src/interfaces/libpq++/Makefile\" also needs to be added to the\nCONFIG_FILES=... line on line 5937 in configure (patched with the\nlinux_elf patch). In configure.in it needs to be added to the AC_OUTPUT\n(...) 
line at the end of the file.\n\nThis helps me reduce the Makefile mayhem on my system.\n\n- Kris\n", "msg_date": "Tue, 21 Apr 1998 22:02:58 +0000", "msg_from": "\"Kristofer A. E. Peterson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PORTNAME and LINUX_ELF defines in interface Makefiles" }, { "msg_contents": "Kristofer Peterson makes an excellent point. \nThere seem to be three issues for linux...\n Does the system have ELF capability? (ld -V or cpp -dM will tell you)\n Is this intended to be an elf build? (could be compiled for a.out)\n Does the builder want shared (or static, for that matter) libraries?\n\nautomake and libtool are intended to solve such problems, but\nthey present a license conflict. Or do they, if the tools aren't\nincluded with the public distribution?\n\nMichael\[email protected]\n\n\n :}I realize that this post is long, so I'll sum it up. My original post\n :}was not meant as a request for help, I have gotten used to kludging the\n :}Makefiles for 6.2.1, 6.3.0, 6.3.1, and now 6.3.2. I want to help clean\n :}up the Makefiles/configure so linux users don't have to kludge the\n :}Makefiles to get shared library versions of libpgtcl, libpq, libpq++,\n :}and libecpg. I just want to offer a perspective on the building process\n :}from outside point of view.\n...\n", "msg_date": "Tue, 21 Apr 1998 17:31:30 -0700", "msg_from": "Michael Yount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PORTNAME and LINUX_ELF defines in interface Makefiles " } ]
[ { "msg_contents": "\nI've patched postgreSQL for SSL (secure socket layer) support. This\nis the encryption used by secure web servers and browsers. Useful for\nencrypting your postgres connections. Will eventually be useful for\nauthentication as well.\n\ninformational page:\thttp://www.chicken.org/pgsql/ssl/\n\nthe patch (for 6.3.2):\thttp://www.chicken.org/pgsql/ssl/pgsqlSSL.patch\n\nSSLeay FAQ:\t\thttp://psych.psy.uq.oz.au/~ftp/Crypto/\n\napply the patch, ./configure and make. should pass the regression\ntests.\n\nTHIS IS A HACK, for functionality only. See the info page for\ndetails. More later.\n", "msg_date": "Tue, 21 Apr 1998 19:03:35 -0700 (PDT)", "msg_from": "Brett McCormick <[email protected]>", "msg_from_op": true, "msg_subject": "ANNOUNCE: v0.1a of PostgreSQL-SSL patch released." } ]
[ { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Hi All\n> >\n> > I just upgraded to 128MB RAM, and really don't need ALL of it MOST of the\n> > time, and the main reason for it was to help with database speed.\n> >\n> > But then I got to thinking, is there a way to just semi-permently put a\n> > database into RAM? so that when a query is done it goes only to the image\n> > of the database in RAM, never even touching the hard drive. This would be\n> > enormasly faster on 50 ~ 100 (or what ever) MB databases, especially on\n> > those new boxes with that 6? or 10? nano second RAM. Especially if one\n> > could pick and chose which table(s) to put in RAM\n> \n> You can tune your OS to use most of that as buffer cache. However,\n> writes will be flushed to disk by postgresql fync, or the OS syncing\n> every so often, but not too bad. You can also up your postgres shared\n> memory buffers, though some OS's have a limit on that.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ^^^^^^^^^^^^^^^^^^^^\nCould we use mmap (instead of shmem) with MAP_ANON flag to get more memory\nfor shared buffer pool ?\nI'm using FreeBSD, man mmap says:\n\n MAP_ANON Map anonymous memory not associated with any specific file.\n The file descriptor used for creating MAP_ANON regions is\n used only for naming, and may be specified as -1 if no name\n is associated with the region.\n\nVadim\n", "msg_date": "Wed, 22 Apr 1998 13:51:15 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] How to use memory instead of hd?" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > >\n> > > Hi All\n> > >\n> > > I just upgraded to 128MB RAM, and really don't need ALL of it MOST of the\n> > > time, and the main reason for it was to help with database speed.\n> > >\n> > > But then I got to thinking, is there a way to just semi-permently put a\n> > > database into RAM? 
so that when a query is done it goes only to the image\n> > > of the database in RAM, never even touching the hard drive. This would be\n> > > enormasly faster on 50 ~ 100 (or what ever) MB databases, especially on\n> > > those new boxes with that 6? or 10? nano second RAM. Especially if one\n> > > could pick and chose which table(s) to put in RAM\n> > \n> > You can tune your OS to use most of that as buffer cache. However,\n> > writes will be flushed to disk by postgresql fync, or the OS syncing\n> > every so often, but not too bad. You can also up your postgres shared\n> > memory buffers, though some OS's have a limit on that.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> ^^^^^^^^^^^^^^^^^^^^\n> Could we use mmap (instead of shmem) with MAP_ANON flag to get more memory\n> for shared buffer pool ?\n> I'm using FreeBSD, man mmap says:\n> \n> MAP_ANON Map anonymous memory not associated with any specific file.\n> The file descriptor used for creating MAP_ANON regions is\n> used only for naming, and may be specified as -1 if no name\n> is associated with the region.\n> \n> Vadim\n\nYes, we could. I don't think we do because we an anon-mapped region is\nthe same as shmem, but if the limit is higher for anon mmap, we could\nused it.\n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 22 Apr 1998 09:58:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How to use memory instead of hd?" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Bruce Momjian wrote:\n> > >\n> > Could we use mmap (instead of shmem) with MAP_ANON flag to get more memory\n> > for shared buffer pool ?\n> > I'm using FreeBSD, man mmap says:\n> Yes, we could. 
I don't think we do because we an anon-mapped region is\n> the same as shmem, but if the limit is higher for anon mmap, we could\n> used it.\n\nIt could be faster too,\nand it API is cleaner too (IMHO).\nIf we resolve the fork/exec -> just fork problem in the server\nmmap would make a good improvment in simplicity and probably\nspeed too.\n\n\tregards,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Wed, 22 Apr 1998 16:37:17 +0200", "msg_from": "\"Göran Thyni\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How to use memory instead of hd?" }, { "msg_contents": ">>>>> \"maillist\" == Bruce Momjian <[email protected]> writes:\n\n >> Bruce Momjian wrote: > > > > > Hi All > > > > I just upgraded\n >> to 128MB RAM, and really don't need ALL of it MOST of the > >\n >> time, and the main reason for it was to help with database\n >> speed. > > > > But then I got to thinking, is there a way to\n >> just semi-permently put a > > database into RAM? so that when a\n >> query is done it goes only to the image > > of the database in\n >> RAM, never even touching the hard drive. This would be > >\n >> enormasly faster on 50 ~ 100 (or what ever) MB databases,\n >> especially on > > those new boxes with that 6? or 10? nano\n >> second RAM. Especially if one > > could pick and chose which\n >> table(s) to put in RAM > > You can tune your OS to use most of\n >> that as buffer cache. However, > writes will be flushed to\n >> disk by postgresql fync, or the OS syncing > every so often,\n >> but not too bad. You can also up your postgres shared > memory\n >> buffers, though some OS's have a limit on that.\n >> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n >> ^^^^^^^^^^^^^^^^^^^^ Could we use mmap (instead of shmem) with\n >> MAP_ANON flag to get more memory for shared buffer pool ? 
I'm\n >> using FreeBSD, man mmap says:\n >> \n >> MAP_ANON Map anonymous memory not associated with any specific\n >> file. The file descriptor used for creating MAP_ANON regions\n >> is used only for naming, and may be specified as -1 if no name\n >> is associated with the region.\n >> \n >> Vadim\n\n > Yes, we could. I don't think we do because we an anon-mapped\n > region is the same as shmem, but if the limit is higher for anon\n > mmap, we could used it.\nOn FreeBSD the limit for mmap is basically the amount of swap vs a\npredefined limit when a kernel is built. This was a project I was\nthinking of doing but have not had the time yet. mmap may also be\nfaster due to tighter intergration in the kernel. In general it is\nbetter to avoid all the SYSV IPC calls and use other methods of doing\nthe needed operations. Stevens book (either Advanced Unix Programming \nor Unix Network Programming) has a good discussion of the pro/CONS of\nthe SYSV IPC interfaces.\n\n > -- Bruce Momjian | 830 Blythe Avenue [email protected] |\n > Drexel Hill, Pennsylvania 19026 + If your life is a hard drive,\n > | (610) 353-9879(w) + Christ can be your backup. | (610)\n > 853-3000(h)\n\n-- \nKent S. Gordon\nArchitect\niNetSpace Co.\nvoice: (972)851-3494 fax:(972)702-0384 e-mail:[email protected]\n", "msg_date": "Wed, 22 Apr 1998 11:00:19 -0500 (CDT)", "msg_from": "\"Kent S. Gordon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How to use memory instead of hd?" }, { "msg_contents": "> > Yes, we could. I don't think we do because we an anon-mapped\n> > region is the same as shmem, but if the limit is higher for anon\n> > mmap, we could used it.\n> On FreeBSD the limit for mmap is basically the amount of swap vs a\n> predefined limit when a kernel is built. This was a project I was\n> thinking of doing but have not had the time yet. mmap may also be\n> faster due to tighter intergration in the kernel. 
In general it is\n> better to avoid all the SYSV IPC calls and use other methods of doing\n> the needed operations. Stevens book (either Advanced Unix Programming \n> or Unix Network Programming) has a good discussion of the pro/CONS of\n> the SYSV IPC interfaces.\n\nOK, mmap added to FAQ. \n\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 22 Apr 1998 13:18:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How to use memory instead of hd?" }, { "msg_contents": "> > Yes, we could. I don't think we do because we an anon-mapped\n> > region is the same as shmem, but if the limit is higher for anon\n> > mmap, we could used it.\n> On FreeBSD the limit for mmap is basically the amount of swap vs a\n> predefined limit when a kernel is built. This was a project I was\n> thinking of doing but have not had the time yet. mmap may also be\n> faster due to tighter intergration in the kernel. In general it is\n> better to avoid all the SYSV IPC calls and use other methods of doing\n> the needed operations. Stevens book (either Advanced Unix Programming \n> or Unix Network Programming) has a good discussion of the pro/CONS of\n> the SYSV IPC interfaces.\n\nI meant, mmap() added to TODO.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 22 Apr 1998 13:18:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How to use memory instead of hd?" 
}, { "msg_contents": "Göran Thyni wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > >\n> > > Bruce Momjian wrote:\n> > > >\n> > > Could we use mmap (instead of shmem) with MAP_ANON flag to get more memory\n> > > for shared buffer pool ?\n> > > I'm using FreeBSD, man mmap says:\n> > Yes, we could. I don't think we do because we an anon-mapped region is\n> > the same as shmem, but if the limit is higher for anon mmap, we could\n> > used it.\n> \n> It could be faster too,\n> and it API is cleaner too (IMHO).\n> If we resolve the fork/exec -> just fork problem in the server\n> mmap would make a good improvment in simplicity and probably\n> speed too.\n\n MAP_INHERIT\n Permit regions to be inherited across execve(2) system calls.\n\n - exec isn't problem...\n\nVadim\n", "msg_date": "Thu, 23 Apr 1998 18:33:31 +0800", "msg_from": "\"Vadim B. Mikheev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How to use memory instead of hd?" }, { "msg_contents": "Vadim B. Mikheev wrote:\n> \n> Göran Thyni wrote:\n> >\n> > Bruce Momjian wrote:\n> > > Yes, we could. I don't think we do because we an anon-mapped region is\n> > > the same as shmem, but if the limit is higher for anon mmap, we could\n> > > used it.\n> >\n> > It could be faster too,\n> > and it API is cleaner too (IMHO).\n> > If we resolve the fork/exec -> just fork problem in the server\n> > mmap would make a good improvment in simplicity and probably\n> > speed too.\n> \n> MAP_INHERIT\n> Permit regions to be inherited across execve(2) system calls.\n> \n> - exec isn't problem...\n> \n> Vadim\n\nIt is on non-BSD systems (I checked Linux 2.0.33 and DG/UX, no go).\nAnd how do you keep the handles after an exec.\n\nBTW, to get rid of exec to more easily replace shmem with mmap is only\npart of it,\nto use mmap instead of shmem to easier remove the exec is the other\nside. 
:-)\n\n\tbest regards,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna\n", "msg_date": "Thu, 23 Apr 1998 18:29:22 +0200", "msg_from": "\"Göran Thyni\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How to use memory instead of hd?" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > >\n> > > Hi All\n> > >\n> > > I just upgraded to 128MB RAM, and really don't need ALL of it MOST of the\n> > > time, and the main reason for it was to help with database speed.\n> > >\n> > > But then I got to thinking, is there a way to just semi-permently put a\n> > > database into RAM? so that when a query is done it goes only to the image\n> > > of the database in RAM, never even touching the hard drive. This would be\n> > > enormasly faster on 50 ~ 100 (or what ever) MB databases, especially on\n> > > those new boxes with that 6? or 10? nano second RAM. Especially if one\n> > > could pick and chose which table(s) to put in RAM\n> > \n> > You can tune your OS to use most of that as buffer cache. However,\n> > writes will be flushed to disk by postgresql fync, or the OS syncing\n> > every so often, but not too bad. You can also up your postgres shared\n> > memory buffers, though some OS's have a limit on that.\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> ^^^^^^^^^^^^^^^^^^^^\n> Could we use mmap (instead of shmem) with MAP_ANON flag to get more memory\n> for shared buffer pool ?\n> I'm using FreeBSD, man mmap says:\n> \n> MAP_ANON Map anonymous memory not associated with any specific file.\n> The file descriptor used for creating MAP_ANON regions is\n> used only for naming, and may be specified as -1 if no name\n> is associated with the region.\n> \n> Vadim\n> \n\nI am going through my mailbox and I believe we never found a portable\nway to allocate shared memory other than system V shared memory. 
\nIncreasing the amount of buffers beyond a certain amount requires a\nkernel change.\n\nHas anyone come up with a better way?\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 15 Jun 1998 22:38:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] How to use memory instead of hd?" } ]
[ { "msg_contents": "Vadim wrote:\n> > \n> > I'm using PostgreSQL 6.3.2.\n> > \n> > As reported in some messages ago PostgreSQL has problem with\n> > \"... where some_field in (select ...\" type subqueries.\n> > One of the solutions was to create indecies.\n> > I created two indecies for character(9) fields key and newkey:\n> > create index key_i on bik (key);\n> > create index newkey_i on bik (newkey);\n> > run two quiery explain:\n> > \n> > bik=>explain select * from bik where key in (select newkey from\n> > bik where bik='044531864');\n> > NOTICE: Apr 21 14:15:41:QUERY PLAN:\n> > \n> > Seq Scan on bik (cost=770.92 size=1373 width=113)\n> > SubPlan\n> > -> Seq Scan on bik (cost=770.92 size=1 width=12)\n> ^^^\n> This is very strange. Index Scan should be used here.\n> I'll try to discover this...\n\nNo, I think it's not strange - I haven't index for bik (bik) so in both\ncases\ninternal select should using Seq Scan. I repeat EXPLAIN from second query\n(You\ndroped it):\n\n------\nbik=> explain select * from bik where key = (select newkey from bik\nwhere bik='044531864');\nNOTICE: Apr 21 14:16:01:QUERY PLAN:\n\nIndex Scan on bik (cost=2.05 size=1 width=113)\n InitPlan\n -> Seq Scan on bik (cost=770.92 size=1 width=12)\n\nEXPLAIN\n-------\n\nStrange is another - outer select in second query using Index Scan (it's\nright)\nbut it doesn't use it in first query.\n\n> BTW, IN is slow (currently :) - try to create 2-key index on bik (bik,\nnewkey) \n> and rewrite your query as\n\nI tried simple query to check can IN use Index Scan ? EXPLAIN show it can:\n\n--------\nbik=> explain select * from bik where key in ('aqQWV+ZG');\nNOTICE: Apr 22 10:29:44:QUERY PLAN:\n\nIndex Scan on bik (cost=2.05 size=1 width=113)\n\nEXPLAIN\n--------\n\n> select * from bik b1 where EXISTS (select newkey from bik where\n> bik = '....' and b1.key = newkey)\n \n> And let's know... 
(Note, that index on (newkey, bik) may be more useful\n> than on (bik, newkey) - it depends on your data).\n\nOk, I' will try it now but main problem is that I often need to use LIKE\noperator (i.e. bik ~ '31864') in subselect and can't use indecies in this\ncase.\n\nIgor Sysoev\n\n", "msg_date": "Wed, 22 Apr 1998 10:35:09 +0400", "msg_from": "\"Igor Sysoev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect and optimizer" } ]
[ { "msg_contents": "Vadim wrote:\n\n> BTW, IN is slow (currently :) - try to create 2-key index on bik (bik,\nnewkey) \n> and rewrite your query as\n> \n> select * from bik b1 where EXISTS (select newkey from bik where\n> bik = '....' and b1.key = newkey)\n> \n> And let's know... (Note, that index on (newkey, bik) may be more useful\n> than on (bik, newkey) - it depends on your data).\n\nI had tried - it really works ! I don't even try to use index for bik\n(bik).\nIt works even when I tried \"bik ~ '...'\".\nThe one downside is I need to crunch and twist my brains to use\nsuch SQL statemants :). Thank you.\n\nIgor Sysoev\n", "msg_date": "Wed, 22 Apr 1998 11:36:23 +0400", "msg_from": "\"Igor Sysoev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect and optimizer" } ]
[ { "msg_contents": "> > As David Gould mentioned, we need to do pre-fetching of data pages\n> > somehow.\n> > \n> > When doing a sequential scan on a table, the OS is doing a one-page\n> > prefetch, which is probably enough. The problem is index scans of the\n> > table. Those are not sequential in the main heap table (unless it is\n> > clustered on the index), so a prefetch would help here a lot.\n> > \n> > That is where we need async i/o. I am looking in BSDI, and I don't see\n> > any way to do async i/o. The only way I can think of doing it is via\n> > threads.\n> \n> I found it. It is an fcntl option. From man fcntl:\n> \n> O_ASYNC Enable the SIGIO signal to be sent to the process group when\n> I/O is possible, e.g., upon availability of data to be read.\n> \n> Who else supports this?\n> \n\nunder Irix:\n\nman fcntl:\n\n F_SETFL Set file status flags to the third argument, arg, taken as an\n object of type int. Only the following flags can be set [see\n fcntl(5)]: FAPPEND, FSYNC, FNDELAY, FNONBLK, FDIRECT, and\n FASYNC. Since arg is used as a bit vector to set the flags,\n values for all the flags must be specified in arg. (Typically,\n arg may be constructed by obtaining existing values by F_GETFL\n and then changing the particular flags.) FAPPEND is equivalent\n to O_APPEND; FSYNC is equivalent to O_SYNC; FNDELAY is\n equivalent to O_NDELAY; FNONBLK is equivalent to O_NONBLOCK;\n and FDIRECT is equivalent to O_DIRECT. FASYNC is equivalent to\n calling ioctl with the FIOASYNC command (except that with ioctl\n all flags need not be specified). This enables the SIGIO\n facilities and is currently supported only on sockets.\n\n....but then I can find no details of FIOASYNC on the ioctl page or pages\nreferenced therein.\n\n\nAndrew\n\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. 
Martin University College London\nEMAIL: (Work) [email protected] (Home) [email protected]\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Wed, 22 Apr 1998 11:21:42 GMT", "msg_from": "Andrew Martin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." }, { "msg_contents": "> \n> > > As David Gould mentioned, we need to do pre-fetching of data pages\n> > > somehow.\n> > > \n> > > When doing a sequential scan on a table, the OS is doing a one-page\n> > > prefetch, which is probably enough. The problem is index scans of the\n> > > table. Those are not sequential in the main heap table (unless it is\n> > > clustered on the index), so a prefetch would help here a lot.\n> > > \n> > > That is where we need async i/o. I am looking in BSDI, and I don't see\n> > > any way to do async i/o. The only way I can think of doing it is via\n> > > threads.\n> > \n> > I found it. It is an fcntl option. From man fcntl:\n> > \n> > O_ASYNC Enable the SIGIO signal to be sent to the process group when\n> > I/O is possible, e.g., upon availability of data to be read.\n> > \n> > Who else supports this?\n> > \n> \n> under Irix:\n> \n> man fcntl:\n> \n> F_SETFL Set file status flags to the third argument, arg, taken as an\n> object of type int. Only the following flags can be set [see\n> fcntl(5)]: FAPPEND, FSYNC, FNDELAY, FNONBLK, FDIRECT, and\n> FASYNC. Since arg is used as a bit vector to set the flags,\n> values for all the flags must be specified in arg. (Typically,\n> arg may be constructed by obtaining existing values by F_GETFL\n> and then changing the particular flags.) FAPPEND is equivalent\n> to O_APPEND; FSYNC is equivalent to O_SYNC; FNDELAY is\n> equivalent to O_NDELAY; FNONBLK is equivalent to O_NONBLOCK;\n> and FDIRECT is equivalent to O_DIRECT. FASYNC is equivalent to\n> calling ioctl with the FIOASYNC command (except that with ioctl\n> all flags need not be specified). 
This enables the SIGIO\n> facilities and is currently supported only on sockets.\n> \n> ....but then I can find no details of FIOASYNC on the ioctl page or pages\n> referenced therein.\n\nI have found BSDI does not support async i/o. You need a separate\nprocess to do the i/o. The O_ASYNC flag only works on tty files.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 22 Apr 1998 10:15:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Safe/Fast I/O ..." } ]
[ { "msg_contents": "On Tue, 21 Apr 1998, Herouth Maoz wrote:\n\nYour example is very exhaustive Herouth. I tried it with SOLID and in fact\nit leaves SOLID database inconsistent.\n\nI see that PostgreSQL BEGIN/END statements are slight different from SQL\ntransactions that begins with a implicit begin transaction (no BEGIN command)\nand ends with a ROLLBACK or COMMIT statement.\n\nUntil now I thought that END was equal to COMMIT but in the case of:\n NOTICE: (transaction aborted): queries ignored until END\n *ABORT STATE*\nin this case END stands for ROLLBACK/ABORT I think it isn't enough clear.\n(I thought all reference to END were changed to COMMIT).\nPostgreSQL don't say to the user that all his work will be lost even if he do\nCOMMIT.\n\nMaybe the following warn is more clear:\n NOTICE: (transaction aborted): queries ignored until COMMIT/ROLLBAK\n WARN: all changes will be lost even if you use COMMIT.\n\nOf course SQL transaction allows all kind of SQL command because it doesn't \nworks outside transactions.\n\nPostgreSQL is more restrictive than SQL, then I think we need to know\nwhat kind of statements we can use successful inside transactions and\nPostgreSQL should reject all invalid commands.\n\n(I have to change information on BEGIN reference manual page, we have to\ndocument this feature of PostgreSQL).\n\nI've tried the following commands:\n o CREATE TABLE works.\n o DROP TABLE doesn't work properly after ROLLBACK, the table lives\n but it's empty.\n o CREATE/DROP INDEX works.\n o CREATE/DROP SEQUENCE works.\n o CREATE/DROP USER works.\n o GRANT/REVOKE works.\n o DROP VIEW works.\n o CREATE VIEWS aborts transactions see below:\n o DROP AGGREGATE works.\n o CREATE AGGREGATE doesn't work.\n o DROP FUNCTION works.\n o CREATE FUNCTION doesn't work.\n o ALTER TABLE seems that doesn't work properly see below:\n o CREATE/DROP DATABASE removes references from \"pg_database\" but\n don't remove directory /usr/local/pgsql/data/base/name_database.\n...Maybe 
somebody knows what more is valid/invalid inside transactions...\n\no EXAMPLE ALTER TABLE:\n\npostgres=> begin;\nBEGIN\npostgres=> \\d a\n\nTable = a\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| a | int2 | 2 |\n+----------------------------------+----------------------------------+-------+\npostgres=> select * from a;\n a\n-----\n32767\n(1 rows)\npostgres=> alter table a add b int;\nADD\npostgres=> \\d a\n\nTable = a\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| a | int2 | 2 |\n| b | int4 | 4 |\n+----------------------------------+----------------------------------+-------+\npostgres=> select * from a;\n a|b\n-----+-\n32767|\n(1 rows)\n\npostgres=> rollback;\nABORT\npostgres=> \\d a\n\nTable = a\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| a | int2 | 2 |\n+----------------------------------+----------------------------------+-------+\npostgres=> select * from a;\n a|b <------------------ column b is already here. 
Why ?\n-----+-\n32767|\n(1 rows)\npostgres=> rollback;\nABORT\npostgres=> \\d a\n\nTable = a\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| a | int2 | 2 |\n+----------------------------------+----------------------------------+-------+\npostgres=> select * from a;\n a|b\n-----+-\n32767|\n(1 rows)\n\n\no EXAMPLE CREATE VIEW:\n\npostgres=> begin;\nBEGIN\npostgres=> create view error as select * from films;\nCREATE\npostgres=> \\d error\n\nTable = error\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| code | char() | 5 |\n| title | varchar() | 40 |\n| did | int4 | 4 |\n| date_prod | date | 4 |\n| kind | char() | 10 |\n| len | int2 | 2 |\n+----------------------------------+----------------------------------+-------+\npostgres=> select * from error;\nPQexec() -- Request was sent to backend, but backend closed the channel before responding.\n This probably means the backend terminated abnormally before or while processing the request.\n\n> At 16:15 +0100 on 21/4/98, Jose' Soares Da Silva wrote:\n> \n> \n> > * Bad, this isn't very friendly.\n> >\n> > * No. What I would is that PostgreSQL don't abort at every smallest\n> > syntax error.\n> \n> It depends on what you expect from a transaction. The way I see it, a\n> transaction is a sequence of operations which either *all* succeed, or\n> *all* fail. 
That is, if one of the operations failed, even for a syntax\n> error, Postgres should not allow any of the other operations in the same\n> transaction to work.\n> \n> For example, suppose you want to move money from one bank account to\n> another, you'll do something like:\n> \n> BEGIN;\n> \n> UPDATE accounts\n> SET credit = credit - 20,000\n> WHERE account_num = '00-xx-00';\n> \n> UPDATE accounts\n> SET credit = credit + 20000\n> WHERE account_num = '11-xx-11';\n> \n> END;\n> \n> Now, look at this example. There is a syntax error in the first update\n> statement - 20,000 should be without a comma. If Postgres were tolerant,\n> your client would have an extra 20,000 dollars in one of his account, and\n> the money came from nowhere, which means your bank loses it, and you lose\n> your job...\n> \n> But a real RDBMS, as soon as one of the statement fails - no matter why -\n> the transaction would not happen. It notifies you that it didn't happen.\n> You can then decide what to do - issue a different transaction, fix the\n> program, whatever.\n> \n> The idea is that the two actions (taking money from one account and putting\n> it in another) are considered atomic, inseparable, and dependent. If your\n> \"real world\" thinking says that the next operation should happen, no matter\n> if the first one succeeded or failed, then they shouldn't be inside the\n> same transaction.\n> \n> Herouth\n\n", "msg_date": "Wed, 22 Apr 1998 11:39:51 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] errors on transactions and locks ?" }, { "msg_contents": "> \n> On Tue, 21 Apr 1998, Herouth Maoz wrote:\n> \n> Your example is very exhaustive Herouth. 
I tried it with SOLID and in fact\n> it leaves SOLID database inconsistent.\n> \n> I see that PostgreSQL BEGIN/END statements are slight different from SQL\n> transactions that begins with a implicit begin transaction (no BEGIN command)\n> and ends with a ROLLBACK or COMMIT statement.\n> \n> Until now I thought that END was equal to COMMIT but in the case of:\n> NOTICE: (transaction aborted): queries ignored until END\n> *ABORT STATE*\n> in this case END stands for ROLLBACK/ABORT I think it isn't enough clear.\n> (I thought all reference to END were changed to COMMIT).\n> PostgreSQL don't say to the user that all his work will be lost even if he do\n> COMMIT.\n> \n> Maybe the following warn is more clear:\n> NOTICE: (transaction aborted): queries ignored until COMMIT/ROLLBAK\n> WARN: all changes will be lost even if you use COMMIT.\n\nI have changed the text to read:\n\n \"all queries ignored until end of transaction block\"); \n \n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Mon, 15 Jun 1998 22:43:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] errors on transactions and locks ?" }, { "msg_contents": "I think 6.6 will improve this.\n\n\n> On Tue, 21 Apr 1998, Herouth Maoz wrote:\n> \n> Your example is very exhaustive Herouth. 
I tried it with SOLID and in fact\n> it leaves SOLID database inconsistent.\n> \n> I see that PostgreSQL BEGIN/END statements are slight different from SQL\n> transactions that begins with a implicit begin transaction (no BEGIN command)\n> and ends with a ROLLBACK or COMMIT statement.\n> \n> Until now I thought that END was equal to COMMIT but in the case of:\n> NOTICE: (transaction aborted): queries ignored until END\n> *ABORT STATE*\n> in this case END stands for ROLLBACK/ABORT I think it isn't enough clear.\n> (I thought all reference to END were changed to COMMIT).\n> PostgreSQL don't say to the user that all his work will be lost even if he do\n> COMMIT.\n> \n> Maybe the following warn is more clear:\n> NOTICE: (transaction aborted): queries ignored until COMMIT/ROLLBAK\n> WARN: all changes will be lost even if you use COMMIT.\n> \n> Of course SQL transaction allows all kind of SQL command because it doesn't \n> works outside transactions.\n> \n> PostgreSQL is more restrictive than SQL, then I think we need to know\n> what kind of statements we can use successful inside transactions and\n> PostgreSQL should reject all invalid commands.\n> \n> (I have to change information on BEGIN reference manual page, we have to\n> document this feature of PostgreSQL).\n> \n> I've tried the following commands:\n> o CREATE TABLE works.\n> o DROP TABLE doesn't work properly after ROLLBACK, the table lives\n> but it's empty.\n> o CREATE/DROP INDEX works.\n> o CREATE/DROP SEQUENCE works.\n> o CREATE/DROP USER works.\n> o GRANT/REVOKE works.\n> o DROP VIEW works.\n> o CREATE VIEWS aborts transactions see below:\n> o DROP AGGREGATE works.\n> o CREATE AGGREGATE doesn't work.\n> o DROP FUNCTION works.\n> o CREATE FUNCTION doesn't work.\n> o ALTER TABLE seems that doesn't work properly see below:\n> o CREATE/DROP DATABASE removes references from \"pg_database\" but\n> don't remove directory /usr/local/pgsql/data/base/name_database.\n> ...Maybe somebody knows what more is 
valid/invalid inside transactions...\n> \n> o EXAMPLE ALTER TABLE:\n> \n> postgres=> begin;\n> BEGIN\n> postgres=> \\d a\n> \n> Table = a\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | a | int2 | 2 |\n> +----------------------------------+----------------------------------+-------+\n> postgres=> select * from a;\n> a\n> -----\n> 32767\n> (1 rows)\n> postgres=> alter table a add b int;\n> ADD\n> postgres=> \\d a\n> \n> Table = a\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | a | int2 | 2 |\n> | b | int4 | 4 |\n> +----------------------------------+----------------------------------+-------+\n> postgres=> select * from a;\n> a|b\n> -----+-\n> 32767|\n> (1 rows)\n> \n> postgres=> rollback;\n> ABORT\n> postgres=> \\d a\n> \n> Table = a\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | a | int2 | 2 |\n> +----------------------------------+----------------------------------+-------+\n> postgres=> select * from a;\n> a|b <------------------ column b is already here. 
Why ?\n> -----+-\n> 32767|\n> (1 rows)\n> postgres=> rollback;\n> ABORT\n> postgres=> \\d a\n> \n> Table = a\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | a | int2 | 2 |\n> +----------------------------------+----------------------------------+-------+\n> postgres=> select * from a;\n> a|b\n> -----+-\n> 32767|\n> (1 rows)\n> \n> \n> o EXAMPLE CREATE VIEW:\n> \n> postgres=> begin;\n> BEGIN\n> postgres=> create view error as select * from films;\n> CREATE\n> postgres=> \\d error\n> \n> Table = error\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | code | char() | 5 |\n> | title | varchar() | 40 |\n> | did | int4 | 4 |\n> | date_prod | date | 4 |\n> | kind | char() | 10 |\n> | len | int2 | 2 |\n> +----------------------------------+----------------------------------+-------+\n> postgres=> select * from error;\n> PQexec() -- Request was sent to backend, but backend closed the channel before responding.\n> This probably means the backend terminated abnormally before or while processing the request.\n> \n> > At 16:15 +0100 on 21/4/98, Jose' Soares Da Silva wrote:\n> > \n> > \n> > > * Bad, this isn't very friendly.\n> > >\n> > > * No. What I would is that PostgreSQL don't abort at every smallest\n> > > syntax error.\n> > \n> > It depends on what you expect from a transaction. The way I see it, a\n> > transaction is a sequence of operations which either *all* succeed, or\n> > *all* fail. 
That is, if one of the operations failed, even for a syntax\n> > error, Postgres should not allow any of the other operations in the same\n> > transaction to work.\n> > \n> > For example, suppose you want to move money from one bank account to\n> > another, you'll do something like:\n> > \n> > BEGIN;\n> > \n> > UPDATE accounts\n> > SET credit = credit - 20,000\n> > WHERE account_num = '00-xx-00';\n> > \n> > UPDATE accounts\n> > SET credit = credit + 20000\n> > WHERE account_num = '11-xx-11';\n> > \n> > END;\n> > \n> > Now, look at this example. There is a syntax error in the first update\n> > statement - 20,000 should be without a comma. If Postgres were tolerant,\n> > your client would have an extra 20,000 dollars in one of his account, and\n> > the money came from nowhere, which means your bank loses it, and you lose\n> > your job...\n> > \n> > But a real RDBMS, as soon as one of the statement fails - no matter why -\n> > the transaction would not happen. It notifies you that it didn't happen.\n> > You can then decide what to do - issue a different transaction, fix the\n> > program, whatever.\n> > \n> > The idea is that the two actions (taking money from one account and putting\n> > it in another) are considered atomic, inseparable, and dependent. If your\n> > \"real world\" thinking says that the next operation should happen, no matter\n> > if the first one succeeded or failed, then they shouldn't be inside the\n> > same transaction.\n> > \n> > Herouth\n> \n> --\n> Official WWW Site: http://www.postgresql.org\n> Online Docs & FAQ: http://www.postgresql.org/docs\n> Searchable Lists: http://www.postgresql.org/mhonarc\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 17 Sep 1999 00:45:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] errors on transactions and locks ?" }, { "msg_contents": "Hi all,\n\nI have again a problem about TRANSACTIONS.\nI had some answers about this matter some time ago, but unfortunately the solution wasn't yet found.\nTransaction are essentials for a relational database but in the case of PostgreSQL some times it's\nimpossible\nto use them. Right now I'm in a middle of a work and I need to use transactions but I can't go on because\nthere are some \"warnings\" that I would like to avoid but I can't.\n\n\nProblem:\n\n PostgreSQL automatically ABORTS at every error, even a syntax error.\n I know that a transaction is a sequence of operations which either all succeed, or all fail, and\n this behavior is correct for batch mode operations, but it is not useful in interactive mode where\nthe user\n could decide if the transaction should be COMMITed or ROLLBACKed even in presence of errors.\n Other databases have such behavior.\n\nWhat about to have a variable to set like:\n\nSET TRANSACTION MODE TO {BATCH | INTERACTIVE}\n\nwhere:\n BATCH: the transaction ROLLBACK at first error and COMMIT only if all operations\nsucceed.\n INTERACTIVE: leaves the final decision to user to COMMIT or ROLLBACK even if some error occurred.\n\n\nComments...\n\nJose'\n\n\n", "msg_date": "Wed, 01 Dec 1999 11:53:15 +0100", "msg_from": "jose soares <[email protected]>", "msg_from_op": false, "msg_subject": "TRANSACTION \"WARNINGS\"" }, { "msg_contents": "Hi,\n\nIts me again,\n\nI'm trying to use transactions thru ODBC but it seems to be impossible.\nI'm populating my tables using transactions thru ODBC and before to INSERT a row to a table\nI check if such row already exist in that table.\nif result is FALSE I insert the row into the table otherwise I skip the INSERT operation.\nI have a log in which ODBC
checks for an unexistent row but when I try to INSERT the row\nI cannot insert it, there's a duplicate index error.\nI have only two index in that table and only one of them is UNIQUE and I know there is no\nother row with the same index in that table.\nIf I use the same program without transactions it works fine.\n\nAny ideas?\n\nhere the log:\n\n<DELETED>\nconn=61438304, SQLDriverConnect(out)='DSN=PostgreSQL;DATABASE=hygea;SERVER=verde\nconn=61438304, query='SELECT \"utenti\".\"azienda\",\"utenti\".\"inizio_attivita\"\nFROM \"utenti\" WHERE (\"azienda\" = '01879540308' ) '\n [ fetched 0 rows ]\nconn=61438304, SQLDisconnect\nconn=61284284, query='INSERT INTO \"utenti\"\n(\"azienda\",\"ragione_sociale\",\"istat\",\"cap\",\"indirizzo\",\"partita_iva\",\"istat_nascita\",\"distretto\",\"data_aggiornamento\")\n\nVALUES ('01879540308','FONZAR PAOLO-LUCA-LUCIANO E DANIELA','030120','33050','VIA PROVINCIALE\nN.4','01879540308','000000','G10500','1999-11-17 00:00:00')'\nERROR from backend during send_query: 'ERROR: Cannot insert a duplicate key into a unique index'\nconn=61284284, query='ABORT'\n<DELETED>\n\nand here the table structure:\n\nTable = utenti\n+----------------------------------+----------------------------------+-------+\n|Field |Type | Length|\n+----------------------------------+----------------------------------+-------+\n| azienda | char() not null | 16 |\n| ragione_sociale | varchar() not null | 45 |\n| istat | char() not null | 6 |\n| cap | char() | 5 |\n| indirizzo | char() | 40 |\n| civico | char() | 10 |\n| distretto_interno | char() | 3 |\n| frazione | char() | 25 |\n| telefono | char() | 15 |\n| fax | char() | 15 |\n| email | char() | 15 |\n| codice_fiscale | char() | 16 |\n| partita_iva | char() | 11 |\n| cciaa | char() | 8 |\n| data_ccia | date | 4 |\n| data_nascita | date | 4 |\n| istat_nascita | char() | 6 |\n| stato_attivita | char() | 2 |\n| fuori_usl | char() default 'N' | 1 |\n| assegnazione_codice | date | 4 |\n| inizio_attivita | date not 
null default date( 'cur | 4 |\n| fine_attivita | date | 4 |\n| dpr317 | char() default 'N' | 1 |\n| distretto | char() | 6 |\n| data_aggiornamento | timestamp default now() | 4 |\n| aggiornato_da | char() default CURRENT_USER | 10 |\n| data_esportazione | date | 4 |\n| data_precedente_esp | date | 4 |\n+----------------------------------+----------------------------------+-------+\nIndices: utenti_pkey\n utenti_ragione_idx\n\n\nThanks for any help.\nJose'\n\n\njose soares ha scritto:\n\n> Hi all,\n\n> I have again a problem about TRANSACTIONS.\n> I had some answers about this matter some time ago, but unfortunately the solution wasn't yet found.\n> Transaction are essentials for a relational database but in the case of PostgreSQL some times it's\n> impossible\n> to use them. Right now I'm in a middle of a work and I need to use transactions but I can't go on because\n> there are some \"warnings\" that I would like to avoid but I can't.\n>\n> Problem:\n>\n> PostgreSQL automatically ABORTS at every error, even a syntax error.\n> I know that a transaction is a sequence of operations which either all succeed, or all fail, and\n> this behavior is correct for batch mode operations, but it is not useful in interactive mode where\n> the user\n> could decide if the transaction should be COMMITed or ROLLBACKed even in presence of errors.\n> Other databases have such behavior.\n>\n> What about to have a variable to set like:\n>\n> SET TRANSACTION MODE TO {BATCH | INTERACTIVE}\n>\n> where:\n> BATCH: the transaction ROLLBACK at first error and COMMIT only if all operations\n> succeed.\n> INTERACTIVE: leaves the final decision to user to COMMIT or ROLLBACK even if some error occurred.\n>\n> Comments...\n>\n> Jose'\n>\n> ************\n", "msg_date": "Tue, 07 Dec 1999 10:47:01 +0100", "msg_from": "jose soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TRANSACTION \"WARNINGS\"" } ]
[ { "msg_contents": "\nI've never seen this one reported before...just doing a \\d in psql:\n\nsalesorg=> \\d\nNOTICE: SIMarkEntryData: cache state reset\nPQexec() -- Request was sent to backend, but backend closed the channel\n\tbefore responding.\n This probably means the backend terminated abnormally before or\n\twhile processing the request.\n\n\nThis one is still a v6.3.1 system...won't upgrade it until the new system\nis in place (May 11th)...but figured I'd show it :)\n\n\n", "msg_date": "Wed, 22 Apr 1998 08:35:40 -0400 (EDT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Here's a new one ..." } ]
[ { "msg_contents": "> \n> \n> Hi, all\n> \n> Is the following sentence true ?\n> \n> * LOCK TABLE statement don't allows read access to locked tables by\n> the other users.\n> If another user try to SELECT a locked table, he must attend\n> until the locked table is released.\n> \n> I have heard about another syntax of LOCK TABLE that allows read access\n> to locked tables, on v6.3.2.\n> Is it true or I've dream ?\n\nNo, perhaps in 6.4. However, any select from a table does it.\n\n-- \nBruce Momjian | 830 Blythe Avenue\[email protected] | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Wed, 22 Apr 1998 10:28:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] LOCK TABLE statement" }, { "msg_contents": "\nHi, all\n\n Is the following sentence true ?\n\n * LOCK TABLE statement don't allows read access to locked tables by\n the other users.\n If another user try to SELECT a locked table, he must attend\n until the locked table is released.\n\n I have heard about another syntax of LOCK TABLE that allows read access\n to locked tables, on v6.3.2.\n Is it true or I've dream ?\n Thanks, Jose'\n\n", "msg_date": "Wed, 22 Apr 1998 15:03:32 +0000 (UTC)", "msg_from": "\"Jose' Soares Da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "LOCK TABLE statement" } ]
[ { "msg_contents": "Is it correct that the following two statements are equal?\n\nselect \"a\" from foo;\n\nselect a from foo;\n\n\nThat results in the following problem for ecpg:\n\nWhen I'm in SQL mode (that is after reading \"exec sql\") I do not get\nquotations. But what do I do with this?\n\nexec sql whenever sqlerror do printf(\"There was an error\\n\");\n\nSince my lex file is almost the same as scan.l I wonder if anyone has an\nidea.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Wed, 22 Apr 1998 16:42:25 +0200 ()", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "parser problem" }, { "msg_contents": "> Is it correct that the following two statements are equal?\n> select \"a\" from foo;\n> select a from foo;\n\nYes.\n\n> That results in the following problem for ecpg:\n> When I'm in SQL mode (that is after reading \"exec sql\") I do not get\n> quotations. But what do I do with this?\n> \n> exec sql whenever sqlerror do printf(\"There was an error\\n\");\n> \n> Since my lex file is almost the same as scan.l I wonder if anyone has \n> an idea.\n\nWhat different kinds of clauses are available with the \"whenever ...\ndo\"? My Ingres manual indicates that the syntax is:\n\n exec sql whenever <condition> <action>\n\nwhere <condition> is one of:\n\n sqlwarning\n sqlerror\n sqlmessage\n not found\n dbevent\n\nand the <action> is one of:\n\n continue\n stop\n goto <label>\n call <procedure>\n\nwhere <procedure> cannot be called with any arguments. This syntax would\nbe easy to parse with your existing lexer. My SQL books shows an even\nmore limited syntax with only \"continue\" and \"goto\" allowed.\n\nIf you want to allow some other syntax, including double-quoted strings,\nthen you will need to implement it explicitly in your grammar.\n\n - Tom\n", "msg_date": "Thu, 23 Apr 1998 02:10:46 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] parser problem" }, { "msg_contents": "Thomas G. Lockhart writes:\n> where <condition> is one of:\n> \n> sqlwarning\n> sqlerror\n> sqlmessage\n> not found\n> dbevent\n\nAt the moment we only can sqlerror and not found.\n\n> continue\n> stop\n> goto <label>\n\nGot these plus go to <label> and sqlprint.\n\n> call <procedure>\n\nHmm, this is called \"do\" in Oracle. I think I allow both for compatibility.\n\n> where <procedure> cannot be called with any arguments. This syntax would\n> be easy to parse with your existing lexer. My SQL books shows an even\n> more limited syntax with only \"continue\" and \"goto\" allowed.\n\nYes, but we don't have to play it down to the standard, do we? :-)\n\n> If you want to allow some other syntax, including double-quoted strings,\n> then you will need to implement it explicitly in your grammar.\n\nIMO an argument is a very good idea. So I have to think about it a little\nbit more.\n\nMichael\n-- \nDr. Michael Meskes, Project-Manager | topsystem Systemhaus GmbH\[email protected] | Europark A2, Adenauerstr. 20\[email protected] | 52146 Wuerselen\nGo SF49ers! Go Rhein Fire! | Tel: (+49) 2405/4670-44\nUse Debian GNU/Linux! | Fax: (+49) 2405/4670-10\n", "msg_date": "Thu, 23 Apr 1998 09:31:43 +0200 (CEST)", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] parser problem" } ]
[ { "msg_contents": "\n> When using aio for file or raw device access the following functions \n> have to be used (from sys/aio.h):\n> \nint aio_read(int, struct aiocb *);\nint aio_write(int, struct aiocb *);\nint aio_cancel(int, struct aiocb *);\n\tint aio_suspend(int, struct aiocb *[]);\n\n\tThe main advantage is not read ahead or the like (read ahead can be \n\taccomplished with other means, e.g. separate reader and writer\nprocesses).\n\tThe main advantage is, that a process that calls these for IO will\nnot \n\tbe suspended by the OPsys, and can therefore do other work\n\tuntil the data is available. On fast disks the data will be\navailable\n\tbefore the process time slice (20 - 50 ms) is over !\n\tA process using normal read or write will have to wait until\n\tall other processes have consumed their time slice.\n\n\tI think the first step should be separate global IO processes,\n\tthese could then in a second step use aio.\n\n\tAndreas\n", "msg_date": "Wed, 22 Apr 1998 18:10:14 +0200", "msg_from": "Zeugswetter Andreas SARZ <[email protected]>", "msg_from_op": true, "msg_subject": "[HACKERS] Async IO description" }, { "msg_contents": "Zeugswetter Andreas SARZ wrote:\n> \n> \n> > When using aio for file or raw device access the following functions \n> > have to be used (from sys/aio.h):\n> > \n> int aio_read(int, struct aiocb *);\n> int aio_write(int, struct aiocb *);\n> int aio_cancel(int, struct aiocb *);\n> \tint aio_suspend(int, struct aiocb *[]);\n> \n> \tThe main advantage is not read ahead or the like (read ahead can be \n> \taccomplished with other means, e.g. separate reader and writer\n> processes).\n> \tThe main advantage is, that a process that calls these for IO will\n> not \n> \tbe suspended by the OPsys, and can therefore do other work\n> \tuntil the data is available. On fast disks the data will be\n> available\n> \tbefore the process time slice (20 - 50 ms) is over !\n> \tA process using normal read or write will have to wait until\n> \tall other processes have consumed their time slice.\n> \n> \tI think the first step should be separate global IO processes,\n> \tthese could then in a second step use aio.\n> \n> \tAndreas\n> \n> \n\nThis will limit us to operating systems that support POSIX aio. This\nmean Linux (in the future), Solaris (anso in the future) and\npresumably FreeBSD. Developing the support for AIO before we have a\nplace to test is could lead to trouble. We should also have an\nalternative for those systems that don't (or won't) support POSIX aio.\n\nOne solution to this might be to write a group of AIO macros for\npostgres. If done correctly, they could be implemented as calls ot\nthe POSIX AIO functions on POSIX systems that support this, and could\ncall the normal I/O functions on non POSIX AIO systems.\n\nAlso, in order to make the most effective use of AIO, the program will\nhave to undergo a major rewrite. Just a short example to ponder (get\nme flamed :)\n\nSuppose we are doing a search with a btree index. We read in the\nfirst page and find that we will need to read in four of its \"child\"\npages. We issue an aio_read for each page and aio_suspend until one\nof them comes in. Then you have to figure out which one is ready, go\nwork on that page, etc.\n\nLastly, aio reads and writes require memory copying, which can slow\nthings down. memory mapping doesn't have this problem -- what you\nwrite is copied directly to the disk without being copied to another\nbuffer.\n\nWell enough rambling for one day. \n\nOcie\n", "msg_date": "Wed, 22 Apr 1998 14:01:00 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Async IO description" } ]