[ { "msg_contents": "Trying to compile current sources using:\n\n./configure --prefix=/home/ler/pg-test --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-openssl --with-tcl \\\n\t--with-tclconfig=/usr/local/lib/tcl8.3 \\\n\t--with-tkconfig=/usr/local/lib/tk8.3\n\t\nI get the following death:\n\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq++'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpgtcl'\ngmake -C ../../../src/interfaces/libpq all\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ngmake[4]: Nothing to be done for `all'.\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -I../../../src/interfaces/libpq -c -o pgtcl.o pgtcl.c\nIn file included from pgtcl.c:19:\nlibpgtcl.h:19: tcl.h: No such file or directory\nIn file included from pgtcl.c:20:\npgtclCmds.h:17: tcl.h: No such file or directory\nIn file included from pgtcl.c:19:\nlibpgtcl.h:21: syntax error before `*'\nlibpgtcl.h:22: syntax error before `*'\nIn file included from pgtcl.c:20:\npgtclCmds.h:35: syntax error before `Tcl_Interp'\npgtclCmds.h:71: syntax error before `cData'\npgtclCmds.h:73: syntax error before `cData'\npgtclCmds.h:75: syntax error before `cData'\npgtclCmds.h:77: syntax error before `cData'\npgtclCmds.h:79: syntax error before `cData'\npgtclCmds.h:81: syntax error before `cData'\npgtclCmds.h:83: syntax error before `cData'\npgtclCmds.h:85: syntax error before `cData'\npgtclCmds.h:87: syntax error before `cData'\npgtclCmds.h:89: syntax error before `cData'\npgtclCmds.h:91: syntax error before `cData'\npgtclCmds.h:93: syntax error before `cData'\npgtclCmds.h:95: syntax error before `cData'\npgtclCmds.h:97: syntax error before `cData'\npgtclCmds.h:99: syntax error before `cData'\npgtclCmds.h:101: syntax error before `cData'\npgtclCmds.h:103: syntax error before `cData'\nIn file included from pgtcl.c:21:\npgtclId.h:18: syntax error before `*'\npgtclId.h:37: syntax error before `*'\npgtclId.h:39: syntax error before `cData'\npgtclId.h:40: syntax error before `cData'\npgtclId.h:41: syntax error before `cData'\npgtclId.h:42: syntax error before `*'\npgtclId.h:43: syntax error before `*'\npgtclId.h:44: syntax error before `*'\npgtclId.h:45: syntax error before `*'\npgtclId.h:49: syntax error before `clientData'\npgtclId.h:63: syntax error before `Pg_ConnType'\npgtclId.h:63: warning: type defaults to `int' in declaration of `Pg_ConnType'\npgtclId.h:63: warning: data definition has no type or storage class\npgtcl.c:30: syntax error before `*'\npgtcl.c:31: warning: no previous prototype for `Pgtcl_Init'\npgtcl.c: In function `Pgtcl_Init':\npgtcl.c:43: warning: implicit declaration of function `Tcl_CreateCommand'\npgtcl.c:43: `interp' undeclared (first use in this function)\npgtcl.c:43: (Each undeclared identifier is reported only once\npgtcl.c:43: for each function it appears in.)\npgtcl.c:46: `ClientData' undeclared (first use in this function)\npgtcl.c:46: syntax error before `0'\npgtcl.c:51: syntax error before `0'\npgtcl.c:56: syntax error before `0'\npgtcl.c:61: syntax error before `0'\npgtcl.c:66: syntax error before `0'\npgtcl.c:71: syntax error before `0'\npgtcl.c:76: syntax error before `0'\npgtcl.c:81: syntax error before `0'\npgtcl.c:86: syntax error before `0'\npgtcl.c:91: syntax error before `0'\npgtcl.c:96: syntax error before `0'\npgtcl.c:101: syntax error before 
`0'\npgtcl.c:106: syntax error before `0'\npgtcl.c:111: syntax error before `0'\npgtcl.c:116: syntax error before `0'\npgtcl.c:121: syntax error before `0'\npgtcl.c:126: syntax error before `0'\npgtcl.c:128: warning: implicit declaration of function `Tcl_PkgProvide'\npgtcl.c:130: `TCL_OK' undeclared (first use in this function)\npgtcl.c:131: warning: control reaches end of non-void function\npgtcl.c: At top level:\npgtcl.c:135: syntax error before `*'\npgtcl.c:136: warning: no previous prototype for `Pgtcl_SafeInit'\npgtcl.c: In function `Pgtcl_SafeInit':\npgtcl.c:137: `interp' undeclared (first use in this function)\ngmake[3]: *** [pgtcl.o] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpgtcl'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 25 Nov 2000 16:24:11 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [001125 17:18]:\n> Larry Rosenman writes:\n> \n> > libpgtcl.h:19: tcl.h: No such file or directory\n> \n> How do you suggest going about finding the tcl.h file?\nit's in /usr/local/include/tcl8.3/ ... \n\nThis will be a problem with TCL as installed by FreeBSD PORTS... \n\n(maybe configure ought to look for it, or have a --with-tclinclude=?\n) \n\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 25 Nov 2000 17:21:54 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "Larry Rosenman writes:\n\n> libpgtcl.h:19: tcl.h: No such file or directory\n\nHow do you suggest going about finding the tcl.h file?\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 26 Nov 2000 00:23:55 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "Larry Rosenman writes:\n\n> > > libpgtcl.h:19: tcl.h: No such file or directory\n> > \n> > How do you suggest going about finding the tcl.h file?\n> it's in /usr/local/include/tcl8.3/ ... \n> \n> This will be a problem with TCL as installed by FreeBSD PORTS... \n> \n> (maybe configure ought to look for it, or have a --with-tclinclude=?\n\nI was hoping for a way to let Tcl tell us. 
The info is certainly there\nsomewhere (in tclConfig.sh most likely).\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 26 Nov 2000 01:31:29 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [001125 18:26]:\n> Larry Rosenman writes:\n> \n> > > > libpgtcl.h:19: tcl.h: No such file or directory\n> > > \n> > > How do you suggest going about finding the tcl.h file?\n> > it's in /usr/local/include/tcl8.3/ ... \n> > \n> > This will be a problem with TCL as installed by FreeBSD PORTS... \n> > \n> > (maybe configure ought to look for it, or have a --with-tclinclude=?\n> \n> I was hoping for a way to let Tcl tell us. The info is certainly there\n> somewhere (in tclConfig.sh most likely).\nIt's not. I already looked in it. I'm not sure what the \n\"standard\" is on FreeBSD, I do know that they support multiple\nversions concurrently on FreeBSD. I'm not sure what the right fix \nfor us is. \nBased on the current 7.0.2 port (from Marc...):\n\nCONFIGURE_TCL= --with-tcl --with-tclconfig=\"${LOCALBASE}/lib/tcl8.3 ${LOCALBASE }/lib/tk8.3\"\n\nWorks. This is, umm, messy at best.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 25 Nov 2000 18:33:09 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "* Larry Rosenman <[email protected]> [001125 18:34]:\n> CONFIGURE_TCL= --with-tcl --with-tclconfig=\"${LOCALBASE}/lib/tcl8.3 ${LOCALBASE }/lib/tk8.3\"\n> \n> Works. This is, umm, messy at best.\nErr, I lied, Marc adds the /usr/local/include/tcl8.3 and tk8.3 dirs to \nthe --with-includes configure option. \n\nStill messy.\n> > \n> > -- \n> > Peter Eisentraut [email protected] http://yi.org/peter-e/\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 25 Nov 2000 18:39:33 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "* Larry Rosenman <[email protected]> [001125 18:40]:\n> * Larry Rosenman <[email protected]> [001125 18:34]:\n> > CONFIGURE_TCL= --with-tcl --with-tclconfig=\"${LOCALBASE}/lib/tcl8.3 ${LOCALBASE }/lib/tk8.3\"\n> > \n> > Works. This is, umm, messy at best.\n> Err, I lied, Marc adds the /usr/local/include/tcl8.3 and tk8.3 dirs to \n> the --with-includes configure option. \n> \n> Still messy.\nand it breaks now on 7.1devel sources... 
\n\n\n> > > \n> > > -- \n> > > Peter Eisentraut [email protected] http://yi.org/peter-e/\n> > \n> > -- \n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: [email protected]\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 25 Nov 2000 18:53:29 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "* Larry Rosenman <[email protected]> [001125 18:54]:\n> > > Works. This is, umm, messy at best.\n> > Err, I lied, Marc adds the /usr/local/include/tcl8.3 and tk8.3 dirs to \n> > the --with-includes configure option. \n> > \n> > Still messy.\n> and it breaks now on 7.1devel sources... \nHere is what I issued configure with:\n\n./configure --prefix=/home/ler/pg-test --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-openssl \\\n\t--with-includes=\"/usr/local/include/tcl8.3 /usr/local/include/tk8.3\" \\\n\t--with-tcl \\\n\t--with-tclconfig=/usr/local/lib/tcl8.3 \\\n\t--with-tkconfig=/usr/local/lib/tk8.3\n\nand it still breaks the same way. The output doesn't show the\n--with-includes directive directories ANYWHERE in the make output :-( \n\nLER\n\n\t\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 25 Nov 2000 19:03:44 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "Larry Rosenman writes:\n\n> The output doesn't show the --with-includes directive directories\n> ANYWHERE in the make output :-(\n\nFixed\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 26 Nov 2000 19:21:58 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [001126 12:16]:\n> Larry Rosenman writes:\n> \n> > The output doesn't show the --with-includes directive directories\n> > ANYWHERE in the make output :-(\n> \n> Fixed\nnope. Still dies in the same place in the same way. 
The configure\ninput I gave was:\n\n./configure --prefix=/home/ler/pg-test --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-openssl \\\n\t--with-includes=\"/usr/local/include/tcl8.3,/usr/local/include/tk8.3\" \\\n\t--with-tcl \\\n\t--with-tclconfig=/usr/local/lib/tcl8.3 \\\n\t--with-tkconfig=/usr/local/lib/tk8.3\n\t\nand those include directories NEVER appear in the make output:\n\ngmake -C doc all\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/doc'\ngmake[1]: Nothing to be done for `all'.\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/doc'\ngmake -C src all\ngmake[1]: Entering directory `/home/ler/pg-dev/pgsql/src'\ngmake -C backend all\ngmake[2]: Entering directory `/home/ler/pg-dev/pgsql/src/backend'\nprereqdir=`cd parser/ && pwd` && \\\n cd ../../src/include/parser/ && rm -f parse.h && \\\n ln -s $prereqdir/parse.h .\ngmake -C utils fmgroids.h\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils'\nCPP='gcc -E' AWK='awk' /bin/sh Gen_fmgrtab.sh ../../../src/include/catalog/pg_proc.h\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ncd ../../src/include/utils/ && rm -f fmgroids.h && \\\n ln -s ../../../src/backend/utils/fmgroids.h .\ngmake -C access all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access'\ngmake -C common SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/common'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o heaptuple.o heaptuple.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o indextuple.o indextuple.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o indexvalid.o indexvalid.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o printtup.o printtup.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o scankey.o scankey.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o tupdesc.o tupdesc.c\n/usr/libexec/elf/ld -r -o SUBSYS.o heaptuple.o indextuple.o indexvalid.o printtup.o scankey.o tupdesc.o \ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/common'\ngmake -C gist SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/gist'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o gist.o gist.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o gistget.o gistget.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o gistscan.o gistscan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o giststrat.o giststrat.c\n/usr/libexec/elf/ld -r -o SUBSYS.o gist.o gistget.o gistscan.o giststrat.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/gist'\ngmake -C hash SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/hash'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hash.o hash.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hashfunc.o hashfunc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hashinsert.o hashinsert.c\ngcc -pipe 
-O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hashovfl.o hashovfl.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hashpage.o hashpage.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hashscan.o hashscan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hashsearch.o hashsearch.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hashstrat.o hashstrat.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hashutil.o hashutil.c\n/usr/libexec/elf/ld -r -o SUBSYS.o hash.o hashfunc.o hashinsert.o hashovfl.o hashpage.o hashscan.o hashsearch.o hashstrat.o hashutil.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/hash'\ngmake -C heap SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/heap'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o heapam.o heapam.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hio.o hio.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o stats.o stats.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o tuptoaster.o tuptoaster.c\n/usr/libexec/elf/ld -r -o SUBSYS.o heapam.o hio.o stats.o tuptoaster.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/heap'\ngmake -C index SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/index'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o genam.o genam.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o indexam.o indexam.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o istrat.o istrat.c\n/usr/libexec/elf/ld -r -o SUBSYS.o genam.o indexam.o istrat.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/index'\ngmake -C nbtree SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/nbtree'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nbtcompare.o nbtcompare.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nbtinsert.o nbtinsert.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nbtpage.o nbtpage.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nbtree.o nbtree.c\nnbtree.c:740: warning: `_bt_cleanup_page' defined but not used\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nbtscan.o nbtscan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nbtsearch.o nbtsearch.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nbtstrat.o nbtstrat.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nbtutils.o nbtutils.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nbtsort.o nbtsort.c\n/usr/libexec/elf/ld -r -o SUBSYS.o nbtcompare.o nbtinsert.o nbtpage.o nbtree.o 
nbtscan.o nbtsearch.o nbtstrat.o nbtutils.o nbtsort.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/nbtree'\ngmake -C rtree SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/rtree'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o rtget.o rtget.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o rtproc.o rtproc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o rtree.o rtree.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o rtscan.o rtscan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o rtstrat.o rtstrat.c\n/usr/libexec/elf/ld -r -o SUBSYS.o rtget.o rtproc.o rtree.o rtscan.o rtstrat.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/rtree'\ngmake -C transam SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/access/transam'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o transam.o transam.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o transsup.o transsup.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o varsup.o varsup.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o xact.o xact.c\nxact.c: In function `RecordTransactionCommit':\nxact.c:699: warning: implicit declaration of function `select'\nxact.c: In function `xact_redo':\nxact.c:1760: warning: unused variable `xlrec'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o xid.o xid.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o xlog.o xlog.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o xlogutils.o xlogutils.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o rmgr.o rmgr.c\n/usr/libexec/elf/ld -r -o SUBSYS.o transam.o transsup.o varsup.o xact.o xid.o xlog.o xlogutils.o rmgr.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access/transam'\n/usr/libexec/elf/ld -r -o SUBSYS.o common/SUBSYS.o gist/SUBSYS.o hash/SUBSYS.o heap/SUBSYS.o index/SUBSYS.o nbtree/SUBSYS.o rtree/SUBSYS.o transam/SUBSYS.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/access'\ngmake -C bootstrap all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/bootstrap'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o bootparse.o bootparse.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o bootscanner.o bootscanner.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o bootstrap.o bootstrap.c\n/usr/libexec/elf/ld -r -o SUBSYS.o bootparse.o bootscanner.o bootstrap.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/bootstrap'\ngmake -C catalog all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/catalog'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o catalog.o catalog.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o 
heap.o heap.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o index.o index.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o indexing.o indexing.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o aclchk.o aclchk.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pg_aggregate.o pg_aggregate.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pg_largeobject.o pg_largeobject.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pg_operator.o pg_operator.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pg_proc.o pg_proc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pg_type.o pg_type.c\n/usr/libexec/elf/ld -r -o SUBSYS.o catalog.o heap.o index.o indexing.o aclchk.o pg_aggregate.o pg_largeobject.o pg_operator.o pg_proc.o pg_type.o\nCPP='gcc -E' AWK='awk' /bin/sh genbki.sh -o global -I../../../src/include ../../../src/include/catalog/pg_database.h ../../../src/include/catalog/pg_variable.h ../../../src/include/catalog/pg_shadow.h ../../../src/include/catalog/pg_group.h ../../../src/include/catalog/pg_log.h --set-version=7.1devel\nCPP='gcc -E' AWK='awk' /bin/sh genbki.sh -o template1 -I../../../src/include ../../../src/include/catalog/pg_proc.h ../../../src/include/catalog/pg_type.h ../../../src/include/catalog/pg_attribute.h ../../../src/include/catalog/pg_class.h ../../../src/include/catalog/pg_inherits.h ../../../src/include/catalog/pg_index.h ../../../src/include/catalog/pg_statistic.h ../../../src/include/catalog/pg_operator.h ../../../src/include/catalog/pg_opclass.h ../../../src/include/catalog/pg_am.h ../../../src/include/catalog/pg_amop.h ../../../src/include/catalog/pg_amproc.h ../../../src/include/catalog/pg_language.h ../../../src/include/catalog/pg_largeobject.h ../../../src/include/catalog/pg_aggregate.h ../../../src/include/catalog/pg_ipl.h ../../../src/include/catalog/pg_inheritproc.h ../../../src/include/catalog/pg_rewrite.h ../../../src/include/catalog/pg_listener.h ../../../src/include/catalog/pg_description.h ../../../src/include/catalog/indexing.h --set-version=7.1devel\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/catalog'\ngmake -C parser all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/parser'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o analyze.o analyze.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o gram.o gram.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o keywords.o keywords.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parser.o parser.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parse_agg.o parse_agg.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parse_clause.o parse_clause.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parse_expr.o parse_expr.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o 
parse_func.o parse_func.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parse_node.o parse_node.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parse_oper.o parse_oper.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parse_relation.o parse_relation.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parse_type.o parse_type.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parse_coerce.o parse_coerce.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o parse_target.o parse_target.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o scan.o scan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I../../../src/include -c -o scansup.o scansup.c\n/usr/libexec/elf/ld -r -o SUBSYS.o analyze.o gram.o keywords.o parser.o parse_agg.o parse_clause.o parse_expr.o parse_func.o parse_node.o parse_oper.o parse_relation.o parse_type.o parse_coerce.o parse_target.o scan.o scansup.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/parser'\ngmake -C commands all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/commands'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o async.o async.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o creatinh.o creatinh.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o command.o command.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o comment.o comment.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o copy.o copy.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o indexcmds.o indexcmds.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o define.o define.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o remove.o remove.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o rename.o rename.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o vacuum.o vacuum.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o analyze.o analyze.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o view.o view.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o cluster.o cluster.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o explain.o explain.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o sequence.o sequence.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o trigger.o trigger.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o user.o user.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o proclang.o proclang.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes 
-Wmissing-declarations -I../../../src/include -c -o dbcommands.o dbcommands.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o variable.o variable.c\n/usr/libexec/elf/ld -r -o SUBSYS.o async.o creatinh.o command.o comment.o copy.o indexcmds.o define.o remove.o rename.o vacuum.o analyze.o view.o cluster.o explain.o sequence.o trigger.o user.o proclang.o dbcommands.o variable.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/commands'\ngmake -C executor all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/executor'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o execAmi.o execAmi.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o execFlatten.o execFlatten.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o execJunk.o execJunk.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o execMain.o execMain.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o execProcnode.o execProcnode.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o execQual.o execQual.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o execScan.o execScan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o execTuples.o execTuples.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o execUtils.o execUtils.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o functions.o functions.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeAppend.o nodeAppend.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeAgg.o nodeAgg.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeHash.o nodeHash.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeHashjoin.o nodeHashjoin.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeIndexscan.o nodeIndexscan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeMaterial.o nodeMaterial.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeMergejoin.o nodeMergejoin.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeNestloop.o nodeNestloop.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeResult.o nodeResult.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeSeqscan.o nodeSeqscan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeSetOp.o nodeSetOp.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeSort.o nodeSort.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeUnique.o nodeUnique.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeLimit.o nodeLimit.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include 
-c -o nodeGroup.o nodeGroup.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeSubplan.o nodeSubplan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeSubqueryscan.o nodeSubqueryscan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeTidscan.o nodeTidscan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o spi.o spi.c\n/usr/libexec/elf/ld -r -o SUBSYS.o execAmi.o execFlatten.o execJunk.o execMain.o execProcnode.o execQual.o execScan.o execTuples.o execUtils.o functions.o nodeAppend.o nodeAgg.o nodeHash.o nodeHashjoin.o nodeIndexscan.o nodeMaterial.o nodeMergejoin.o nodeNestloop.o nodeResult.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o nodeLimit.o nodeGroup.o nodeSubplan.o nodeSubqueryscan.o nodeTidscan.o spi.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/executor'\ngmake -C lib all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/lib'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o bit.o bit.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o hasht.o hasht.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o lispsort.o lispsort.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o stringinfo.o stringinfo.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o dllist.o dllist.c\n/usr/libexec/elf/ld -r -o SUBSYS.o bit.o hasht.o lispsort.o stringinfo.o dllist.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/lib'\ngmake -C libpq all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/libpq'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o be-fsstubs.o be-fsstubs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o auth.o auth.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o crypt.o crypt.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o hba.o hba.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o password.o password.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pqcomm.o pqcomm.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pqformat.o pqformat.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pqpacket.o pqpacket.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pqsignal.o pqsignal.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o util.o util.c\n/usr/libexec/elf/ld -r -o SUBSYS.o be-fsstubs.o auth.o crypt.o hba.o password.o pqcomm.o pqformat.o pqpacket.o pqsignal.o util.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/libpq'\ngmake -C main all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/main'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o main.o main.c\n/usr/libexec/elf/ld -r -o SUBSYS.o main.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/main'\ngmake -C nodes 
all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/nodes'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodeFuncs.o nodeFuncs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o nodes.o nodes.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o list.o list.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o copyfuncs.o copyfuncs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o equalfuncs.o equalfuncs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o makefuncs.o makefuncs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o outfuncs.o outfuncs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o readfuncs.o readfuncs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o print.o print.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o read.o read.c\n/usr/libexec/elf/ld -r -o SUBSYS.o nodeFuncs.o nodes.o list.o copyfuncs.o equalfuncs.o makefuncs.o outfuncs.o readfuncs.o print.o read.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/nodes'\ngmake -C optimizer all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/optimizer'\ngmake -C geqo SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/geqo'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_copy.o geqo_copy.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_eval.o geqo_eval.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_main.o geqo_main.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_misc.o geqo_misc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_pool.o geqo_pool.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_recombination.o geqo_recombination.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_selection.o geqo_selection.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_erx.o geqo_erx.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_pmx.o geqo_pmx.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_cx.o geqo_cx.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_px.o geqo_px.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_ox1.o geqo_ox1.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geqo_ox2.o geqo_ox2.c\n/usr/libexec/elf/ld -r -o SUBSYS.o geqo_copy.o geqo_eval.o geqo_main.o geqo_misc.o geqo_pool.o geqo_recombination.o geqo_selection.o geqo_erx.o geqo_pmx.o geqo_cx.o geqo_px.o geqo_ox1.o geqo_ox2.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/geqo'\ngmake -C path SUBSYS.o\ngmake[4]: Entering 
directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/path'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o allpaths.o allpaths.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o clausesel.o clausesel.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o costsize.o costsize.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o indxpath.o indxpath.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o joinpath.o joinpath.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o joinrels.o joinrels.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o orindxpath.o orindxpath.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o pathkeys.o pathkeys.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o tidpath.o tidpath.c\n/usr/libexec/elf/ld -r -o SUBSYS.o allpaths.o clausesel.o costsize.o indxpath.o joinpath.o joinrels.o orindxpath.o pathkeys.o tidpath.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/path'\ngmake -C plan SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/plan'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o createplan.o createplan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o initsplan.o initsplan.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o planmain.o planmain.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o planner.o planner.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o setrefs.o setrefs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o subselect.o subselect.c\n/usr/libexec/elf/ld -r -o SUBSYS.o createplan.o initsplan.o planmain.o planner.o setrefs.o subselect.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/plan'\ngmake -C prep SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/prep'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o prepqual.o prepqual.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o preptlist.o preptlist.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o prepunion.o prepunion.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o prepkeyset.o prepkeyset.c\n/usr/libexec/elf/ld -r -o SUBSYS.o prepqual.o preptlist.o prepunion.o prepkeyset.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/prep'\ngmake -C util SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/util'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o restrictinfo.o restrictinfo.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o clauses.o clauses.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations 
-I../../../../src/include -c -o plancat.o plancat.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o joininfo.o joininfo.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o pathnode.o pathnode.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o relnode.o relnode.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o tlist.o tlist.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o var.o var.c\n/usr/libexec/elf/ld -r -o SUBSYS.o restrictinfo.o clauses.o plancat.o joininfo.o pathnode.o relnode.o tlist.o var.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/optimizer/util'\n/usr/libexec/elf/ld -r -o SUBSYS.o geqo/SUBSYS.o path/SUBSYS.o plan/SUBSYS.o prep/SUBSYS.o util/SUBSYS.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/optimizer'\ngmake -C port all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/port'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o dynloader.o dynloader.c\ndynloader.c: In function `BSD44_derived_dlsym':\ndynloader.c:85: warning: unused variable `buf'\n/usr/libexec/elf/ld -r -o SUBSYS.o dynloader.o \ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/port'\ngmake -C postmaster all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/postmaster'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o postmaster.o postmaster.c\n/usr/libexec/elf/ld -r -o SUBSYS.o postmaster.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/postmaster'\ngmake -C regex all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/regex'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -DPOSIX_MISTAKE -c -o regcomp.o regcomp.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -DPOSIX_MISTAKE -c -o regerror.o regerror.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -DPOSIX_MISTAKE -c -o regexec.o regexec.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -DPOSIX_MISTAKE -c -o regfree.o regfree.c\n/usr/libexec/elf/ld -r -o SUBSYS.o regcomp.o regerror.o regexec.o regfree.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/regex'\ngmake -C rewrite all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/rewrite'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o rewriteRemove.o rewriteRemove.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o rewriteDefine.o rewriteDefine.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o rewriteHandler.o rewriteHandler.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o rewriteManip.o rewriteManip.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o rewriteSupport.o rewriteSupport.c\n/usr/libexec/elf/ld -r -o SUBSYS.o rewriteRemove.o rewriteDefine.o rewriteHandler.o rewriteManip.o rewriteSupport.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/rewrite'\ngmake -C storage all\ngmake[3]: Entering directory 
`/home/ler/pg-dev/pgsql/src/backend/storage'\ngmake -C buffer SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/storage/buffer'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o buf_table.o buf_table.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o buf_init.o buf_init.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o bufmgr.o bufmgr.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o freelist.o freelist.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o localbuf.o localbuf.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o s_lock.o s_lock.c\n/usr/libexec/elf/ld -r -o SUBSYS.o buf_table.o buf_init.o bufmgr.o freelist.o localbuf.o s_lock.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/storage/buffer'\ngmake -C file SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/storage/file'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o fd.o fd.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o buffile.o buffile.c\n/usr/libexec/elf/ld -r -o SUBSYS.o fd.o buffile.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/storage/file'\ngmake -C ipc SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/storage/ipc'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o ipc.o ipc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o ipci.o ipci.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o shmem.o shmem.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o shmqueue.o shmqueue.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o sinval.o sinval.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o sinvaladt.o sinvaladt.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o spin.o spin.c\n/usr/libexec/elf/ld -r -o SUBSYS.o ipc.o ipci.o shmem.o shmqueue.o sinval.o sinvaladt.o spin.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/storage/ipc'\ngmake -C large_object SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/storage/large_object'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o inv_api.o inv_api.c\n/usr/libexec/elf/ld -r -o SUBSYS.o inv_api.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/storage/large_object'\ngmake -C lmgr SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/storage/lmgr'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o lmgr.o lmgr.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o lock.o lock.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o proc.o proc.c\n/usr/libexec/elf/ld -r -o SUBSYS.o lmgr.o lock.o proc.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/storage/lmgr'\ngmake -C page SUBSYS.o\ngmake[4]: 
Entering directory `/home/ler/pg-dev/pgsql/src/backend/storage/page'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o bufpage.o bufpage.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o itemptr.o itemptr.c\n/usr/libexec/elf/ld -r -o SUBSYS.o bufpage.o itemptr.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/storage/page'\ngmake -C smgr SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/storage/smgr'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o md.o md.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o mm.o mm.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o smgr.o smgr.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o smgrtype.o smgrtype.c\n/usr/libexec/elf/ld -r -o SUBSYS.o md.o mm.o smgr.o smgrtype.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/storage/smgr'\n/usr/libexec/elf/ld -r -o SUBSYS.o buffer/SUBSYS.o file/SUBSYS.o ipc/SUBSYS.o large_object/SUBSYS.o lmgr/SUBSYS.o page/SUBSYS.o smgr/SUBSYS.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/storage'\ngmake -C tcop all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/tcop'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o dest.o dest.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o fastpath.o fastpath.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o postgres.o postgres.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o pquery.o pquery.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o utility.o utility.c\n/usr/libexec/elf/ld -r -o SUBSYS.o dest.o fastpath.o postgres.o pquery.o utility.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/tcop'\ngmake -C utils all\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -c -o fmgrtab.o fmgrtab.c\ngmake -C adt SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/adt'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o acl.o acl.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o arrayfuncs.o arrayfuncs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o arrayutils.o arrayutils.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o bool.o bool.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o cash.o cash.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o char.o char.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o date.o date.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o datetime.o datetime.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o datum.o datum.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes 
-Wmissing-declarations -I../../../../src/include -c -o float.o float.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o format_type.o format_type.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geo_ops.o geo_ops.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o geo_selfuncs.o geo_selfuncs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o int.o int.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o int8.o int8.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o like.o like.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o misc.o misc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o nabstime.o nabstime.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o name.o name.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o not_in.o not_in.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o numeric.o numeric.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o numutils.o numutils.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o oid.o oid.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o oracle_compat.o oracle_compat.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o regexp.o regexp.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o regproc.o regproc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o ruleutils.o ruleutils.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o selfuncs.o selfuncs.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o sets.o sets.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o tid.o tid.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o timestamp.o timestamp.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o varbit.o varbit.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o varchar.o varchar.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o varlena.o varlena.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o version.o version.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o network.o network.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o mac.o mac.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o inet_net_ntop.o inet_net_ntop.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o inet_net_pton.o inet_net_pton.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include 
-c -o ri_triggers.o ri_triggers.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o pg_lzcompress.o pg_lzcompress.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o pg_locale.o pg_locale.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o formatting.o formatting.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o ascii.o ascii.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o quote.o quote.c\n/usr/libexec/elf/ld -r -o SUBSYS.o acl.o arrayfuncs.o arrayutils.o bool.o cash.o char.o date.o datetime.o datum.o float.o format_type.o geo_ops.o geo_selfuncs.o int.o int8.o like.o misc.o nabstime.o name.o not_in.o numeric.o numutils.o oid.o oracle_compat.o regexp.o regproc.o ruleutils.o selfuncs.o sets.o tid.o timestamp.o varbit.o varchar.o varlena.o version.o network.o mac.o inet_net_ntop.o inet_net_pton.o ri_triggers.o pg_lzcompress.o pg_locale.o formatting.o ascii.o quote.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/adt'\ngmake -C cache SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/cache'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o catcache.o catcache.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o inval.o inval.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o rel.o rel.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o relcache.o relcache.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o syscache.o syscache.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o lsyscache.o lsyscache.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o fcache.o fcache.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o temprel.o temprel.c\n/usr/libexec/elf/ld -r -o SUBSYS.o catcache.o inval.o rel.o relcache.o syscache.o lsyscache.o fcache.o temprel.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/cache'\ngmake -C error SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/error'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o assert.o assert.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o elog.o elog.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o exc.o exc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o excabort.o excabort.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o excid.o excid.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o format.o format.c\n/usr/libexec/elf/ld -r -o SUBSYS.o assert.o elog.o exc.o excabort.o excid.o format.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/error'\ngmake -C fmgr SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/fmgr'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations 
-I../../../../src/include -c -o dfmgr.o dfmgr.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o fmgr.o fmgr.c\n/usr/libexec/elf/ld -r -o SUBSYS.o dfmgr.o fmgr.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/fmgr'\ngmake -C hash SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/hash'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o dynahash.o dynahash.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o hashfn.o hashfn.c\n/usr/libexec/elf/ld -r -o SUBSYS.o dynahash.o hashfn.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/hash'\ngmake -C init SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/init'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o findbe.o findbe.c\nIn file included from findbe.c:14:\n/usr/include/grp.h:58: warning: parameter names (without types) in function declaration\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o globals.o globals.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o miscinit.o miscinit.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o postinit.o postinit.c\n/usr/libexec/elf/ld -r -o SUBSYS.o findbe.o globals.o miscinit.o postinit.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/init'\ngmake -C misc SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/misc'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o database.o database.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o superuser.o superuser.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o guc.o guc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o guc-file.o guc-file.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o ps_status.o ps_status.c\n/usr/libexec/elf/ld -r -o SUBSYS.o database.o superuser.o guc.o guc-file.o ps_status.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/misc'\ngmake -C mmgr SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/mmgr'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o aset.o aset.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o mcxt.o mcxt.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o portalmem.o portalmem.c\n/usr/libexec/elf/ld -r -o SUBSYS.o aset.o mcxt.o portalmem.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/mmgr'\ngmake -C sort SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/sort'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o logtape.o logtape.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o tuplesort.o tuplesort.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o tuplestore.o tuplestore.c\n/usr/libexec/elf/ld -r -o SUBSYS.o 
logtape.o tuplesort.o tuplestore.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/sort'\ngmake -C time SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o tqual.o tqual.c\n/usr/libexec/elf/ld -r -o SUBSYS.o tqual.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/time'\ngmake -C mb SUBSYS.o\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o common.o common.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o conv.o conv.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o mbutils.o mbutils.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o wchar.o wchar.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o wstrcmp.o wstrcmp.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o wstrncmp.o wstrncmp.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -c -o big5.o big5.c\n/usr/libexec/elf/ld -r -o SUBSYS.o common.o conv.o mbutils.o wchar.o wstrcmp.o wstrncmp.o big5.o\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils/mb'\n/usr/libexec/elf/ld -r -o SUBSYS.o fmgrtab.o adt/SUBSYS.o cache/SUBSYS.o error/SUBSYS.o fmgr/SUBSYS.o hash/SUBSYS.o init/SUBSYS.o misc/SUBSYS.o mmgr/SUBSYS.o sort/SUBSYS.o time/SUBSYS.o mb/SUBSYS.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend/utils'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -o postgres access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o -lssl -lcrypto -lz -lcrypt -lcompat -lm -lutil -lreadline -ltermcap -lncurses -R/home/ler/pg-test/lib -export-dynamic\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/backend'\ngmake -C include all\ngmake[2]: Entering directory `/home/ler/pg-dev/pgsql/src/include'\ngmake[2]: Nothing to be done for `all'.\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/include'\ngmake -C interfaces all\ngmake[2]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o fe-auth.o fe-auth.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o fe-connect.o fe-connect.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o fe-exec.o fe-exec.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. 
-DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o fe-misc.o fe-misc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o fe-print.o fe-print.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o fe-lobj.o fe-lobj.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o pqexpbuffer.o pqexpbuffer.c\nrm -f dllist.c && ln -s ../../../src/backend/lib/dllist.c .\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o dllist.o dllist.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o pqsignal.o pqsignal.c\nrm -f common.c && ln -s ../../../src/backend/utils/mb/common.c .\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o common.o common.c\nrm -f wchar.c && ln -s ../../../src/backend/utils/mb/wchar.c .\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -DFRONTEND -I. -DSYSCONFDIR='\"/home/ler/pg-test/etc/postgresql\"' -c -o wchar.o wchar.c\nar cq libpq.a `lorder fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o pqexpbuffer.o dllist.o pqsignal.o common.o wchar.o | tsort`\ntsort: cycle in data\ntsort: fe-connect.o\ntsort: fe-exec.o\ntsort: fe-misc.o\ntsort: cycle in data\ntsort: fe-connect.o\ntsort: fe-exec.o\ntsort: cycle in data\ntsort: fe-auth.o\ntsort: fe-connect.o\nranlib libpq.a\n/usr/libexec/elf/ld -x -shared -soname libpq.so.2 -o libpq.so.2 fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o pqexpbuffer.o dllist.o pqsignal.o common.o wchar.o -lssl -lcrypto -lcrypt -R/home/ler/pg-test/lib\nrm -f libpq.so\nln -s libpq.so.2 libpq.so\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/ecpg'\ngmake -C include all\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/ecpg/include'\ngmake[4]: Nothing to be done for `all'.\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/ecpg/include'\ngmake -C lib all\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/ecpg/lib'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../../src/include -I../../../../src/interfaces/ecpg/include -I../../../../src/interfaces/libpq -c -o execute.o execute.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../../src/include -I../../../../src/interfaces/ecpg/include -I../../../../src/interfaces/libpq -c -o typename.o typename.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../../src/include -I../../../../src/interfaces/ecpg/include -I../../../../src/interfaces/libpq -c -o descriptor.o descriptor.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../../src/include -I../../../../src/interfaces/ecpg/include -I../../../../src/interfaces/libpq -c -o data.o 
data.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../../src/include -I../../../../src/interfaces/ecpg/include -I../../../../src/interfaces/libpq -c -o error.o error.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../../src/include -I../../../../src/interfaces/ecpg/include -I../../../../src/interfaces/libpq -c -o prepare.o prepare.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../../src/include -I../../../../src/interfaces/ecpg/include -I../../../../src/interfaces/libpq -c -o memory.o memory.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../../src/include -I../../../../src/interfaces/ecpg/include -I../../../../src/interfaces/libpq -c -o connect.o connect.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../../src/include -I../../../../src/interfaces/ecpg/include -I../../../../src/interfaces/libpq -c -o misc.o misc.c\nar cq libecpg.a `lorder execute.o typename.o descriptor.o data.o error.o prepare.o memory.o connect.o misc.o | tsort`\ntsort: cycle in data\ntsort: prepare.o\ntsort: misc.o\ntsort: cycle in data\ntsort: error.o\ntsort: execute.o\ntsort: connect.o\ntsort: cycle in data\ntsort: error.o\ntsort: execute.o\ntsort: descriptor.o\ntsort: memory.o\ntsort: cycle in data\ntsort: execute.o\ntsort: descriptor.o\ntsort: error.o\ntsort: cycle in data\ntsort: execute.o\ntsort: descriptor.o\ntsort: data.o\nranlib libecpg.a\n/usr/libexec/elf/ld -x -shared -soname libecpg.so.3 -o libecpg.so.3 execute.o typename.o descriptor.o data.o error.o prepare.o memory.o connect.o misc.o -L../../../../src/interfaces/libpq -lpq -R/home/ler/pg-test/lib\nrm -f libecpg.so\nln -s libecpg.so.3 libecpg.so\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/ecpg/lib'\ngmake -C preproc all\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/ecpg/preproc'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o preproc.o preproc.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o pgc.o pgc.c\npgc.c: In function `yylex':\npgc.c:1247: warning: label `find_rule' defined but not used\npgc.l: At top level:\npgc.c:3095: warning: `yy_flex_realloc' defined but not used\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o type.o type.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o ecpg.o ecpg.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o ecpg_keywords.o ecpg_keywords.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 
-DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o output.o output.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o keywords.o keywords.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o c_keywords.o c_keywords.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o descriptor.o descriptor.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -I./../include -DMAJOR_VERSION=2 -DMINOR_VERSION=8 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/ler/pg-test/include/postgresql\\\" -c -o variable.o variable.c\ngcc -o ecpg preproc.o pgc.o type.o ecpg.o ecpg_keywords.o output.o keywords.o c_keywords.o ../lib/typename.o descriptor.o variable.o -lssl -lcrypto -lz -lcrypt -lcompat -lm -lutil -lreadline -ltermcap -lncurses -R/home/ler/pg-test/lib\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/ecpg/preproc'\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/ecpg'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpgeasy'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -I../../../src/interfaces/libpq -c -o libpgeasy.o libpgeasy.c\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -I../../../src/interfaces/libpq -c -o halt.o halt.c\nar cq libpgeasy.a `lorder libpgeasy.o halt.o | tsort`\nranlib libpgeasy.a\n/usr/libexec/elf/ld -x -shared -soname libpgeasy.so.2 -o libpgeasy.so.2 libpgeasy.o halt.o -L../../../src/interfaces/libpq -lpq -lcrypt -R/home/ler/pg-test/lib\nrm -f libpgeasy.so\nln -s libpgeasy.so.2 libpgeasy.so\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpgeasy'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq++'\nc++ -O2 -Wall -fpic -DPIC -I../../../src/include -I../../../src/interfaces/libpq -c -o pgconnection.o pgconnection.cc\nc++ -O2 -Wall -fpic -DPIC -I../../../src/include -I../../../src/interfaces/libpq -c -o pgdatabase.o pgdatabase.cc\nc++ -O2 -Wall -fpic -DPIC -I../../../src/include -I../../../src/interfaces/libpq -c -o pgtransdb.o pgtransdb.cc\nc++ -O2 -Wall -fpic -DPIC -I../../../src/include -I../../../src/interfaces/libpq -c -o pgcursordb.o pgcursordb.cc\nc++ -O2 -Wall -fpic -DPIC -I../../../src/include -I../../../src/interfaces/libpq -c -o pglobject.o pglobject.cc\nar cq libpq++.a `lorder pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o pglobject.o | tsort`\nranlib libpq++.a\n/usr/libexec/elf/ld -x -shared -soname libpq++.so.3 -o libpq++.so.3 pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o pglobject.o -L../../../src/interfaces/libpq -lpq -R/home/ler/pg-test/lib\nrm -f libpq++.so\nln -s libpq++.so.3 libpq++.so\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq++'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpgtcl'\ngmake -C ../../../src/interfaces/libpq all\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ngmake[4]: Nothing to be done for 
`all'.\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPIC -I../../../src/include -I../../../src/interfaces/libpq -c -o pgtcl.o pgtcl.c\nIn file included from pgtcl.c:19:\nlibpgtcl.h:19: tcl.h: No such file or directory\nIn file included from pgtcl.c:20:\npgtclCmds.h:17: tcl.h: No such file or directory\nIn file included from pgtcl.c:19:\nlibpgtcl.h:21: syntax error before `*'\nlibpgtcl.h:22: syntax error before `*'\nIn file included from pgtcl.c:20:\npgtclCmds.h:35: syntax error before `Tcl_Interp'\npgtclCmds.h:71: syntax error before `cData'\npgtclCmds.h:73: syntax error before `cData'\npgtclCmds.h:75: syntax error before `cData'\npgtclCmds.h:77: syntax error before `cData'\npgtclCmds.h:79: syntax error before `cData'\npgtclCmds.h:81: syntax error before `cData'\npgtclCmds.h:83: syntax error before `cData'\npgtclCmds.h:85: syntax error before `cData'\npgtclCmds.h:87: syntax error before `cData'\npgtclCmds.h:89: syntax error before `cData'\npgtclCmds.h:91: syntax error before `cData'\npgtclCmds.h:93: syntax error before `cData'\npgtclCmds.h:95: syntax error before `cData'\npgtclCmds.h:97: syntax error before `cData'\npgtclCmds.h:99: syntax error before `cData'\npgtclCmds.h:101: syntax error before `cData'\npgtclCmds.h:103: syntax error before `cData'\nIn file included from pgtcl.c:21:\npgtclId.h:18: syntax error before `*'\npgtclId.h:37: syntax error before `*'\npgtclId.h:39: syntax error before `cData'\npgtclId.h:40: syntax error before `cData'\npgtclId.h:41: syntax error before `cData'\npgtclId.h:42: syntax error before `*'\npgtclId.h:43: syntax error before `*'\npgtclId.h:44: syntax error before `*'\npgtclId.h:45: syntax error before `*'\npgtclId.h:49: syntax error before `clientData'\npgtclId.h:63: syntax error before `Pg_ConnType'\npgtclId.h:63: warning: type defaults to `int' in declaration of `Pg_ConnType'\npgtclId.h:63: warning: data definition has no type or storage class\npgtcl.c:30: syntax error before `*'\npgtcl.c:31: warning: no previous prototype for `Pgtcl_Init'\npgtcl.c: In function `Pgtcl_Init':\npgtcl.c:43: warning: implicit declaration of function `Tcl_CreateCommand'\npgtcl.c:43: `interp' undeclared (first use in this function)\npgtcl.c:43: (Each undeclared identifier is reported only once\npgtcl.c:43: for each function it appears in.)\npgtcl.c:46: `ClientData' undeclared (first use in this function)\npgtcl.c:46: syntax error before `0'\npgtcl.c:51: syntax error before `0'\npgtcl.c:56: syntax error before `0'\npgtcl.c:61: syntax error before `0'\npgtcl.c:66: syntax error before `0'\npgtcl.c:71: syntax error before `0'\npgtcl.c:76: syntax error before `0'\npgtcl.c:81: syntax error before `0'\npgtcl.c:86: syntax error before `0'\npgtcl.c:91: syntax error before `0'\npgtcl.c:96: syntax error before `0'\npgtcl.c:101: syntax error before `0'\npgtcl.c:106: syntax error before `0'\npgtcl.c:111: syntax error before `0'\npgtcl.c:116: syntax error before `0'\npgtcl.c:121: syntax error before `0'\npgtcl.c:126: syntax error before `0'\npgtcl.c:128: warning: implicit declaration of function `Tcl_PkgProvide'\npgtcl.c:130: `TCL_OK' undeclared (first use in this function)\npgtcl.c:131: warning: control reaches end of non-void function\npgtcl.c: At top level:\npgtcl.c:135: syntax error before `*'\npgtcl.c:136: warning: no previous prototype for `Pgtcl_SafeInit'\npgtcl.c: In function `Pgtcl_SafeInit':\npgtcl.c:137: `interp' undeclared (first use in this function)\ngmake[3]: *** [pgtcl.o] 
Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpgtcl'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 26 Nov 2000 12:33:16 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "Larry Rosenman writes:\n\n> > Fixed\n> nope. Still dies in the same place in the same way. The configure\n> input I gave was:\n> \n> ./configure --prefix=/home/ler/pg-test --enable-syslog \\\n> \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> \t--with-openssl \\\n> \t--with-includes=\"/usr/local/include/tcl8.3,/usr/local/include/tk8.3\" \\\n> \t--with-tcl \\\n> \t--with-tclconfig=/usr/local/lib/tcl8.3 \\\n> \t--with-tkconfig=/usr/local/lib/tk8.3\n\nThat is *not* the configure input you gave last time, nor is it valid. \n(Hint: line 4)\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 26 Nov 2000 19:47:33 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" }, { "msg_contents": "* Peter Eisentraut <[email protected]> [001126 14:30]:\n> Larry Rosenman writes:\n> \n> > > Fixed\n> > nope. Still dies in the same place in the same way. The configure\n> > input I gave was:\n> > \n> > ./configure --prefix=/home/ler/pg-test --enable-syslog \\\n> > \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> > \t--with-openssl \\\n> > \t--with-includes=\"/usr/local/include/tcl8.3,/usr/local/include/tk8.3\" \\\n> > \t--with-tcl \\\n> > \t--with-tclconfig=/usr/local/lib/tcl8.3 \\\n> > \t--with-tkconfig=/usr/local/lib/tk8.3\n> \n> That is *not* the configure input you gave last time, nor is it valid. \n> (Hint: line 4)\nooops. I had fixed the output, but had tried, pre-fix, this way.\n\nWorks fine now. \n\nSorry.\n> \n> -- \n> Peter Eisentraut [email protected] http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 26 Nov 2000 16:36:03 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tcl/FreeBSD 4.2-STABLE, multiple TCL versions installed" } ]
[ { "msg_contents": "I looked over the last discussion of selecting IPC keys for shared memory\nand semaphores (pghackers thread \"shmem_seq may be a bad idea\" starting\n4/30/00). There were some good ideas there, but the discussion still\nassumed that there would be only one postmaster running on a given port\nnumber on a system, so that the port number is an adequate unique\nidentifier to generate IPC keys from. That assumption has been broken\nby the UUNET \"virtual hosting\" patch; furthermore, Joel Burton's recent\ntale of woe reminds us that the interlock that keeps two postmasters from\nstarting on the same port number is not bulletproof anyway. So, here is\na new proposal that still works when multiple postmasters share a port\nnumber.\n\nIt's nice to generate IPC keys that use the port number as high-order\ndigits, since in typical cases that makes it much easier to tell which IPC\nobjects belong to which postmaster. (I still dislike ftok-generated keys\nfor the same reasons I enumerated before: they don't guarantee uniqueness,\nso they don't actually simplify life at all; and they do make it hard to\ntell which IPC object is which.)\n\nWhat we must be able to do is cope with collisions in selected key\nnumbers. The collision may be against a key number belonging to another\napplication, or one belonging to another still-active postmaster, or one\nbelonging to a dead postmaster, or one belonging to a previous reset cycle\nof the current postmaster. We want to detect the latter two cases and\nattempt to free the no-longer-used IPC object, without risking breaking\nthings in the first two cases. If we cannot free the IPC object then we\nmust move on to a new key number and try again.\n\nTo identify shmem segments reliably, I propose we adopt a convention that\nthe first word of every Postgres shmem segment contain a magic number\n(some constant we select at random) and the second word contain the PID\nof the creating postmaster or standalone backend. The magic number will\nhelp guard against misidentifying segments belonging to other applications.\n\nPseudocode for allocating a new shmem segment is as follows:\n\n// Do this during startup or at the beginning of postmaster reset:\nNextShmemSegID := port# * 1000;\n\n// Do this to allocate each shared memory segment:\nfor (NextShmemSegID++ ; ; NextShmemSegID++)\n{\n Attempt to create shmem seg with ID NextShmemSegID, desired size,\n and flags IPC_EXCL;\n if (successful)\n break;\n if (error is not due to key collision)\n fail;\n Attempt to attach to shmem seg with ID NextShmemSegID;\n if (unsuccessful)\n continue; // segment is some other app's\n if (first word is not correct magic number)\n detach and continue; // segment is some other app's\n if (second word is PID of some existing process other than me)\n detach and continue; // segment is some other postmaster's\n // segment appears to be from a dead postmaster or my previous cycle,\n // so try to get rid of it\n detach from segment;\n try to delete segment;\n if (unsuccessful)\n continue; // segment does not belong to Postgres user\n Attempt to create shmem seg with ID NextShmemSegID, desired size,\n and flags IPC_EXCL;\n if (successful)\n break;\n // Can only get here if some other process just grabbed the same\n // shmem key. 
Let him have that one, loop around to try another.\n}\n// NextShmemSegID is ID of successfully created segment;\n// attach to it and set the PID and magic-number words, IN THAT ORDER.\n\n\nNote that at each postmaster reset, we restart the NextShmemSegID\ncounter. Therefore, during a reset we will normally find the same\nshmem keys we used on the last cycle, free them, and reuse them.\n\nThe magic-number word is not really necessary; it just improves the\nodds that we won't clobber some other app's shared mem. To get into\ntrouble that way, the other app would have to (a) be running as the\nsame user as Postgres (else we'll not be able to delete its shmem);\n(b) use one of the same shmem keys as we do; and (c) have the magic\nnumber as the value of the first word in its shmem. (a) and (b)\nare already pretty unlikely, but I like the extra bit of assurance.\n\nWith this scheme we are not dependent at all on the assumption of\ndifferent postmasters having different port numbers. Running multiple\npostmasters on the same port number has no consequences worse than\nslightly slowing down postmaster startup while we search for currently\nunused shmem keys.\n\nThis scheme also fixes our problems with dying if there is an existing\nshmem segment of the right key but wrong size, as can happen after a\nversion upgrade or change of -N or -B parameters. Since the scheme always\ndeletes and recreates an old segment, rather than trying to use it as-is,\nit handles size changes automatically.\n\n\nThe exact same logic can be applied to assignment of IPC semaphore\nsets, but there is a small difficulty to be resolved: where do we\nput the identification information (magic number and creator's PID)?\nI propose that we allocate one extra semaphore in each semaphore set\n--- ie, 17 per set instead of 16 --- to hold this info. This extra\nsemaphore is never touched during normal operations. During creation\nof a semaphore set, the creating process does\n semctl(semid, 16, SETVAL, PGSEMMAGIC-1);\nfollowed by a semop() increment of that semaphore. This leaves the\nextra semaphore with a count of PGSEMMAGIC and a sempid referencing\nthe creating process. These values can be read nondestructively\nby other postmasters using semctl(). We will attempt to free an old\nsemaphore set only if it has exactly 17 semaphores and the last one\nhas the right count and a sempid that doesn't refer to another live\nprocess.\n\nWe have to assign PGSEMMAGIC small enough to ensure that it won't fall\nfoul of SEMVMX, but that probably isn't a big problem. A more serious\npotential portability issue is that some implementations might not\nsupport the semctl(GETPID) operation (ie, get PID of last process that\ndid a semop() on that semaphore). It seems like a pretty basic part\nof the SysV semaphore functionality to me, but ... Anyone know of any\nplatforms where that's missing?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 Nov 2000 21:55:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal for fixing IPC key assignment" } ]
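
For concreteness, the semaphore-marking protocol above reduces to a handful of SysV calls. Here is a minimal C sketch, not committed code: the PGSEMMAGIC value and the helper names are invented for illustration.

#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

#define PGSEMMAGIC    537	/* illustrative value; must stay below SEMVMX */
#define PGSEMSETSIZE  17	/* 16 working semaphores plus the marker */

/* many platforms require the caller to declare union semun */
union semun { int val; struct semid_ds *buf; unsigned short *array; };

/* Stamp a freshly created set: leave the extra semaphore with count
 * PGSEMMAGIC and sempid equal to the creating process's PID. */
static void stamp_sem_set(int semid)
{
	union semun arg;
	struct sembuf op;

	arg.val = PGSEMMAGIC - 1;
	semctl(semid, PGSEMSETSIZE - 1, SETVAL, arg);
	op.sem_num = PGSEMSETSIZE - 1;
	op.sem_op = 1;		/* count becomes PGSEMMAGIC, sempid becomes our PID */
	op.sem_flg = 0;
	semop(semid, &op, 1);
}

/* Nondestructive probe: does an existing set look like an orphaned
 * Postgres set that is safe to delete? */
static int sem_set_is_orphaned(int semid)
{
	int   val = semctl(semid, PGSEMSETSIZE - 1, GETVAL);
	pid_t pid = semctl(semid, PGSEMSETSIZE - 1, GETPID);

	if (val != PGSEMMAGIC)
		return 0;	/* marker count wrong: not a Postgres set */
	if (pid > 0 && pid != getpid() &&
		(kill(pid, 0) == 0 || errno == EPERM))
		return 0;	/* creator still running: leave the set alone */
	return 1;		/* looks like a dead postmaster's leftover */
}

A real implementation would first confirm via semctl(semid, 0, IPC_STAT, ...) that the set holds exactly PGSEMSETSIZE semaphores before trusting the marker.
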
[ { "msg_contents": "When locale is enabled, we have always had a problem using an index\nwith:\n\n\tcol LIKE 'abc%'\n\nWe need to make this:\n\n\tcol LIKE 'abc%' AND\n\tcol >= \"abc\" AND\n\tcol < \"abd\"\n\nbut when locale is enabled, we can't be sure what letter is greater than\n'c' in this case.\n\nWhy don't we just spin through all 255 locale values, and find the\nlowest value that is greater than comparison target. It takes only 255\ncomparisons, which is certainly faster than not using an index with\nLIKE.\n\nIt is so simple, I don't know why I didn't think of it before. If\nperformance is a problem, we can do it once in the backend or even in\nthe postmaster and keep the collation ordering in a 256-byte array. We\nmay even be able to handle multi-byte in this way, though with 256^2, we\nwould need to do it once on postmaster startup.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 26 Nov 2000 01:12:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE optimization and locale" }, { "msg_contents": "Bruce Momjian writes:\n\n> Why don't we just spin through all 255 locale values, and find the\n> lowest value that is greater than comparison target.\n\nThe issue is not that the 255 extended ASCII characters have a different\nordering in various locales (although that's part of it). The real\nproblem lies with multi-character collating elements, context dependent\ncollation order, multi-pass sorting algorithms, etc. I'm almost convinced\nthat it is not possible to do any such optimization as we had for the most\ngeneral case.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 26 Nov 2000 11:03:43 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE optimization and locale" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The real\n> problem lies with multi-character collating elements, context dependent\n> collation order, multi-pass sorting algorithms, etc. I'm almost convinced\n> that it is not possible to do any such optimization as we had for the most\n> general case.\n\nIt's not impossible, surely. The question is whether it's practical,\nparticularly from a manpower and maintenance standpoint. We do not\nwant to build something that needs explicit knowledge of a ton of\nlocale-specific special cases.\n\nThe core problem is: given a string \"foo\", find a string \"fop\" that\nis greater than any possible extension \"foobar\" of \"foo\". We need\nnot find the least such string (else it would indeed be a hard\nproblem), just a reasonably close upper bound. The algorithm we have\nin 7.0.* increments the last byte(s) of \"foo\" until it finds\nsomething greater than \"foo\". That handles collation orders that are\ndifferent from numerical order, but it still breaks down in the cases\nPeter mentions.\n\nOne variant I've been wondering about is to test a candidate bound\nstring against not only \"foo\", but all single-character extensions of\n\"foo\", ie, \"foo\\001\" through \"foo\\255\". That would catch situations\nlike the one most recently complained of, where the last character\nof the proposed bound string is just a noise-character in dictionary\norder. But I'm afraid it's still not good enough to catch all cases\n... 
and it doesn't generalize to MULTIBYTE very well anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Nov 2000 13:16:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE optimization and locale " }, { "msg_contents": "> The core problem is: given a string \"foo\", find a string \"fop\" that\n> is greater than any possible extension \"foobar\" of \"foo\". We need\n> not find the least such string (else it would indeed be a hard\n> problem), just a reasonably close upper bound. The algorithm we have\n> in 7.0.* increments the last byte(s) of \"foo\" until it finds\n> something greater than \"foo\". That handles collation orders that are\n> different from numerical order, but it still breaks down in the cases\n> Peter mentions.\n\nThis increment seems sub-optimal.\n\n> \n> One variant I've been wondering about is to test a candidate bound\n> string against not only \"foo\", but all single-character extensions of\n> \"foo\", ie, \"foo\\001\" through \"foo\\255\". That would catch situations\n> like the one most recently complained of, where the last character\n> of the proposed bound string is just a noise-character in dictionary\n> order. But I'm afraid it's still not good enough to catch all cases\n> ... and it doesn't generalize to MULTIBYTE very well anyway.\n\nThis was my suggestion, to test all 255 chars and find the lowest that\nis greater than the target, but I see that multi-byte would be a\nproblem. Oh, well. I hoped some postmaster-generated lookup table\ncould fix this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 26 Nov 2000 15:43:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIKE optimization and locale" } ]
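
The incrementing algorithm described above can be sketched in C roughly as follows. This is not the actual 7.0 source, just the shape of the idea; it assumes a single-byte encoding and that setlocale(LC_COLLATE, ...) is already in effect.

#include <string.h>
#include <stdlib.h>
#include <locale.h>

/* Given a LIKE prefix, return a malloc'd candidate upper-bound string,
 * or NULL if none can be made (the planner then skips the index qual). */
static char *make_upper_bound(const char *prefix)
{
	size_t len = strlen(prefix);
	char *work = strdup(prefix);

	while (len > 0)
	{
		unsigned char *lastchar = (unsigned char *) &work[len - 1];

		/* bump the last byte until the candidate collates above the prefix */
		while (*lastchar < 0xFF)
		{
			(*lastchar)++;
			if (strcoll(work, prefix) > 0)
				return work;
		}
		/* this byte is exhausted: truncate it and retry the previous one */
		work[--len] = '\0';
	}
	free(work);
	return NULL;
}

The weakness is exactly the one raised in this thread: strcoll(work, prefix) > 0 only proves that the candidate sorts above the prefix itself; with multi-character collating elements or multi-pass rules it does not prove that the candidate sorts above every possible extension of the prefix.
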
[ { "msg_contents": "following is only program fragment, original program structure is from sample\nnamed Duwamish in ms vs.net 2002.\n\n/////////////////////////////////\nprivate const String ID_PARM\t= \"@id\"; \nprivate const String NAME_PARM\t= \"@name\"; \n\npublic UserData GetUserById(int id)\n{\n\tif ( dataAdapter == null )\n\t{\n\t\tthrow new System.ObjectDisposedException( GetType().FullName );\n\t}\n\n\tUserData userData = new UserData();\n\n\tdataAdapter.SelectCommand = GetUserByIdCommand();\n\tdataAdapter.SelectCommand.Parameters[ID_PARM].Value = id; \n\tdataAdapter.Fill(data); \n\n\treturn userData;\n}\n\nprivate SelectCommand GetUserByIdCommand()\n{\n\tif ( getUserCommand == null) \n\t{ \n\t\tselectUserByIdCommand = new SelectCommand(\"user_select_by_id\", Configuration.ConnectDB());\n\t\tselectUserByIdCommand.CommandType = CommandType.StoredProcedure;\n\n\t\tParameterCollection params = selectUserByIdCommand.Parameters; \n\t\tparams.Add(new Parameter(ID_PARM, DbType.Int32));\n\t} \n\treturn selectUserByIdCommand; \n}\n/////////////////////////////////\n\n---------------------------------\nCREATE TABLE users (\n id serial NOT NULL,\n name character varying(32) NOT NULL\n);\n---------------------------------\nCREATE TYPE user_set AS (\n\tid integer,\n\tname character varying(32)\n);\n---------------------------------\nCREATE FUNCTION user_select_by_id(\"@id\" int4)\nRETURNS SETOF user_set\nAS '\ndeclare rec record;\n\nbegin\n\n\tfor rec in\n\t\tselect * from users where id = \"@id\"\n\tloop\n\t\treturn next rec;\n\tend loop;\n\treturn;\n\nend; '\n LANGUAGE plpgsql;\n---------------------------------\n\nThanks & Regards!\n\t\t\t \nArnold.Zhu\n2004-11-26\n\n\n\n", "msg_date": "Sun, 26 Nov 2000 15:55:31 +0800", "msg_from": "\"Arnold.Zhu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to make @id or $id as parameter name in plpgsql,\n\tisit available?" } ]
[ { "msg_contents": ">We have to assign PGSEMMAGIC small enough to ensure that it won't fall\n>foul of SEMVMX, but that probably isn't a big problem. A more serious\n>potential portability issue is that some implementations might not\n>support the semctl(GETPID) operation (ie, get PID of last process that\n>did a semop() on that semaphore). It seems like a pretty basic part\n>of the SysV semaphore functionality to me, but ... Anyone know of any\n>platforms where that's missing?\n\nThe beos SysV emulation does not support this call. But, as it's done today, \nadding it should be easy.\n\n cyril\n\n \n>\n>\t\t\tregards, tom lane\n>\n\n", "msg_date": "Sun, 26 Nov 2000 11:02:27 +0100", "msg_from": "Cyril VELTER <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal for fixing IPC key assignment" } ]
[ { "msg_contents": "\n I've a problem with initdb on beos with the current tree. (The last one \nrunning clean is one month old).\n\n when I run initdb, I get the following : \n\n$ initdb -d -n\nRunning with debug mode on.\nRunning with noclean mode on. Mistakes will not be cleaned up.\n\nInitdb variables:\n PGDATA=/boot/home/pgsql/data\n datadir=/boot/home/pgsql/share\n PGPATH=/boot/home/pgsql/bin\n TEMPFILE=/tmp/initdb.9519\n MULTIBYTE=\n MULTIBYTEID=0\n POSTGRES_SUPERUSERNAME=baron\n POSTGRES_SUPERUSERID=1\n TEMPLATE1_BKI=/boot/home/pgsql/share/template1.bki\n GLOBAL_BKI=/boot/home/pgsql/share/global.bki\n TEMPLATE1_DESCR=/boot/home/pgsql/share/template1.description\n GLOBAL_DESCR=/boot/home/pgsql/share/global.description\n POSTGRESQL_CONF_SAMPLE=/boot/home/pgsql/share/postgresql.conf.sample\n PG_HBA_SAMPLE=/boot/home/pgsql/share/pg_hba.conf.sample\n PG_IDENT_SAMPLE=/boot/home/pgsql/share/pg_ident.conf.sample\nThis database system will be initialized with username \"baron\".\nThis user will own all the data files and must also own the server process.\n\nCreating directory /boot/home/pgsql/data\nCreating directory /boot/home/pgsql/data/base\nCreating directory /boot/home/pgsql/data/global\nCreating directory /boot/home/pgsql/data/pg_xlog\nCreating template1 database in /boot/home/pgsql/data/base/1\nRunning: /boot/home/pgsql/bin/postgres -boot -x1 -C -F -D/boot/home/pgsql/\ndata -d template1\nFATAL 2: InitReopen(logfile 0 seg 0) failed: No such file or directory\n\ninitdb failed.\nData directory /boot/home/pgsql/data will not be removed at user's request. \n\n any idea ?\n\n cyril \n\n", "msg_date": "Sun, 26 Nov 2000 11:20:01 +0100", "msg_from": "Cyril VELTER <[email protected]>", "msg_from_op": true, "msg_subject": "Initdb not running on beos" }, { "msg_contents": "Cyril VELTER <[email protected]> writes:\n> FATAL 2: InitReopen(logfile 0 seg 0) failed: No such file or directory\n\nDoes BeOS not support link(2) ?\n\nSee XLogFileInit() in src/backend/access/transam/xlog.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Nov 2000 00:32:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Initdb not running on beos " } ]
[ { "msg_contents": "[Cced to hackers list]\n\nJan, \n\nI have checked in the fixes you suggested. Thanks for the advice.\n--\nTatsuo Ishii\n\n> > I tried this new feature in PostgreSQL. I found one bug.\n> > Script UCS_to_8859.pl skips input lines which\n> > 1. code <0x80 or\n> > 2. ucs <0x100\n> > \n> > I think second one is not good idea because some codes in ISO8859-2\n> > have ucs <0x100 (e.g. 0xE9 - 0x00E9)\n> \n> Thank for pointing it out. I will check it as soon as possible (this\n> week I'm going to have a business trio to US).\n> --\n> Tatsuo Ishii\n", "msg_date": "Sun, 26 Nov 2000 19:46:32 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in unicode conversion ..." } ]
[ { "msg_contents": "Hi,\n\nRemember also that the GiST library has been integrated into PG, (my brother\nis doing some thesis workon that at the moment), and you can create new\nindex types relatively quickly (assuming that you understand the indexing\ntheory ;-) using this mechanism. Run a web search on GiST for more info.\n\nCurrently GiST has support for btree and rtree indexes, and possibly r+ or *\ntrees, I can't remember which, if any, and IIRC, at least a couple more.\nHowever, if you have a requirement or 3d indexing, and you have the\nknowledge available, you should be able to hack a few 3d indexes together\nquite quickly.\n\n\nCheers...\n \n\n-----Original Message-----\nFrom: Tom Lane\nTo: Franck Martin\nCc: pgsql-general; pgsql-hackers\nSent: 11-26-00 4:35 AM\nSubject: Re: [HACKERS] Indexing for geographic objects? \n\nFranck Martin <[email protected]> writes:\n> I would greatly appreciate if someone could guide me through the\n> methodology to build an index for a custom type or point me to some\n> readings where the algorithm is explained (web, book, etc...).\n\nThe Programmer's Guide chapter \"Interfacing Extensions To Indices\"\noutlines the procedure for making a new datatype indexable. It\nonly discusses the case of adding btree support for a new type,\nthough. For other index classes such as R-tree there are different\nsets of required operators, which are not as well documented but\nyou can find out by looking at code for the already-supported\ndatatypes.\n\n> I plan to use 3D geographical objects...\n\nThat's a bit hard since we don't have any indexes suitable for 3-D\ncoordinates --- the existing R-tree type is for 2-D objects. What's\nmore it assumes that coordinates are Euclidean, which is probably\nnot the model you want for geographical coordinates.\n\nIn theory you could build a new index type suitable for indexing\n3-D points, using the R-tree code as a starting point. I wouldn't\nclass it as a project suitable for a newbie however :-(.\n\nDepending on what your needs are, you might be able to get by with\nprojecting your objects into a flat 2-D coordinate system and using\nan R-tree index in that space. It'd just be approximate but that\nmight be close enough for index purposes.\n\n\t\t\tregards, tom lane\n\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you have received this email in error please notify\nthe system manager.\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses.\n\nwww.mimesweeper.com\n**********************************************************************\n\n\n\n\n\nRE: [HACKERS] Indexing for geographic objects? \n\n\nHi,\n\nRemember also that the GiST library has been integrated into PG, (my brother is doing some thesis workon that at the moment), and you can create new index types relatively quickly (assuming that you understand the indexing theory ;-) using this mechanism.  Run a web search on GiST for more info.\nCurrently GiST has support for btree and rtree indexes, and possibly r+ or * trees, I can't remember which, if any, and IIRC, at least a couple more.  
However, if you have a requirement or 3d indexing, and you have the knowledge available, you should be able to hack a few 3d indexes together quite quickly.\n\nCheers...\n \n\n-----Original Message-----\nFrom: Tom Lane\nTo: Franck Martin\nCc: pgsql-general; pgsql-hackers\nSent: 11-26-00 4:35 AM\nSubject: Re: [HACKERS] Indexing for geographic objects? \n\nFranck Martin <[email protected]> writes:\n> I would greatly appreciate if someone could guide me through the\n> methodology to build an index for a custom type or point me to some\n> readings where the algorithm is explained (web, book, etc...).\n\nThe Programmer's Guide chapter \"Interfacing Extensions To Indices\"\noutlines the procedure for making a new datatype indexable.  It\nonly discusses the case of adding btree support for a new type,\nthough.  For other index classes such as R-tree there are different\nsets of required operators, which are not as well documented but\nyou can find out by looking at code for the already-supported\ndatatypes.\n\n> I plan to use 3D geographical objects...\n\nThat's a bit hard since we don't have any indexes suitable for 3-D\ncoordinates --- the existing R-tree type is for 2-D objects.  What's\nmore it assumes that coordinates are Euclidean, which is probably\nnot the model you want for geographical coordinates.\n\nIn theory you could build a new index type suitable for indexing\n3-D points, using the R-tree code as a starting point.  I wouldn't\nclass it as a project suitable for a newbie however :-(.\n\nDepending on what your needs are, you might be able to get by with\nprojecting your objects into a flat 2-D coordinate system and using\nan R-tree index in that space.  It'd just be approximate but that\nmight be close enough for index purposes.\n\n                        regards, tom lane\n\n\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you have received this email in error please notify\nthe system manager.\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses.\n\nwww.mimesweeper.com\n**********************************************************************", "msg_date": "Sun, 26 Nov 2000 11:34:16 -0000", "msg_from": "Michael Ansley <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "I'm also interested in GiST and would be happy if somebody could provide\nworkable example. I have an idea to use GiST indices for our fulltextsearch\nsystem.\n\n\tRegards,\n\n\t\t\tOleg\nOn Sun, 26 Nov 2000, Michael Ansley wrote:\n\n> Date: Sun, 26 Nov 2000 11:34:16 -0000\n> From: Michael Ansley <[email protected]>\n> To: 'Tom Lane ' <[email protected]>, 'Franck Martin ' <[email protected]>\n> Cc: 'pgsql-general ' <[email protected]>,\n> 'pgsql-hackers ' <[email protected]>,\n> \"'[email protected]'\" <[email protected]>\n> Subject: RE: [HACKERS] Indexing for geographic objects? \n> \n> Hi,\n> \n> Remember also that the GiST library has been integrated into PG, (my brother\n> is doing some thesis workon that at the moment), and you can create new\n> index types relatively quickly (assuming that you understand the indexing\n> theory ;-) using this mechanism. 
Run a web search on GiST for more info.\n> \n> Currently GiST has support for btree and rtree indexes, and possibly r+ or *\n> trees, I can't remember which, if any, and IIRC, at least a couple more.\n> However, if you have a requirement or 3d indexing, and you have the\n> knowledge available, you should be able to hack a few 3d indexes together\n> quite quickly.\n> \n> \n> Cheers...\n> \n> \n> -----Original Message-----\n> From: Tom Lane\n> To: Franck Martin\n> Cc: pgsql-general; pgsql-hackers\n> Sent: 11-26-00 4:35 AM\n> Subject: Re: [HACKERS] Indexing for geographic objects? \n> \n> Franck Martin <[email protected]> writes:\n> > I would greatly appreciate if someone could guide me through the\n> > methodology to build an index for a custom type or point me to some\n> > readings where the algorithm is explained (web, book, etc...).\n> \n> The Programmer's Guide chapter \"Interfacing Extensions To Indices\"\n> outlines the procedure for making a new datatype indexable. It\n> only discusses the case of adding btree support for a new type,\n> though. For other index classes such as R-tree there are different\n> sets of required operators, which are not as well documented but\n> you can find out by looking at code for the already-supported\n> datatypes.\n> \n> > I plan to use 3D geographical objects...\n> \n> That's a bit hard since we don't have any indexes suitable for 3-D\n> coordinates --- the existing R-tree type is for 2-D objects. What's\n> more it assumes that coordinates are Euclidean, which is probably\n> not the model you want for geographical coordinates.\n> \n> In theory you could build a new index type suitable for indexing\n> 3-D points, using the R-tree code as a starting point. I wouldn't\n> class it as a project suitable for a newbie however :-(.\n> \n> Depending on what your needs are, you might be able to get by with\n> projecting your objects into a flat 2-D coordinate system and using\n> an R-tree index in that space. It'd just be approximate but that\n> might be close enough for index purposes.\n> \n> \t\t\tregards, tom lane\n> \n> \n> **********************************************************************\n> This email and any files transmitted with it are confidential and\n> intended solely for the use of the individual or entity to whom they\n> are addressed. If you have received this email in error please notify\n> the system manager.\n> \n> This footnote also confirms that this email message has been swept by\n> MIMEsweeper for the presence of computer viruses.\n> \n> www.mimesweeper.com\n> **********************************************************************\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 26 Nov 2000 17:55:58 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "\nOleg Bartunov <[email protected]> wrote:\n>\n> I'm also interested in GiST and would be happy if somebody could provide\n> workable example. I have an idea to use GiST indices for our fulltextsearch\n> system.\n> \n\nI have recently replied to Franck Martin in regards to this indexing\nquestion, but I didn't think the subject was popular enough for me to\ncontaminate the list(s). You prove me wrong. 
Here goes:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nTo: Franck Martin <[email protected]>\nFrom: [email protected]\nReply-to: [email protected]\nSubject: Re: [HACKERS] Indexing for geographic objects? \nIn-reply-to: <[email protected]> \nComments: In-reply-to Franck Martin <[email protected]>\n message dated \"Sat, 25 Nov 2000 10:43:16 +1300.\"\nMime-Version: 1.0 (generated by tm-edit 7.108)\nDate: Sat, 25 Nov 2000 02:56:03 -0600\n\nIt is probably possible to hook up an extension directly with the\nR-tree methods available in postgres -- if you stare at the code long\nenough and figure out how to use the correct strategies. I chose an easier\npath years ago and I am still satisfied with the results. Check out\nthe GiST -- a general access method built on top of R-tree to provide\na user-friendly interface to it and to allow indexing of more abstract\ntypes, for which straight R-tree is not directly applicable.\n\nI have a small set of complete data types, of which a couple\nillustrate the use of GiST indexing with the geometrical objects, in:\n\nhttp://wit.mcs.anl.gov/~selkovjr/pg_extensions/\n\nIf you are using a pre-7.0 postgres, grab the file contrib.tgz,\notherwise take contrib-7.0.tgz. The difference is insignificant, but\nthe pre-7.0 version will not fit the current schema. Unpack the source\ninto postgresql-*/contrib and follow instructions in the README\nfiles. The types of interest for you will be seg and cube. You will\nfind pointers to the original sources and docs in the CREDITS section\nof the README file. I also have a version of the original example code\nin pggist-patched.tgz, but I did not check if it works with current\npostgres. It should not be difficult to fix it if it doesn't -- the\nrecent development in the optimizer area made certain things\nunnecessary.\n\nYou might want to check out a working example of the segment data type at:\n\nhttp://wit.mcs.anl.gov/EMP/indexing.html\n\n(search the page for 'KM')\n\nI will be glad to help, but I would also recommend to send more\nsophisticated questions to Joe Hellerstein, the leader of the original\npostgres team that developed GiST. He was very helpful whenever I\nturned to him during the early stages of my data type project.\n\n--Gene\n\n", "msg_date": "Sat, 25 Nov 2000 02:56:03 -0600", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "Michael Ansley <[email protected]> writes:\n> Remember also that the GiST library has been integrated into PG, (my brother\n> is doing some thesis workon that at the moment),\n\nYeah?  Does it still work?\n\nSince the GIST code is not tested by any standard regress test, and is\nso poorly documented that hardly anyone can be using it, I've always\nassumed that it is probably suffering from a severe case of bit-rot.\n\nI'd love to see someone contribute documentation and regression test\ncases for it --- it's a great feature, if it works.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Nov 2000 22:32:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "Tom Lane wrote:\n> Michael Ansley <[email protected]> writes:\n> > Remember also that the GiST library has been integrated into PG, (my brother\n> > is doing some thesis workon that at the moment),\n> \n> Yeah? Does it still work?\n\nYou bet. One would otherwise be hearing from me. 
I depend on it quite\nheavily and I am checking with almost every release. I am now current\nwith 7.0.2 -- this time it required some change, although not in the C\ncode. And that's pretty amazing: I was only screwed once since\npostgres95 -- by a beta version I don't remember now; then I\ncomplained and the problem was fixed. I don't even know whom I owe\nthanks for that.\n\n> Since the GIST code is not tested by any standard regress test, and is\n> so poorly documented that hardly anyone can be using it, I've always\n> assumed that it is probably suffering from a severe case of bit-rot.\n> \n> I'd love to see someone contribute documentation and regression test\n> cases for it --- it's a great feature, if it works.\n\nThe bit rot fortunately did not happen, but the documentation I\npromised Bruce many months ago is still \"in the works\" -- meaning,\nsomething interfered and I haven't had a chance to start. Like a\nfriend of mine muses all the time, \"Promise doesn't mean\nmarriage\". Boy, do I feel guilty.\n\nIt's a bit better with the testing. I am not sure how to test the\nGiST directly, but I have adapted the current version of regression\ntests for the data types that depend on it. One can find them in my\ncontrib directory, under test/ (again, it's\nhttp://wit.mcs.anl.gov/~selkovjr/pg_extensions/contrib.tgz)\n\nIt would be nice if at least one of the GiST types became a built-in\n(that would provide for more intensive testing), but I can also\nthink of the contrib code being (optionally) included into the main\nbuild and regression test trees. The top-level makefile can have a\ncouple of special targets to build and test the contribs. I believe my\nversion of the tests can be a useful example to other contributors\nwhose code is already in the source tree.\n\n--Gene\n", "msg_date": "Mon, 27 Nov 2000 12:36:42 -0600", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "[email protected] writes:\n> Tom Lane wrote:\n>> Michael Ansley <[email protected]> writes:\n>>>> Remember also that the GiST library has been integrated into PG, (my brother\n>>>> is doing some thesis work on that at the moment),\n>> \n>> Yeah? Does it still work?\n\n> You bet. One would otherwise be hearing from me. I depend on it quite\n> heavily and I am checking with almost every release.\n\nThat's very good to hear! I was not aware that anyone was banging on it.\n\nIt seems like it would be a fine idea to adopt your stuff at least into\nthe contrib part of the distribution, maybe even (or eventually) into\nthe main release. I think we could probably make it part of the regress\ntests even if it's contrib --- there's precedent, as regress already\nuses some contrib stuff.\n\nDo you have any problem with releasing your stuff under the Postgres\ndistribution terms (BSD license)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Nov 2000 14:07:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "Tom Lane wrote:\n\n> Do you have any problem with releasing your stuff under the Postgres\n> distribution terms (BSD license)?\n\nNo, I don't see any problem with the BSD license, or any other\nlicense, for that matter. I just had some reservations about releasing\nstuff that was far from perfect, and it took me a while to realize\nit could be useful as it is for some, and serve as a good starting\npoint for others. 
Now I wonder, what does it take to be in contrib?\n\n> there's precedent, as regress already\n> uses some contrib stuff.\n\nI'd be curious to find out what that stuff is and how it's done. \n\n--Gene\n", "msg_date": "Mon, 27 Nov 2000 18:53:10 -0600", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "[email protected] writes:\n> Tom Lane wrote:\n>> Do you have any problem with releasing your stuff under the Postgres\n>> distribution terms (BSD license)?\n\n> No, I don't see any problem with the BSD license, or any other\n> license, for that matter. I just had some reservations about releasing\n> stuff that was far from perfect, and it took me a while to realize\n> it could be useful as it is for some, and serve as a good starting\n> point for others. Now I wonder, what does it take to be in contrib?\n\nJust contributing it ;-), which I take the above as permission to do.\nWhen I come up for air from the IPC-hacking I'm doing, I'll grab your\ntarball and see about adding it as a contrib module.\n\nMany thanks!\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 00:07:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "Hi,\n\nWe've done some work with GiST indices and found a little problem\nwith optimizer. The problem could be reproduced with Gene's code\n(link is in original message below). The test data and SQL I could send - \nit's just a 52Kb gzipped file. What is the reason for the optimizer to decide\nthat a sequential scan is better (look below for the numbers)?\nExplicitly disabling seq scan gave much better timings.\n\n\tRegards,\n\t\t\n\t\tOleg\n\n\ntest=# explain select * from test where s @ '1.05 .. 3.95';\nNOTICE: QUERY PLAN:\n\nSeq Scan on test (cost=0.00..184.01 rows=5000 width=12)\n\nEXPLAIN\n\n% ./bench.pl -d test -b 100\ntotal: 3.19 sec; number: 100; for one: 0.032 sec; found 18 docs\n\ntest=# set enable_seqscan = off;\nSET VARIABLE\ntest=# explain select * from test where s @ '1.05 .. 3.95';\nNOTICE: QUERY PLAN:\n\nIndex Scan using test_seg_ix on test (cost=0.00..369.42 rows=5000 width=12)\n\nEXPLAIN\n% ./bench.pl -d test -b 100 -i\ntotal: 1.71 sec; number: 100; for one: 0.017 sec; found 18 docs\n\n\nOn Mon, 27 Nov 2000 [email protected] wrote:\n\n> Date: Mon, 27 Nov 2000 12:36:42 -0600\n> From: [email protected]\n> To: Tom Lane <[email protected]>\n> Cc: 'pgsql-general ' <[email protected]>,\n> 'pgsql-hackers ' <[email protected]>\n> Subject: Re: [HACKERS] Indexing for geographic objects? \n> \n> Tom Lane wrote:\n> > Michael Ansley <[email protected]> writes:\n> > > Remember also that the GiST library has been integrated into PG, (my brother\n> > > is doing some thesis work on that at the moment),\n> > \n> > Yeah? Does it still work?\n> \n> You bet. One would otherwise be hearing from me. I depend on it quite\n> heavily and I am checking with almost every release. I am now current\n> with 7.0.2 -- this time it required some change, although not in the C\n> code. And that's pretty amazing: I was only screwed once since\n> postgres95 -- by a beta version I don't remember now; then I\n> complained and the problem was fixed. 
I don't even know whom I owe\n> thanks for that.\n> \n> > Since the GIST code is not tested by any standard regress test, and is\n> > so poorly documented that hardly anyone can be using it, I've always\n> > assumed that it is probably suffering from a severe case of bit-rot.\n> > \n> > I'd love to see someone contribute documentation and regression test\n> > cases for it --- it's a great feature, if it works.\n> \n> The bit rot fortunately did not happen, but the documentation I\n> promised Bruce many months ago is still \"in the works\" -- meaning,\n> something interfered and I haven't had a chance to start. Like a\n> friend of mine muses all the time, \"Promise doesn't mean\n> marriage\". Boy, do I feel guilty.\n> \n> It's a bit better with the testing. I am not sure how to test the\n> GiST directly, but I have adapted the current version of regression\n> tests for the data types that depend on it. One can find them in my\n> contrib directory, under test/ (again, it's\n> http://wit.mcs.anl.gov/~selkovjr/pg_extensions/contrib.tgz)\n> \n> It would be nice if at least one of the GiST types became a built-in\n> (that would provide for more intensive testing), but I can also\n> think of the contrib code being (optionally) included into the main\n> build and regression test trees. The top-level makefile can have a\n> couple of special targets to build and test the contribs. I believe my\n> version of the tests can be a useful example to other contributors\n> whose code is already in the source tree.\n> \n> --Gene\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 8 Dec 2000 18:16:30 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects? " }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> We've done some work with GiST indices and found a little problem\n> with optimizer.\n\n> test=# set enable_seqscan = off;\n> SET VARIABLE\n> test=# explain select * from test where s @ '1.05 .. 3.95';\n> NOTICE: QUERY PLAN:\n\n> Index Scan using test_seg_ix on test (cost=0.00..369.42 rows=5000 width=12)\n\n> EXPLAIN\n> % ./bench.pl -d test -b 100 -i\n> total: 1.71 sec; number: 100; for one: 0.017 sec; found 18 docs\n\nI'd venture that the major problem here is bogus estimated selectivities\nfor rtree/gist operators. Note the discrepancy between the estimated\nrow count and the actual (I assume the \"found 18 docs\" is the true\nnumber of rows output by the query). With an estimated row count even\nhalf that (ie, merely two orders of magnitude away from reality ;-))\nthe thing would've correctly chosen the index scan over sequential.\n\n5000 looks like a suspiciously round number ... how many rows are in\nthe table? Have you done a vacuum analyze on it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Dec 2000 10:47:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects? " }, { "msg_contents": "\njust a note here ... recently, we had a client with similar problems with\nusing index scan, where turning off seqscan did the trick ... 
we took his\ntables, loaded them into a v7.1beta1 server and it correctly comes up with\nthe index scan ...\n\nOleg, have you tried this with v7.1 yet? \n\nOn Fri, 8 Dec 2000, Tom Lane wrote:\n\n> Oleg Bartunov <[email protected]> writes:\n> > We've done some work with GiST indices and found a little problem\n> > with optimizer.\n> \n> > test=# set enable_seqscan = off;\n> > SET VARIABLE\n> > test=# explain select * from test where s @ '1.05 .. 3.95';\n> > NOTICE: QUERY PLAN:\n> \n> > Index Scan using test_seg_ix on test (cost=0.00..369.42 rows=5000 width=12)\n> \n> > EXPLAIN\n> > % ./bench.pl -d test -b 100 -i\n> > total: 1.71 sec; number: 100; for one: 0.017 sec; found 18 docs\n> \n> I'd venture that the major problem here is bogus estimated selectivities\n> for rtree/gist operators. Note the discrepancy between the estimated\n> row count and the actual (I assume the \"found 18 docs\" is the true\n> number of rows output by the query). With an estimated row count even\n> half that (ie, merely two orders of magnitude away from reality ;-))\n> the thing would've correctly chosen the index scan over sequential.\n> \n> 5000 looks like a suspiciously round number ... how many rows are in\n> the table? Have you done a vacuum analyze on it?\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 8 Dec 2000 12:19:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects? " }, { "msg_contents": "On Fri, 8 Dec 2000, Tom Lane wrote:\n\n> Date: Fri, 08 Dec 2000 10:47:37 -0500\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], 'pgsql-hackers ' <[email protected]>\n> Subject: Re: [HACKERS] Indexing for geographic objects? \n> \n> Oleg Bartunov <[email protected]> writes:\n> > We've done some work with GiST indices and found a little problem\n> > with optimizer.\n> \n> > test=# set enable_seqscan = off;\n> > SET VARIABLE\n> > test=# explain select * from test where s @ '1.05 .. 3.95';\n> > NOTICE: QUERY PLAN:\n> \n> > Index Scan using test_seg_ix on test (cost=0.00..369.42 rows=5000 width=12)\n> \n> > EXPLAIN\n> > % ./bench.pl -d test -b 100 -i\n> > total: 1.71 sec; number: 100; for one: 0.017 sec; found 18 docs\n> \n> I'd venture that the major problem here is bogus estimated selectivities\n> for rtree/gist operators. Note the discrepancy between the estimated\n> row count and the actual (I assume the \"found 18 docs\" is the true\n> number of rows output by the query). With an estimated row count even\n\nyes, 18 docs is the true number\n\n> half that (ie, merely two orders of magnitude away from reality ;-))\n> the thing would've correctly chosen the index scan over sequential.\n> \n> 5000 looks like a suspiciously round number ... how many rows are in\n> the table? Have you done a vacuum analyze on it?\n\npark-lane:~/app/pgsql/gist_problem$ wc SQL \n 10009 10049 157987 SQL\nabout 10,000 rows, \nrelevant part of script is:\n.....skipped...\n1.9039...3.5139\n1.8716...3.9317\n\\.\nCREATE INDEX test_seg_ix ON test USING gist (s);\nvacuum analyze;\n^^^^^^^^^^^^^^\nexplain select * from test where s @ '1.05 .. 3.95';\nset enable_seqscan = off;\nexplain select * from test where s @ '1.05 .. 
3.95';\n\n\tRegards,\n\t\tOleg\n\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 8 Dec 2000 20:41:32 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects? " }, { "msg_contents": "On Fri, 8 Dec 2000, The Hermit Hacker wrote:\n\n> Date: Fri, 8 Dec 2000 12:19:56 -0400 (AST)\n> From: The Hermit Hacker <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: Oleg Bartunov <[email protected]>, [email protected],\n> 'pgsql-hackers ' <[email protected]>\n> Subject: Re: [HACKERS] Indexing for geographic objects? \n> \n> \n> just a note here ... recently, we had a client with similar problems with\n> using index scan, where turning off seqscan did the trick ... we took his\n> tables, loaded them into a v7.1beta1 server and it correctly comes up with\n> the index scan ...\n> \n> Oleg, have you tried this with v7.1 yet? \n\nNot yet. Just a plain 7.0.3 release. Will play with 7.1beta.\nBut we're working in real life and need things to work in production :-)\n\n\tregards,\n\t\tOleg\n\n> \n> On Fri, 8 Dec 2000, Tom Lane wrote:\n> \n> > Oleg Bartunov <[email protected]> writes:\n> > > We've done some work with GiST indices and found a little problem\n> > > with optimizer.\n> > \n> > > test=# set enable_seqscan = off;\n> > > SET VARIABLE\n> > > test=# explain select * from test where s @ '1.05 .. 3.95';\n> > > NOTICE: QUERY PLAN:\n> > \n> > > Index Scan using test_seg_ix on test (cost=0.00..369.42 rows=5000 width=12)\n> > \n> > > EXPLAIN\n> > > % ./bench.pl -d test -b 100 -i\n> > > total: 1.71 sec; number: 100; for one: 0.017 sec; found 18 docs\n> > \n> > I'd venture that the major problem here is bogus estimated selectivities\n> > for rtree/gist operators. Note the discrepancy between the estimated\n> > row count and the actual (I assume the \"found 18 docs\" is the true\n> > number of rows output by the query). With an estimated row count even\n> > half that (ie, merely two orders of magnitude away from reality ;-))\n> > the thing would've correctly chosen the index scan over sequential.\n> > \n> > 5000 looks like a suspiciously round number ... how many rows are in\n> > the table? Have you done a vacuum analyze on it?\n> > \n> > \t\t\tregards, tom lane\n> > \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 8 Dec 2000 20:45:09 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects? " }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n>> 5000 looks like a suspiciously round number ... how many rows are in\n>> the table? 
Have you done a vacuum analyze on it?\n\n> about 10,000 rows, \n\nSo the thing is estimating 0.5 selectivity, which is a fallback for\noperators it knows nothing whatever about.\n\n[ ... digs in Selkov's scripts ... ]\n\nCREATE OPERATOR @ (\n LEFTARG = seg, RIGHTARG = seg, PROCEDURE = seg_contains,\n COMMUTATOR = '~'\n);\n\nCREATE OPERATOR ~ (\n LEFTARG = seg, RIGHTARG = seg, PROCEDURE = seg_contained,\n COMMUTATOR = '@'\n);\n\nSure 'nuff, no selectivity info attached to these declarations.\nTry adding\n\n RESTRICT = contsel, JOIN = contjoinsel\n\nto them. That's still an entirely bogus estimate, but at least\nit's a smaller bogus estimate ... small enough to select an indexscan,\none hopes (see utils/adt/geo_selfuncs.c).\n\nI have not dug through Gene's stuff to see which other indexable\noperators might be missing selectivity estimates, but I'll bet there\nare others. If you have the time to look through it and submit a\npatch, I can incorporate it into the version that will go into contrib.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Dec 2000 12:59:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects? " }, { "msg_contents": "On Fri, 8 Dec 2000, Oleg Bartunov wrote:\n\n> On Fri, 8 Dec 2000, The Hermit Hacker wrote:\n> \n> > Date: Fri, 8 Dec 2000 12:19:56 -0400 (AST)\n> > From: The Hermit Hacker <[email protected]>\n> > To: Tom Lane <[email protected]>\n> > Cc: Oleg Bartunov <[email protected]>, [email protected],\n> > 'pgsql-hackers ' <[email protected]>\n> > Subject: Re: [HACKERS] Indexing for geographic objects? \n> > \n> > \n> > just a note here ... recently, we had a client with similar problems with\n> > using index scan, where turning off seqscan did the trick ... we took his\n> > tables, loaded them into a v7.1beta1 server and it correctly comes up with\n> > the index scan ...\n> > \n> > Oleg, have you tried this with v7.1 yet? \n> \n> Not yet. Just a plain 7.0.3 release. Will play with 7.1beta.\n> But we're working in real life and need things to work in production :-)\n\nOkay, then I believe that what you are experiencing with v7.0.3 is already\nfixed in v7.1beta, based on similar results I got with some queries and\nthen tested under v7.1 ...\n\n > \n> \tregards,\n> \t\tOleg\n> \n> > \n> > On Fri, 8 Dec 2000, Tom Lane wrote:\n> > \n> > > Oleg Bartunov <[email protected]> writes:\n> > > > We've done some work with GiST indices and found a little problem\n> > > > with optimizer.\n> > > \n> > > > test=# set enable_seqscan = off;\n> > > > SET VARIABLE\n> > > > test=# explain select * from test where s @ '1.05 .. 3.95';\n> > > > NOTICE: QUERY PLAN:\n> > > \n> > > > Index Scan using test_seg_ix on test (cost=0.00..369.42 rows=5000 width=12)\n> > > \n> > > > EXPLAIN\n> > > > % ./bench.pl -d test -b 100 -i\n> > > > total: 1.71 sec; number: 100; for one: 0.017 sec; found 18 docs\n> > > \n> > > I'd venture that the major problem here is bogus estimated selectivities\n> > > for rtree/gist operators. Note the discrepancy between the estimated\n> > > row count and the actual (I assume the \"found 18 docs\" is the true\n> > > number of rows output by the query). With an estimated row count even\n> > > half that (ie, merely two orders of magnitude away from reality ;-))\n> > > the thing would've correctly chosen the index scan over sequential.\n> > > \n> > > 5000 looks like a suspiciously round number ... how many rows are in\n> > > the table? 
Have you done a vacuum analyze on it?\n> > > \n> > > \t\t\tregards, tom lane\n> > > \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 8 Dec 2000 14:03:54 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects? " }, { "msg_contents": "On Fri, 8 Dec 2000, Tom Lane wrote:\n\n> Date: Fri, 08 Dec 2000 12:59:27 -0500\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], 'pgsql-hackers ' <[email protected]>\n> Subject: Re: [HACKERS] Indexing for geographic objects? \n> \n> Oleg Bartunov <[email protected]> writes:\n> >> 5000 looks like a suspiciously round number ... how many rows are in\n> >> the table? Have you done a vacuum analyze on it?\n> \n> > about 10,000 rows, \n> \n> So the thing is estimating 0.5 selectivity, which is a fallback for\n> operators it knows nothing whatever about.\n> \n> [ ... digs in Selkov's scripts ... ]\n> \n> CREATE OPERATOR @ (\n> LEFTARG = seg, RIGHTARG = seg, PROCEDURE = seg_contains,\n> COMMUTATOR = '~'\n> );\n> \n> CREATE OPERATOR ~ (\n> LEFTARG = seg, RIGHTARG = seg, PROCEDURE = seg_contained,\n> COMMUTATOR = '@'\n> );\n> \n> Sure 'nuff, no selectivity info attached to these declarations.\n> Try adding\n> \n> RESTRICT = contsel, JOIN = contjoinsel\n> \n> to them. That's still an entirely bogus estimate, but at least\n> it's a smaller bogus estimate ... small enough to select an indexscan,\n> one hopes (see utils/adt/geo_selfuncs.c).\n\nGreat ! Now we have a better plan:\n\ntest=# explain select * from test where s @ '1.05 .. 3.95';\nNOTICE: QUERY PLAN:\n\nIndex Scan using test_seg_ix on test (cost=0.00..61.56 rows=100 width=12)\n\nEXPLAIN\n\n\n> \n> I have not dug through Gene's stuff to see which other indexable\n> operators might be missing selectivity estimates, but I'll bet there\n> are others. If you have the time to look through it and submit a\n> patch, I can incorporate it into the version that will go into contrib.\n> \n\nWe didn't look at Gene's stuff yet. Maybe Gene could find time to\ncheck his code.\n\n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 8 Dec 2000 22:03:15 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects? " }, { "msg_contents": "Tom Lane wrote: \n> Oleg Bartunov <[email protected]> writes:\n> > We've done some work with GiST indices and found a little problem\n> > with optimizer.\n> \n> > test=# set enable_seqscan = off;\n> > SET VARIABLE\n> > test=# explain select * from test where s @ '1.05 .. 
3.95';\n> > NOTICE: QUERY PLAN:\n> \n> > Index Scan using test_seg_ix on test (cost=0.00..369.42 rows=5000 width=12)\n> \n> > EXPLAIN\n> > % ./bench.pl -d test -b 100 -i\n> > total: 1.71 sec; number: 100; for one: 0.017 sec; found 18 docs\n> \n> I'd venture that the major problem here is bogus estimated selectivities\n> for rtree/gist operators.\n\nYes, the problem is, I didn't have the foggiest idea how to estimate\nselectivity, nor did I have any stats when I developed the type. Before\n7.0, I had some success using selectivity estimators of another\ndatatype (I think that was int, but I am not sure). In 7.0, most of\nthose estimators were gone and I have probably chosen the wrong ones\nor none at all, just so I could get it to work again. The performance\nwas good enough for my taste, so I have even forgotten that was an\nissue.\n\nI know, I know: 'good enough' is never good. I apologize.\n\n--Gene\n", "msg_date": "Fri, 08 Dec 2000 21:22:02 -0600", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects? " }, { "msg_contents": "Hi,\n\nwe are getting a bit close to add index support for int arrays using\nGiST interface. This will really drive up performance of our full text\nsearch fully based on postgresql. We have a problem with broken index\nand couldn't find a reason. I attached archive with sources\nfor GiST functions and test suite to show a problem - vacuum analyze\nat the end of TESTSQL should complain about broken index.\nHere is a short description:\n1. untar in contrib 7.0.*\n2. cd _intarray\n3. edit Makefile for TESTDB (name of db for test)\n4. createdb TESTDB\n5. gmake\n6. gmake install\n7. psql TESTDB < TESTSQL\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Wed, 13 Dec 2000 18:48:40 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "index support for arrays (GiST)" }, { "msg_contents": "Well,\n\nwe found an answer ourselves. Patch for 7.0.3 is included below.\nCredits to Teodor Sigaev ([email protected])\nSome comments: \n\n>From src/backend/access/gist/gist.c\n/*\n** Take a compressed entry, and install it on a page. Since we now know\n** where the entry will live, we decompress it and recompress it using\n** that knowledge (some compression routines may want to fish around\n** on the page, for example, or do something special for leaf nodes.)\n*/\n\nAfter compression, the index entry is written to disk decompressed (!),\nwhich is the reason we have the problem with a broken index!\nIt looks like other people just didn't use the index decompression function\n(at least in Gene's code the decompression function just does return) and \nthat's why this bug was not discovered. We could make a patch for \nupcoming 7.1 if hackers desire. I consider this patch as a bugfix\nnot a new feature or improvement. We got very promising results.\n\nAnother question to this code is - why gistPageAddItem does\ncompress - decompress - compress. 
It's not clear from the comment.\n\n\tBest regards,\n\n\t\tOleg\n\n-------------------------------------------------------------------------\nmaze% diff -c backend/access/gist/gist.c \nbackend/access/gist/gist.c.orig \n*** backend/access/gist/gist.c Fri Dec 15 13:03:40 2000\n--- backend/access/gist/gist.c.orig Fri Dec 15 13:00:50 2000\n***************\n*** 374,380 ****\n {\n GISTENTRY tmpcentry;\n IndexTuple itup = (IndexTuple) item;\n- OffsetNumber retval;\n \n /*\n * recompress the item given that we now know the exact page and\n--- 374,379 ----\n***************\n*** 386,400 ****\n IndexTupleSize(itup) -\nsizeof(IndexTupleData), FALSE);\n gistcentryinit(giststate, &tmpcentry, dentry->pred, r, page,\n offsetNumber, dentry->bytes, FALSE);\n! *newtup = gist_tuple_replacekey(r, tmpcentry, itup);\n! retval = PageAddItem(page, (Item) *newtup,\nIndexTupleSize(*newtup),\n! offsetNumber, flags);\n /* be tidy */\n if (tmpcentry.pred != dentry->pred\n && tmpcentry.pred != (((char *) itup) +\nsizeof(IndexTupleData)))\n pfree(tmpcentry.pred);\n \n! return (retval);\n }\n \n \n--- 385,398 ----\n IndexTupleSize(itup) -\nsizeof(IndexTupleData), FALSE);\n gistcentryinit(giststate, &tmpcentry, dentry->pred, r, page,\n offsetNumber, dentry->bytes, FALSE);\n! *newtup = gist_tuple_replacekey(r, *dentry, itup);\n /* be tidy */\n if (tmpcentry.pred != dentry->pred\n && tmpcentry.pred != (((char *) itup) +\nsizeof(IndexTupleData)))\n pfree(tmpcentry.pred);\n \n! return (PageAddItem(page, (Item) *newtup,\nIndexTupleSize(*newtup),\n! offsetNumber, flags));\n }\n \n\n-----------------------------------------------------------------------\n\n\n\nOn Wed, 13 Dec 2000, Oleg Bartunov wrote:\n\n> Date: Wed, 13 Dec 2000 18:48:40 +0300 (GMT)\n> From: Oleg Bartunov <[email protected]>\n> To: [email protected]\n> Cc: Tom Lane <[email protected]>, [email protected],\n> 'pgsql-hackers ' <[email protected]>\n> Subject: [HACKERS] index support for arrays (GiST)\n> \n> Hi,\n> \n> we are getting a bit close to add index support for int arrays using\n> GiST interface. This will really drive up performance of our full text\n> search fully based on postgresql. We have a problem with broken index\n> and couldn't find a reason. I attached archive with sources\n> for GiST functions and test suite to show a problem - vacuum analyze\n> at the end of TESTSQL should complain about broken index.\n> Here is a short description:\n> 1. untar in contrib 7.0.*\n> 2. cd _intarray\n> 3. edit Makefile for TESTDB (name of db for test)\n> 4. createdb TESTDB\n> 5. gmake\n> 6. gmake install\n> 7. 
psql TESTDB < TESTSQL\n> \n> \tRegards,\n> \n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\n", "msg_date": "Fri, 15 Dec 2000 15:08:26 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index support for arrays (GiST)" }, { "msg_contents": "On Fri, 15 Dec 2000, Thomas Lockhart wrote:\n\n> Date: Fri, 15 Dec 2000 15:47:01 +0000\n> From: Thomas Lockhart <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], Tom Lane <[email protected]>,\n> [email protected],\n> 'pgsql-hackers ' <[email protected]>\n> Subject: Re: [HACKERS] index support for arrays (GiST)\n> \n> > It looks like other people just didn't use the index decompression function\n> > (at least in Gene's code the decompression function just does return) and\n> > that's why this bug was not discovered. We could make a patch for\n> > upcoming 7.1 if hackers desire. I consider this patch as a bugfix\n> > not a new feature or improvement. We got very promising results.\n> \n> Yes, send patches! Thanks to you and Gene for getting GiST back into\n> view; it seems like a great feature which was neglected for too long.\n> \n\nWe found one more bug with handling NULL values, so continue digging :-)\n\n\tOleg\n\n> - Thomas\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 15 Dec 2000 18:45:54 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index support for arrays (GiST)" }, { "msg_contents": "> It looks like other people just didn't use the index decompression function\n> (at least in Gene's code the decompression function just does return) and\n> that's why this bug was not discovered. We could make a patch for\n> upcoming 7.1 if hackers desire. I consider this patch as a bugfix\n> not a new feature or improvement. We got very promising results.\n\nYes, send patches! Thanks to you and Gene for getting GiST back into\nview; it seems like a great feature which was neglected for too long.\n\n - Thomas\n", "msg_date": "Fri, 15 Dec 2000 15:47:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index support for arrays (GiST)" } ]
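Pulling the fix from this thread together: with the estimator functions attached at operator-creation time, the seg declarations from Selkov's scripts would read roughly as below. This is only a sketch assembled from Tom Lane's advice above; contsel and contjoinsel are the stock containment-selectivity estimators he points to in utils/adt/geo_selfuncs.c.

CREATE OPERATOR @ (
    LEFTARG = seg, RIGHTARG = seg, PROCEDURE = seg_contains,
    COMMUTATOR = '~',
    RESTRICT = contsel, JOIN = contjoinsel
);

CREATE OPERATOR ~ (
    LEFTARG = seg, RIGHTARG = seg, PROCEDURE = seg_contained,
    COMMUTATOR = '@',
    RESTRICT = contsel, JOIN = contjoinsel
);

As Oleg's follow-up EXPLAIN above shows, this drops the estimated row count from 5000 to 100 -- still a bogus estimate, but small enough for the planner to prefer the index scan.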
[ { "msg_contents": "It seems that your code is exactly what I want.\n\nI have already created geographical objects which contains MBR(Minimum\nBounding Rectangle) in their structure, so it is a question of rewriting\nyour code to change the access to the cube structure to the MBR structure\ninside my geoobject. (cf http://fmaps.sourceforge.net/) Look in the CVS for\nlatest. I have been slack lately on the project, but I'm not forgetting it.\n\nQuickly I ran through the code, and I think your cube is strictly speaking a\nbox, which also a MBR.\n\nHowever I didn't see the case of intersection, which is the main question\nwhen you want to display object that are visible inside a box.\n\nI suppose your code is under GPL, and you have no problem for me to use it,\nproviding I put your name and credits somewhere.\n\nCheers.\n\nFranck Martin\nDatabase Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: [email protected]\nWeb site: http://www.sopac.org/\n\nThis e-mail is intended for its recipients only. Do not forward this e-mail\nwithout approval. The views expressed in this e-mail may not be necessarily\nthe views of SOPAC.\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]\nSent: Saturday, 25 November 2000 8:56 \nTo: Franck Martin\nSubject: Re: [HACKERS] Indexing for geographic objects? \n\nIt is probably possible to hook up an extension directly with the\nR-tree methods available in postgres -- if you stare at the code long\nenough and figure how to use the correct strategies. I chose an easier\npath years ago and I am still satisfied with the results. Check out\nthe GiST -- a general access method built on top of R-tree to provide\na user-friendly interface to it and to allow indexing of more abstract\ntypes, for which straight R-tree is not directly applicable.\n\nI have a small set of complete data types, of which a couple\nillustrate the use of GiST indexing with the geometrical objects, in:\n\nhttp://wit.mcs.anl.gov/~selkovjr/pg_extensions/\n\nIf you are using a pre-7.0 postrgres, grab the file contrib.tgz,\notherwise take contrib-7.0.tgz. The difference is insignificant, but\nthe pre-7.0 version will not fit the current schema. Unpack the source\ninto postgresql-*/contrib and follow instructions in the README\nfiles. The types of interest for you will be seg and cube. You will\nfind pointers to the original sources and docs in the CREDITS section\nof the README file. I also have a version of the original example code\nin pggist-patched.tgz, but I did not check if it works with current\npostgres. It should not be difficult to fix it if it doesn't -- the\nrecent development in the optimizer area made certain things\nunnecessary.\n\nYou might want to check out a working example of the segment data type at:\n\nhttp://wit.mcs.anl.gov/EMP/indexing.html\n\n(search the page for 'KM')\n\nI will be glad to help, but I would also recommend to send more\nsophisticated questions to Joe Hellerstein, the leader of the original\npostgres team that developed GiST. He was very helpful whenever I\nturned to him during the early stages of my data type project.\n\n--Gene\n", "msg_date": "Mon, 27 Nov 2000 11:06:21 +1200", "msg_from": "Franck Martin <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Indexing for geographic objects? 
" }, { "msg_contents": "Franck Martin wrote:\n> \n> It seems that your code is exactly what I want.\n> \n> I have already created geographical objects which contains MBR(Minimum\n> Bounding Rectangle) in their structure, so it is a question of rewriting\n> your code to change the access to the cube structure to the MBR structure\n> inside my geoobject. (cf http://fmaps.sourceforge.net/) Look in the CVS for\n> latest. I have been slack lately on the project, but I'm not forgetting it.\n> \n> Quickly I ran through the code, and I think your cube is strictly speaking a\n> box, which also a MBR.\n> \n> However I didn't see the case of intersection, which is the main question\n> when you want to display object that are visible inside a box.\n> \n> I suppose your code is under GPL, and you have no problem for me to use it,\n> providing I put your name and credits somewhere.\n\nIt would be much better if it were under the standard PostgreSQL license\nand \nif it is included in the standard distribution. \n\nAs Tom said, working Gist would be a great feature. \n\nNow if only someone would write the regression tests ;)\n\nBTW, the regression tests for pl/pgsql seem to be somewhat sparse as\nwell, \nmissing at least some types of loops, possibly more.\n\n> Franck Martin\n> Database Development Officer\n> SOPAC South Pacific Applied Geoscience Commission\n> Fiji\n> E-mail: [email protected]\n> Web site: http://www.sopac.org/\n> \n> This e-mail is intended for its recipients only. Do not forward this e-mail\n> without approval. The views expressed in this e-mail may not be necessarily\n> the views of SOPAC.\n> \n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n> Sent: Saturday, 25 November 2000 8:56\n> To: Franck Martin\n> Subject: Re: [HACKERS] Indexing for geographic objects?\n> \n> It is probably possible to hook up an extension directly with the\n> R-tree methods available in postgres -- if you stare at the code long\n> enough and figure how to use the correct strategies. I chose an easier\n> path years ago and I am still satisfied with the results. Check out\n> the GiST -- a general access method built on top of R-tree to provide\n> a user-friendly interface to it and to allow indexing of more abstract\n> types, for which straight R-tree is not directly applicable.\n> \n> I have a small set of complete data types, of which a couple\n> illustrate the use of GiST indexing with the geometrical objects, in:\n> \n> http://wit.mcs.anl.gov/~selkovjr/pg_extensions/\n> \n> If you are using a pre-7.0 postrgres, grab the file contrib.tgz,\n> otherwise take contrib-7.0.tgz. The difference is insignificant, but\n> the pre-7.0 version will not fit the current schema. Unpack the source\n> into postgresql-*/contrib and follow instructions in the README\n> files. The types of interest for you will be seg and cube. You will\n> find pointers to the original sources and docs in the CREDITS section\n> of the README file. I also have a version of the original example code\n> in pggist-patched.tgz, but I did not check if it works with current\n> postgres. 
It should not be difficult to fix it if it doesn't -- the\n> recent development in the optimizer area made certain things\n> unnecessary.\n> \n> You might want to check out a working example of the segment data type at:\n> \n> http://wit.mcs.anl.gov/EMP/indexing.html\n> \n> (search the page for 'KM')\n> \n> I will be glad to help, but I would also recommend to send more\n> sophisticated questions to Joe Hellerstein, the leader of the original\n> postgres team that developed GiST. He was very helpful whenever I\n> turned to him during the early stages of my data type project.\n> \n> --Gene\n", "msg_date": "Mon, 27 Nov 2000 11:21:57 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indexing for geographic objects?" }, { "msg_contents": "Franck Martin wrote:\n> I have already created geographical objects which contains MBR(Minimum\n> Bounding Rectangle) in their structure, so it is a question of rewriting\n> your code to change the access to the cube structure to the MBR structure\n> inside my geoobject. (cf http://fmaps.sourceforge.net/) Look in the CVS for\n> latest. I have been slack lately on the project, but I'm not forgetting it.\n\nI see where you are aiming. I definitely want to be around when it\nstarts working.\n\n> Quickly I ran through the code, and I think your cube is strictly speaking a\n> box, which also a MBR.\n\nYes, cube is definitely a misnomer -- it suggests things are\nequihedral, which they aren't. I am still looking for a short name or\nan acronym that would indicate it is a box with an arbitrary number of\ndimensions. With your application, you will surely benefit from a\nsmaller and faster code geared specifically for 3D.\n\n> However I didn't see the case of intersection, which is the main question\n> when you want to display object that are visible inside a box.\n\nThe procedure is there, it is called cube_inter, but there is no\noperator for it.\n \n> I suppose your code is under GPL, and you have no problem for me to use it,\n> providing I put your name and credits somewhere.\n\nNo problem at all -- I will be honored if you use it. Was I careless\nenough not to include a license? It's not exactly a GPL -- it's\ncompletely unrestricted. I should have said that somewhere.\n\nGood luck,\n\n--Gene\n", "msg_date": "Mon, 27 Nov 2000 18:03:33 -0600", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "On Mon, Nov 27, 2000 at 06:03:33PM -0600, [email protected] wrote:\n> Franck Martin wrote:\n> > I suppose your code is under GPL, and you have no problem for me to\n> > use it, providing I put your name and credits somewhere.\n> \n> No problem at all -- I will be honored if you use it. Was I careless\n> enough not to include a license? It's not exactly a GPL -- it's\n> completely unrestricted. I should have said that somewhere.\n\nNote that (AIUI) placing code in the public domain leaves you liable \nfor damages from somebody misusing it. You have to retain copyright \njust to be able to disclaim liability, in the license -- but then you \nneed to actually have a license. That's why you don't see much public \ndomain software. (I am not a lawyer.)\n\nNathan Myers\[email protected]\n", "msg_date": "Mon, 27 Nov 2000 17:56:37 -0800", "msg_from": "Nathan Myers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing for geographic objects?" 
}, { "msg_contents": "It seems that R-trees become inefficient when the number of dimensions\nincrease. Has anyone thoght of a transparent way to use Peano codes (hhcode\nin Oracle lingo), and use B-tree indexes instead?\n\nAlso, I've read that R-trees sometimes suffer a lot when an update overflows\na node in the index.\n\nThe only initial problem I see with Peano codes is that the index is made on\nreal cubes (all dimensions are equal, due to the recursive decomposition of\nspace). To overcome that, people have talked about using\nmultiple-entry-indexes. That is, an object is decomposed in a number of\ncubes (not too many), which are then indexed.\n\nIn this case, there should be a way to make intersection searches be\ntransparent. Oracle does it using tables and merge-joins. I have thought of\nusing GiST to do that, but it seemed too advanced for me yet.\n\nSo I thought of using the Oracle technique (make tables and use joins).\nProblem: I would need a C function to make the cubes describing an spatial\nobject, but currently C functions cannot return more than one value (have of\nthoght of returning an array, but have not tried it). And making inserts\ndirectly from a C function has been described as magic stuff in the\ndocumentation.\n\nYours sincerely,\n\nEdmar Wiggers\nBRASMAP Information Systems\n+55 48 9960 2752\n\n", "msg_date": "Tue, 5 Dec 2000 11:31:29 -0200", "msg_from": "\"Edmar Wiggers\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Indexing for geographic objects? " }, { "msg_contents": "Edmar Wiggers wrote:\n> \n> It seems that R-trees become inefficient when the number of dimensions\n> increase. Has anyone thoght of a transparent way to use Peano codes (hhcode\n> in Oracle lingo), and use B-tree indexes instead?\n> \n\nDo you have a reference, or more information on what a Peano code is?\n\nBernie\n", "msg_date": "Wed, 06 Dec 2000 02:43:05 -0500", "msg_from": "Bernard Frankpitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] RE: Indexing for geographic objects?" }, { "msg_contents": "> Do you have a reference, or more information on what a Peano code is?\n\nCheck this out http://www.statkart.no/nlhdb/iveher/hhtext.htm\n\nShortly, this technique relies on a space filling curve. That is, a\nuni-dimensional curve that, on a given plane, covers every single point, and\nonly covers it once. Since the curve is 1-dimensional, one can use B-tree\nindexes on it.\n\nThere a number of curves of this type, e.g. Hilbert's and Peano's. The Peano\ncurve yelds easier calculations, hence is the one Oracle used to make their\nSpatial Data Option.\n\nMoreover, the Peano curve describes a point in an helical kind of way,\nrecursively dividing space. That's why the Norwegian Hydrographic Service\ndecided to call it \"Helical Hyperspatial Codes\" (hhcodes). It was from their\nresearch that Oracle Spatial Data Option was born, back in 1995.\n\nI'm not sure about the exact applicability of hhcodes to index multimedia\nstuff yet (images, sound), because those are VERY high-dimensional spaces.\nBut I've done quite some reading/research, and hhcodes have two very nice\nadvantages over R-trees:\n\n- it is easy (and not costly in performance), to index things in 3D or 4D\n(including time too);\n- concurrency is much better, because one does not suffer from costly R-tree\nupdates (B-trees are much better in that). When dealing with 3D or 4D, this\nbecomes even more important.\n\nBy the way, are you Brazilian Bernard? 
Oddly enough, maybe we live in the\nvery same city. Florianopolis, SC, Brazil. It's a small world, eh? :))\n\n", "msg_date": "6 Dec 2000 12:58:39 +0100", "msg_from": "[email protected] (\"Edmar Wiggers\")", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Indexing for geographic objects?" } ]
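Gene notes in this thread that the intersection procedure already exists in the cube package as cube_inter, but that no operator is wired to it. A minimal sketch of what that wiring could look like -- the operator name # is an arbitrary choice for illustration, not something the package defines:

CREATE OPERATOR # (
    LEFTARG = cube, RIGHTARG = cube, PROCEDURE = cube_inter
);

For Franck's "visible inside a box" case, an overlap-style predicate (one returning a boolean rather than the intersection cube) is what an index scan would actually want; which operators the GiST opclasses support is spelled out in the package's README files mentioned earlier.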
[ { "msg_contents": "In terms of making constraints more declarative, I think it would be neat to\nadd an extra field to pg_relcheck that is a foreign key (or null) into the\npg_attribute table.\n\nThat way, it would be possible to recover column CHECK constraints. If the\nfield is NULL, you'd assume it was a table CHECK constraint...\n\nTheoretically, it would mean that less information is 'lost' after the table\ncreation.\n\nChris\n\n", "msg_date": "Mon, 27 Nov 2000 11:54:38 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_relcheck" } ]
[ { "msg_contents": "I'm wondering about the intent of this snippet in xlog.c:\n\n fd = BasicOpenFile(tpath, O_RDWR | O_CREAT | O_EXCL | PG_BINARY, S_IRUSR | S_IWUSR);\n if (fd < 0)\n elog(STOP, \"InitCreate(logfile %u seg %u) failed: %m\",\n logId, logSeg);\n\n if (lseek(fd, XLogSegSize - 1, SEEK_SET) != (off_t) (XLogSegSize - 1))\n elog(STOP, \"lseek(logfile %u seg %u) failed: %m\",\n logId, logSeg);\n\n if (write(fd, \"\", 1) != 1)\n elog(STOP, \"write(logfile %u seg %u) failed: %m\",\n logId, logSeg);\n\n if (fsync(fd) != 0)\n elog(STOP, \"fsync(logfile %u seg %u) failed: %m\",\n logId, logSeg);\n\n if (lseek(fd, 0, SEEK_SET) < 0)\n elog(STOP, \"lseek(logfile %u seg %u off %u) failed: %m\",\n log, seg, 0);\n\n close(fd);\n\nIf the idea here is to force XLogSegSize bytes of disk space to be\nallocated, it's a loser. Most Unix file systems that I know about\nwill treat the file as containing a \"hole\", and only allocate the\nsingle block in which data has actually been written. The fact\nthat 'ls' shows the file as 16MB is a user-interface artifact of ls;\ndu will tell you the grim truth:\n\n$ initdb\n...\n$ ls -l data/pg_xlog\ntotal 328\n-rw------- 1 postgres users 16777216 Nov 27 00:44 0000000000000000\n$ du data/pg_xlog/0000000000000000\n328 data/pg_xlog/0000000000000000\n\nI don't know whether you consider it important to force the logfile\nto be fully allocated before you start using it; but if you do,\nthe above code will not get the job done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Nov 2000 00:48:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Is XLogFileInit() hoping to force space allocation?" } ]
[ { "msg_contents": "\n> > At 17:16 19/11/00 -0500, Tom Lane wrote:\n> >> \n> http://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/utils/fmgr/README\n> \n\nYes, this is now an imho very much improved way to go :-)\n\nI would probably have used the \"parameter style\" extension to create function,\nbut that is imho not an important issue.\n\nThanks especially for the \"standard C function\" handler.\n\nAndreas\n", "msg_date": "Mon, 27 Nov 2000 12:51:08 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Final proposal for resolving C-vs-newC issue " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > > At 17:16 19/11/00 -0500, Tom Lane wrote:\n> > >> \n> > http://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/utils/fmgr/README\n> > \n> \n> Yes, this is now an imho very much improved way to go :-)\n> \n> I would probably have used the \"parameter style\" extension to create function,\n> but that is imho not an important issue.\n> \n> Thanks especially for the \"standard C function\" handler.\n\nThe other nice thing about this is that we can make make the new style\nstandard at some point, so the macros are not even required.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Nov 2000 11:15:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: Final proposal for resolving C-vs-newC issue" } ]
[ { "msg_contents": "To be honest, Tom, I've always seen GiST not just as a great feature, but as\nan essential feature. Using Stonebraker's definition of an\nobject-relational database (which I tend to do, as it's the only one that\nI've read about in depth), we really need to be able to properly index\ncomplex data, and using GiST, we can. Besides, it's just plain useful ;-)\n\nMikeA\n\n\n-----Original Message-----\nFrom: Tom Lane\nTo: Michael Ansley\nCc: 'Franck Martin '; 'pgsql-general '; 'pgsql-hackers ';\n'[email protected]'\nSent: 11-27-00 3:32 AM\nSubject: Re: [HACKERS] Indexing for geographic objects? \n\nMichael Ansley <[email protected]> writes:\n> Remember also that the GiST library has been integrated into PG, (my\nbrother\n> is doing some thesis workon that at the moment),\n\nYeah? Does it still work?\n\nSince the GIST code is not tested by any standard regress test, and is\nso poorly documented that hardly anyone can be using it, I've always\nassumed that it is probably suffering from a severe case of bit-rot.\n\nI'd love to see someone contribute documentation and regression test\ncases for it --- it's a great feature, if it works.\n\n\t\t\tregards, tom lane\n\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you have received this email in error please notify\nthe system manager.\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses.\n\nwww.mimesweeper.com\n**********************************************************************\n\n\n\n\n\nRE: [HACKERS] Indexing for geographic objects? \n\n\nTo be honest, Tom, I've always seen GiST not just as a great feature, but as an essential feature.  Using Stonebraker's definition of an object-relational database (which I tend to do, as it's the only one that I've read about in depth), we really need to be able to properly index complex data, and using GiST, we can.  Besides, it's just plain useful ;-)\nMikeA\n\n\n-----Original Message-----\nFrom: Tom Lane\nTo: Michael Ansley\nCc: 'Franck Martin '; 'pgsql-general '; 'pgsql-hackers '; '[email protected]'\nSent: 11-27-00 3:32 AM\nSubject: Re: [HACKERS] Indexing for geographic objects? \n\nMichael Ansley <[email protected]> writes:\n> Remember also that the GiST library has been integrated into PG, (my\nbrother\n> is doing some thesis workon that at the moment),\n\nYeah?  Does it still work?\n\nSince the GIST code is not tested by any standard regress test, and is\nso poorly documented that hardly anyone can be using it, I've always\nassumed that it is probably suffering from a severe case of bit-rot.\n\nI'd love to see someone contribute documentation and regression test\ncases for it --- it's a great feature, if it works.\n\n                        regards, tom lane\n\n\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. 
If you have received this email in error please notify\nthe system manager.\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses.\n\nwww.mimesweeper.com\n**********************************************************************", "msg_date": "Mon, 27 Nov 2000 12:47:25 -0000", "msg_from": "Michael Ansley <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Indexing for geographic objects? " } ]
[ { "msg_contents": "Hi everybody,\n\n\nthere must be a nice way of getting the size of my database (in mB,\npreferably), but I couldn't find it in the documentation that I searched\nthrough briefly.\n\nThe reason why I wanna do this is because the server might get full quickly\nand to make sure it doensn't happen before I know I'm writing a script that\nsends me the size of this database per mail.\n\nCan anyone direct me to an answer to this problem? \n\nI would be most thankful,\n\n\nGus\n", "msg_date": "Mon, 27 Nov 2000 16:47:06 +0100", "msg_from": "Guus Kerpel <[email protected]>", "msg_from_op": true, "msg_subject": "Size of my data base?" }, { "msg_contents": "If you installed in the default directory then the files relating to a\ndatabase are in\n\n/usr/local/pgsql/data/base/<databasename>\n\nSo you could just total up the size of everything under that directory.\n\n-Mitch\n----- Original Message -----\nFrom: \"Guus Kerpel\" <[email protected]>\nTo: <[email protected]>\nSent: Monday, November 27, 2000 7:47 AM\nSubject: [HACKERS] Size of my data base?\n\n\n> Hi everybody,\n>\n>\n> there must be a nice way of getting the size of my database (in mB,\n> preferably), but I couldn't find it in the documentation that I searched\n> through briefly.\n>\n> The reason why I wanna do this is because the server might get full\nquickly\n> and to make sure it doensn't happen before I know I'm writing a script\nthat\n> sends me the size of this database per mail.\n>\n> Can anyone direct me to an answer to this problem?\n>\n> I would be most thankful,\n>\n>\n> Gus\n>\n\n", "msg_date": "Thu, 30 Nov 2000 11:17:28 -0800", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Size of my data base?" }, { "msg_contents": "I was wondering if someone could tell me if I have gotten the fields of\ntgargs correct:\n\n<unnamed>\\000 -- Constraint name?\n\nforeign_table_multi\\000 -- table with foreign key(s)\n\nprimary_table_multi\\000 -- table with primary key(s)\n\nUNSPECIFIED\\000 -- ??\n\nforeign_int_1\\000 -- 1st field in foreign key\n\nprimary_int_1\\000 -- 1st field in referenced primary key\n\nforeign_int_2\\000 -- 1st field in foreign key\n\nprimary_int_2\\000 -- 1st field in referenced primary key\n\nThanks\n\nMichael Fork - CCNA - MCP - A+\nNetwork Support - Toledo Internet Access - Toledo Ohio\n\n\n", "msg_date": "Thu, 30 Nov 2000 17:54:27 -0500 (EST)", "msg_from": "Michael Fork <[email protected]>", "msg_from_op": false, "msg_subject": "pg_trigger and tgargs" }, { "msg_contents": "\nOn Thu, 30 Nov 2000, Michael Fork wrote:\n\n> I was wondering if someone could tell me if I have gotten the fields of\n> tgargs correct:\nFor foreign key constraints, yes. Other triggers can use tgargs for\nwhatever they want.\n \n> <unnamed>\\000 -- Constraint name?\nYes.\n\n> foreign_table_multi\\000 -- table with foreign key(s)\n> primary_table_multi\\000 -- table with primary key(s)\nYep.\n \n> UNSPECIFIED\\000 -- ??\nWhat match type was specified (or unspecified if none was specified).\n\n> foreign_int_1\\000 -- 1st field in foreign key\n> \n> primary_int_1\\000 -- 1st field in referenced primary key\n> \n> foreign_int_2\\000 -- 1st field in foreign key \n> primary_int_2\\000 -- 1st field in referenced primary key\n2nd on the latter two, but yes in general\n\n", "msg_date": "Thu, 30 Nov 2000 15:39:04 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trigger and tgargs" } ]
[ { "msg_contents": "Sorry if I'm posting to the wrong list, but I don't know which list is \nappropriate for this question.\n\nI've a question concerning compatibilty Postgres <-> Oracle. In Oracle, empty \nstrings and null are basicly the same, but it does not seem to be under \nPostgres, making migration a pain.\n\nExample:\nORACLE:\nselect id \n from anytable\nwhere field='';\n\nPOSTGRES:\nselect id\n from anytable\nwhere field='' or field is null;\n\nOr another example: The oracle query\nupdate anytable set adatefiled=''\nfails in Postgres, I've to write\nupdate anytable set adatefield=null;\n\nThis gets really bad when the actual data is coming from a webinterface, I've \nto handle 2 different queries for the case empty string and non-empty string.\n\nIs there a better way to achieve this?\n\nThanks!\n\nBest regards,\n\tMario Weilguni\n\n", "msg_date": "Mon, 27 Nov 2000 18:09:52 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": true, "msg_subject": "Question about Oracle compatibility" }, { "msg_contents": "On Mon, 27 Nov 2000, Mario Weilguni wrote:\n\n> Sorry if I'm posting to the wrong list, but I don't know which list is \n> appropriate for this question.\n> \n> I've a question concerning compatibilty Postgres <-> Oracle. In Oracle, \n> empty strings and null are basicly the same, but it does not seem to \n> be under Postgres, making migration a pain.\n> \n\nActually, they aren't the same at all under Oracle or under Postgres.\n\nA null represents a lack of data, whereas an empty string is represents\ndata of zero length and zero content. Null is a state and not a value.\n\nWhat you are probably seeing is a difference in table layout that sets\na default value of '' for the particular column you're touching. You can \nhave postgres do the same by specifying DEFAULT '' when you create your\ntable (or you could ALTER it in..).\n\nNull values are actually quite important because they tell you when you \ndon't have data. An empty tring means something is there, whereas a null\nin the same place means complete absense of all data.\n\nHope this helps.\n\nThanks\n\nAlex\n\n> Example:\n> ORACLE:\n> select id \n> from anytable\n> where field='';\n> \n> POSTGRES:\n> select id\n> from anytable\n> where field='' or field is null;\n> \n> Or another example: The oracle query\n> update anytable set adatefiled=''\n> fails in Postgres, I've to write\n> update anytable set adatefield=null;\n\nThat seems really weird.\n\n> \n> This gets really bad when the actual data is coming from a webinterface, I've \n> to handle 2 different queries for the case empty string and non-empty string.\n> \n> Is there a better way to achieve this?\n> \n> Thanks!\n> \n> Best regards,\n> \tMario Weilguni\n> \n> \n\n-- \n Alex G. Perel -=- AP5081\[email protected] -=- [email protected]\n play -=- work \n\t \nDisturbed Networks - Powered exclusively by FreeBSD\n== The Power to Serve -=- http://www.freebsd.org/ \n\n", "msg_date": "Mon, 27 Nov 2000 12:39:32 -0500 (EST)", "msg_from": "Alex Perel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about Oracle compatibility" }, { "msg_contents": "Mario Weilguni <[email protected]> writes:\n> In Oracle, empty strings and null are basicly the same,\n\nAre you sure about that? 
It'd be a pretty major failure to comply with\nSQL standard semantics, if so.\n\nSQL92 3.1 (Definitions):\n\n null value (null): A special value, or mark, that is used to\n indicate the absence of any data value.\n\nSQL92 4.1 (Data types)\n\n A null value is an implementation-dependent special value that\n is distinct from all non-null values of the associated data type.\n There is effectively only one null value and that value is a member\n of every SQL data type. There is no <literal> for a null value,\n although the keyword NULL is used in some places to indicate that a\n null value is desired.\n\nThere is no room there for equating NULL with an empty string. I also\nread the last-quoted sentence to specifically forbid treating the\nliteral '' as NULL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Nov 2000 12:44:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about Oracle compatibility " }, { "msg_contents": "At 06:09 PM 11/27/00 +0100, Mario Weilguni wrote:\n>Sorry if I'm posting to the wrong list, but I don't know which list is \n>appropriate for this question.\n>\n>I've a question concerning compatibility Postgres <-> Oracle. In Oracle,\n>empty strings and null are basically the same, but it does not seem to be\n>under Postgres, making migration a pain.\n\nGo complain to Oracle - their behavior is NON-STANDARD. PG is doing it right.\nAn empty string isn't the same as NULL any more than 0 is the same as NULL for\nthe integer type. Adopting the Oracle-ism would break PG's SQL92-compliance\nin this area.\n\n>This gets really bad when the actual data is coming from a web interface:\n>I have to handle two different queries, one for an empty string and one\n>for a non-empty string.\n>\n>Is there a better way to achieve this?\n\nYou could rewrite your logic to use the empty string rather than NULL, that's\none idea. In the OpenACS project, we ported nearly 10,000 lines of datamodel\nplus thousands of queries from Oracle to Postgres and wrote a little utility\nroutine that turned a string returned from a form into either NULL or 'the\nstring' depending on its length. The select queries in the Oracle version\nwere properly written using \"IS NULL\" so they worked fine. It sounds like\nyou've got a little more work to do if the Oracle queries aren't written as\n\"is null or ...\"\n\nThis is a very nasty misfeature of Oracle, though, because porting from SQL92\nto Oracle can be very difficult if the SQL92-compliant code depends on the\nempty string being different than NULL. Going to SQL92 from Oracle is easier\nand you can write the Oracle queries and inserts in an SQL92-compliant manner.\n\nBenefits of doing so are that your stuff will be easier to port to InterBase,\netc., as well as Postgres.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 27 Nov 2000 10:38:24 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about Oracle compatibility" }, { "msg_contents": "At 12:39 PM 11/27/00 -0500, Alex Perel wrote:\n>On Mon, 27 Nov 2000, Mario Weilguni wrote:\n>\n>> Sorry if I'm posting to the wrong list, but I don't know which list is \n>> appropriate for this question.\n>> \n>> I've a question concerning compatibility Postgres <-> Oracle. 
In Oracle, \n>> empty strings and null are basically the same, but it does not seem to \n>> be under Postgres, making migration a pain.\n>> \n>\n>Actually, they aren't the same at all under Oracle or under Postgres.\n>\n>A null represents a lack of data, whereas an empty string represents\n>data of zero length and zero content. Null is a state and not a value.\n\nUnfortunately Mario's entirely correct (I use Oracle...)\n\ninsert into foo (some_string) values ('');\n\nwill insert a NULL, not an empty string, into the column some_string.\n\n>What you are probably seeing is a difference in table layout that sets\n>a default value of '' for the particular column you're touching. You can \n>have postgres do the same by specifying DEFAULT '' when you create your\n>table (or you could ALTER it in..).\n\nUsing \"DEFAULT ''\" might help some, but he specifically mentioned inserting\nform data from a web page, and in this case he'll have to check the string\nand explicitly insert NULL (or write a trigger for each table that does\nthe check and the resulting massage of the value) or rewrite his queries\nto treat empty string as being the same as NULL explicitly.\n\n>Null values are actually quite important because they tell you when you \n>don't have data. An empty string means something is there, whereas a null\n>in the same place means complete absence of all data.\n\nAbsolutely right, and Oracle's misimplementation truly sucks.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 27 Nov 2000 10:42:01 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about Oracle compatibility" }, { "msg_contents": "At 12:44 PM 11/27/00 -0500, Tom Lane wrote:\n>Mario Weilguni <[email protected]> writes:\n>> In Oracle, empty strings and null are basically the same,\n>\n>Are you sure about that? It'd be a pretty major failure to comply with\n>SQL standard semantics, if so.\n\nThought you'd get a kick out of this:\n\nConnected to:\nOracle8i Enterprise Edition Release 8.1.6.1.0 - Production\nWith the Partitioning option\nJServer Release 8.1.6.0.0 - Production\n\nSQL> create table fubar(some_string varchar(1000));\n\nTable created.\n\nSQL> insert into fubar values('');\n\n1 row created.\n\nSQL> select count(*) from fubar where some_string is null;\n\n COUNT(*)\n----------\n 1\n\nSQL> \n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 27 Nov 2000 10:47:27 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about Oracle compatibility " }, { "msg_contents": "On Mon, 27 Nov 2000, Don Baccus wrote:\n\n> >Actually, they aren't the same at all under Oracle or under Postgres.\n> >\n> >A null represents a lack of data, whereas an empty string represents\n> >data of zero length and zero content. Null is a state and not a value.\n> \n> Unfortunately Mario's entirely correct (I use Oracle...)\n> \n> insert into foo (some_string) values ('');\n> \n> will insert a NULL, not an empty string, into the column some_string.\n\nI stand corrupted. I didn't remember this behavior. :/\n \n> >What you are probably seeing is a difference in table layout that sets\n> >a default value of '' for the particular column you're touching. 
You can \n> >have postgres do the same by specifying DEFAULT '' when you create your\n> >table (or you could ALTER it in..).\n> \n> Using \"DEFAULT ''\" might help some, but he specifically mentioned inserting\n> form data from a web page, and in this case he'll have to check the string\n> and explicitly insert NULL (or write a trigger for each table that does\n> the check and the resulting massage of the value) or rewrite his queries\n> to treat empty string as being the same as NULL explicitly.\n\nMight be easiest to feed the data through a simple stored proc. Doesn't take\nlong at all to whip something together for the purpose..\n \n\n-- \n Alex G. Perel -=- AP5081\[email protected] -=- [email protected]\n play -=- work \n\t \nDisturbed Networks - Powered exclusively by FreeBSD\n== The Power to Serve -=- http://www.freebsd.org/ \n\n", "msg_date": "Mon, 27 Nov 2000 14:00:33 -0500 (EST)", "msg_from": "Alex Perel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about Oracle compatibility" }, { "msg_contents": "Mario Weilguni writes:\n > This gets really bad when the actual data is coming from a web\n > interface: I have to handle two different queries, one for an empty\n > string and one for a non-empty string.\n\nIn their documentation both Oracle 7 and 8 state:\n\n Oracle currently treats a character value with a length of zero\n as null. However, this may not continue to be true in future\n releases, and Oracle recommends that you do not treat empty\n strings the same as NULLs.\n\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWestern Geophysical -./\\.- by myself and does not represent\[email protected] -./\\.- the opinion of Baker Hughes or\nhttp://www.crosswinds.net/~petef -./\\.- its divisions.\n", "msg_date": "Tue, 28 Nov 2000 09:59:38 +0000 (GMT)", "msg_from": "Pete Forman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about Oracle compatibility" }, { "msg_contents": "At 09:59 AM 11/28/00 +0000, Pete Forman wrote:\n>Mario Weilguni writes:\n> > This gets really bad when the actual data is coming from a web\n> > interface: I have to handle two different queries, one for an empty\n> > string and one for a non-empty string.\n>\n>In their documentation both Oracle 7 and 8 state:\n>\n> Oracle currently treats a character value with a length of zero\n> as null. However, this may not continue to be true in future\n> releases, and Oracle recommends that you do not treat empty\n> strings the same as NULLs.\n\nYeah, but this is harder than it sounds! NULL and '' are indistinguishable\nin queries, so how do you treat them differently? Has to be in the \napplication code, I guess.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 28 Nov 2000 06:44:21 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about Oracle compatibility" } ]
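A sketch of one way to keep a single query for both the empty and non-empty cases, assuming the SQL92 NULLIF shorthand (which PostgreSQL implements on top of CASE) is available in your version; the literal shown stands in for whatever the web form supplied:

    -- NULLIF(x, '') yields NULL when x is the empty string and x
    -- otherwise, so the same statement covers both form inputs.
    UPDATE anytable SET adatefield = NULLIF('value from the web form', '');

Under Oracle's behavior the NULLIF is a harmless no-op, so the statement stays portable in both directions.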
[ { "msg_contents": "> > Thomas Lockhart would be the authority on this, but my impression is\n> > that tinterval is deprecated and will eventually go away in favor of\n> > the SQL-standard interval type. If you've found functions that exist\n> > for tinterval and not for interval, then that's an item for the TODO\n> > list --- please submit details.\n> Perhaps I'm not picking up things from the documentation, but it appears to\n> me that \"interval\" is only a time length, while \"tinterval\" is actually for\n> specific times. To use a geometric analogy: interval is a length, while\n> tinterval is a specific line segment.\n> So it seems to me that interval is just way to generic (or rather,\n> tinterval already supports things that I want to do, such as testing for\n> overlaps).\n\nTINTERVAL is a poorly supported, old and creaky data type. It is based\non ABSTIME, which is not as capable as TIMESTAMP.\n\n> Am I missing something in the documentation that would explain to me how I\n> could use a starttime/length combination (something like abstime/interval,\n> or timestamp/interval) to check for overlaps like can be done with tinterval?\n\nMaybe. The SQL9x function/operator OVERLAPS is recognized by PostgreSQL\n7.x, and probably does what you want.\n\nOf course, now that I'm testing it, something has broken with OVERLAPS\n(in 7.0.3 and current sources). I've defined a function overlaps() which\ntakes four arguments of timestamp type. The parser recognizes the\nOVERLAPS syntax, and converts that to a function call syntax. I've also\ndefined a few more functions in pg_proc.h in the \"SQL language\" to map\nvariations of arguments, say (timestamp,interval,timestamp,interval), to\nthe underlying single implementation. Pretty sure that I tested this\nexhaustively (?). That mapping now fails (hmm, remind me to add this to\nthe regression tests) with a parser error.\n\nTest cases would be:\n\n select ('today', 'tomorrow') OVERLAPS ('yesterday', 'now');\n\nand\n\n select ('today', interval '1 day') OVERLAPS ('yesterday', interval '18\nhours');\n\n(the second one fails). Now that I look, this breakage was introduced in\nMarch when \"we\" expunged operators allowed as identifiers (Tom Lane and\nI have blood on our hands on this one ;) See gram.y around line 5409.\nSuggestions?\n\n - Thomas\n", "msg_date": "Mon, 27 Nov 2000 18:41:54 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FWD: tinterval vs interval on pgsql-novice" }, { "msg_contents": "> (the second one fails). Now that I look, this breakage was introduced in\n> March when \"we\" expunged operators allowed as identifiers (Tom Lane and\n> I have blood on our hands on this one ;) See gram.y around line 5409.\n> Suggestions?\n\nAny problems with allowing OVERLAPS and BETWEEN as function names? bison\nseems happy with no shift/reduce conflicts...\n\n - Thomas\n", "msg_date": "Tue, 28 Nov 2000 08:06:49 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: FWD: tinterval vs interval on pgsql-novice" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> select ('today', interval '1 day') OVERLAPS ('yesterday', interval '18\n> hours');\n\n> (the second one fails). 
Now that I look, this breakage was introduced in\n> March when \"we\" expunged operators allowed as identifiers (Tom Lane and\n> I have blood on our hands on this one ;) See gram.y around line 5409.\n\nI see it does fail, but I'm at a complete loss to understand why,\nespecially given that the first case still works. The grammar looks\nperfectly fine AFAICT. Can you explain what's wrong here?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 09:49:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FWD: tinterval vs interval on pgsql-novice " }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> I see it does fail, but I'm at a complete loss to understand why,\n>> especially given that the first case still works. The grammar looks\n>> perfectly fine AFAICT. Can you explain what's wrong here?\n\n> Here is what I'm planning on doing (already tested, but not committed).\n> I'm adding some productions to the func_name rule in gram.y to handle\n> the various \"stringy operators\" such as LIKE and OVERLAPS. These tokens\n> will also be allowed in the ColLabel rule (as several are already).\n> This fixes the immediate problem, and makes LIKE handling more\n> consistent with other special functions. Comments?\n\nThat all sounds fine, but it doesn't seem to fix the problem I'm looking\nat, which is that the OVERLAPS production is broken in current sources:\n\ntemplate1=# select ('today', 'tomorrow') OVERLAPS ('yesterday', 'now');\n overlaps\n----------\n t\n(1 row)\n\ntemplate1=# select ('today', interval '1 day') OVERLAPS ('yesterday', interval\n'18 hours');\nERROR: parser: parse error at or near \"overlaps\"\n\nI don't understand why we're getting a parse error here ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 11:14:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FWD: tinterval vs interval on pgsql-novice " }, { "msg_contents": "> Thomas Lockhart <[email protected]> writes:\n> > select ('today', interval '1 day') OVERLAPS ('yesterday', interval '18\n> > hours');\n> > (the second one fails). Now that I look, this breakage was introduced in\n> > March when \"we\" expunged operators allowed as identifiers (Tom Lane and\n> > I have blood on our hands on this one ;) See gram.y around line 5409.\n> I see it does fail, but I'm at a complete loss to understand why,\n> especially given that the first case still works. The grammar looks\n> perfectly fine AFAICT. Can you explain what's wrong here?\n\nYes. There is one underlying routine implementing the OVERLAPS operator.\nAs you might expect, it is called overlaps() in the catalog, has an\nentry point of overlaps_timestamp(), and takes four arguments of type\ntimestamp. The other variants which accept an interval type for the\nsecond and/or fourth arguments are defined in pg_proc.h as SQL\nprocedures which simply add, say, the first and second arguments to end\nup with four timestamp arguments.\n\nThe SQL routine explicitly calls overlaps() as a function, which is\ncurrently disallowed.\n\nHere is what I'm planning on doing (already tested, but not committed).\nI'm adding some productions to the func_name rule in gram.y to handle\nthe various \"stringy operators\" such as LIKE and OVERLAPS. These tokens\nwill also be allowed in the ColLabel rule (as several are already).\n\nThis fixes the immediate problem, and makes LIKE handling more\nconsistent with other special functions. 
Comments?\n\n - Thomas\n", "msg_date": "Tue, 28 Nov 2000 16:23:10 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FWD: tinterval vs interval on pgsql-novice" }, { "msg_contents": "Tom Lane writes:\n\n> template1=# select ('today', interval '1 day') OVERLAPS ('yesterday', interval\n> '18 hours');\n> ERROR: parser: parse error at or near \"overlaps\"\n> \n> I don't understand why we're getting a parse error here ...\n\nThe OVERLAPS special SQL-construct is converted into the 'select\noverlaps(...)' function call, which isn't allowed because OVERLAPS is a\nkeyword. *That* is where the parse error is coming from.\n\nTo fix this you simply need to double-quote \"overlaps\" when it's used as a\nstraight function call. See how substring does it in pg_proc.h.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 28 Nov 2000 18:16:47 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: FWD: tinterval vs interval on pgsql-novice " }, { "msg_contents": "Tom Lane wrote:\n> \n> Thomas Lockhart <[email protected]> writes:\n> >> I see it does fail, but I'm at a complete loss to understand why,\n> >> especially given that the first case still works. The grammar looks\n> >> perfectly fine AFAICT. Can you explain what's wrong here?\n> \n> > Here is what I'm planning on doing (already tested, but not committed).\n> > I'm adding some productions to the func_name rule in gram.y to handle\n> > the various \"stringy operators\" such as LIKE and OVERLAPS. These tokens\n> > will also be allowed in the ColLabel rule (as several are already).\n> > This fixes the immediate problem, and makes LIKE handling more\n> > consistent with other special functions. Comments?\n> That all sounds fine, but it doesn't seem to fix the problem I'm looking\n> at, which is that the OVERLAPS production is broken in current sources:\n\nYes it does. When you execute\n\n select (timestamp 'today', interval '1 day')\n OVERLAPS (timestamp 'yesterday', timestamp 'tomorrow');\n\nthis is matched up with an entry in pg_proc which declares an SQL\nlanguage implementation as\n\n 'select overlaps($1, ($1+$2), $3, $4)'\n\nwhich is what fails.\n\nIt may be better to declare this as\n\n 'select ($1, ($1+$2)) overlaps ($3, $4)'\n\nbut that is not what is there now. I've just tested the latter form and\nit seems to work, so I'll include that in my next patchball.\n\n - Thomas\n", "msg_date": "Tue, 28 Nov 2000 17:20:48 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FWD: tinterval vs interval on pgsql-novice" }, { "msg_contents": "> To fix this you simply need to double-quote \"overlaps\" when it's used as a\n> straight function call. See how substring does it in pg_proc.h.\n\nHmm. Why was this required for the substring() example? afaik all of\nthis should be handled (correctly) in the grammar...\n\n - Thomas\n", "msg_date": "Tue, 28 Nov 2000 18:07:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: FWD: tinterval vs interval on pgsql-novice" }, { "msg_contents": "> > To fix this you simply need to double-quote \"overlaps\" when it's used as a\n> > straight function call. See how substring does it in pg_proc.h.\n> Hmm. Why was this required for the substring() example? afaik all of\n> this should be handled (correctly) in the grammar...\n\nI see it now. 
Will look at it...\n\n - Thomas\n", "msg_date": "Tue, 28 Nov 2000 18:20:42 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: FWD: tinterval vs interval on pgsql-novice" } ]
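A sketch of the wrapper style discussed in this thread (the name my_overlaps is hypothetical, and 7.0-era CREATE FUNCTION syntax is assumed):

    -- Reduce the (start, length, start, length) variant to the
    -- four-timestamp form by adding start + length, and use the
    -- OVERLAPS operator syntax so that no keyword has to be
    -- called as a plain function.
    CREATE FUNCTION my_overlaps(timestamp, interval, timestamp, interval)
    RETURNS bool
    AS 'SELECT ($1, ($1 + $2)) OVERLAPS ($3, ($3 + $4))'
    LANGUAGE 'sql';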
[ { "msg_contents": "Hi:\n\nI posted this in pgsql-general last week, but I got no answer. Maybe I\nhave better luck this time?\n\nTIA.\n\n-- \nAlvaro Herrera (<alvherre[@]protecne.cl>)\n\n---------- Forwarded message ----------\nDate: Tue, 21 Nov 2000 17:26:54 -0300 (CLST)\nFrom: Alvaro Herrera <[email protected]>\nTo: [email protected]\nSubject: [GENERAL] DROP TABLE, and children?\n\nHi:\n\nIf I'm creating some inherited tables from a parent,\n\nrwtest=> CREATE TABLE test (col1 INTEGER);\nCREATE\nrwtest=> CREATE TABLE testchld () INHERITS (test);\nCREATE\n\n and then try to drop the parent, it says\n\nrwtest=> DROP TABLE test;\nERROR: Relation '22057' inherits 'test'\n\nIs there some easy way to DROP all children tables? I was looking\nthrough some old archives, I found that neither UPDATE nor DELETE dealt\nwith inheritance (some grammar that had to do with relation_name rather\nthan relation_expr), but couldn't find anything about DROP.\n\nMaybe some query to get all relations that inherit from the one I'm\ntrying to drop?\n\nTIA.\n\n-- \nAlvaro Herrera (<alvherre[@]protecne.cl>)\n\n\n", "msg_date": "Mon, 27 Nov 2000 16:42:29 -0300 (CLST)", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "[GENERAL] DROP TABLE, and children? (fwd)" } ]
[ { "msg_contents": "\n\n Unfortunatly, there is no hard link on beos :=(. link and unlink are \nthere, but link always return \"No such file or directory\".\n\n BTW, What the code in XLogFileInit is supposed to do ? Why not create the \nfile with the right name in the first step ?\n\n I have tried to create the file whith the right name and remove all link/\nunlink. After that, initdb does works but after I have a quite strange \nbehavior : \n * Every select return 0 row (the columns are there, but no datas)\n * Every create or insert crash the backend.\n * If I do nothing, the backend will crash after some minutes\n \n is it related to the first hack ? or is there something else ?\n\n\n cyril\n\n\n>Cyril VELTER <[email protected]> writes:\n>> FATAL 2: InitReopen(logfile 0 seg 0) failed: No such file or directory\n>\n>Does BeOS not support link(2) ?\n>\n>See XLogFileInit() in src/backend/access/transam/xlog.c.\n>\n>\t\t\tregards, tom lane\n>\n\n", "msg_date": "Mon, 27 Nov 2000 21:36:40 +0100", "msg_from": "Cyril VELTER <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Initdb not running on beos " }, { "msg_contents": "Cyril VELTER <[email protected]> writes:\n> Unfortunatly, there is no hard link on beos :=(. link and unlink are \n> there, but link always return \"No such file or directory\".\n\nSomewhere right around here is where I am going to ask why we are\nentertaining the idea of a BeOS port in the first place... it's\nevidently not Unix or even trying hard to be close to Unix.\n\nBad enough to have dozens of #ifdef __BEOS__ already uglifying the code;\nI don't intend to hold still for people saying \"you can't use link()\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Nov 2000 16:09:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Initdb not running on beos " }, { "msg_contents": "Adam Haberlach <[email protected]> writes:\n> On Mon, Nov 27, 2000 at 04:09:46PM -0500, Tom Lane wrote:\n>> Somewhere right around here is where I am going to ask why we are\n>> entertaining the idea of a BeOS port in the first place... it's\n>> evidently not Unix or even trying hard to be close to Unix.\n\n> \tYou've asked this before.\n\n> \tHow does Windows manage to work?\n\nObjection! Point not in evidence!\n\n;-)\n\nSeriously, we do not pretend to run on Windows. It does seem to be\npossible to run Postgres atop Cygwin's Unix emulation atop Windows.\nHowever, that's only because of some superhuman efforts from the\nCygwin team, not because Windows is a Postgres-compatible platform.\n\nAs far as the original question goes, I suspect that a rename() would\nwork just as well as the link()/unlink() combo that's in that code now.\nI would have no objection to a submitted patch along that line. But the\ntarget audience for Postgres is POSIX-compatible platforms, and I do not\nthink that the core group of developers should be spending much time on\nhacking the code to work on platforms that can't meet the POSIX spec.\nIf anyone else wants to make that happen, we'll accept patches ... 
but\ndon't expect us to supply solutions, OK?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 23:44:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Initdb not running on beos " }, { "msg_contents": "* Tom Lane <[email protected]> [001128 20:48] wrote:\n> Adam Haberlach <[email protected]> writes:\n> > On Mon, Nov 27, 2000 at 04:09:46PM -0500, Tom Lane wrote:\n> >> Somewhere right around here is where I am going to ask why we are\n> >> entertaining the idea of a BeOS port in the first place... it's\n> >> evidently not Unix or even trying hard to be close to Unix.\n> \n> > \tYou've asked this before.\n> \n> > \tHow does Windows manage to work?\n> \n> Objection! Point not in evidence!\n> \n> ;-)\n> \n> Seriously, we do not pretend to run on Windows. It does seem to be\n> possible to run Postgres atop Cygwin's Unix emulation atop Windows.\n> However, that's only because of some superhuman efforts from the\n> Cygwin team, not because Windows is a Postgres-compatible platform.\n> \n> As far as the original question goes, I suspect that a rename() would\n> work just as well as the link()/unlink() combo that's in that code now.\n> I would have no objection to a submitted patch along that line. But the\n> target audience for Postgres is POSIX-compatible platforms, and I do not\n> think that the core group of developers should be spending much time on\n> hacking the code to work on platforms that can't meet the POSIX spec.\n> If anyone else wants to make that happen, we'll accept patches ... but\n> don't expect us to supply solutions, OK?\n\nAfaik the atomicity of rename() (the same as a link()/unlink() pair)\nis specified by POSIX.\n\nSorry for jumping in late in the thread, but rename() sure sounds a\nlot better than a link()/unlink() pair, though I'm probably taking it\nout of context.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 28 Nov 2000 20:51:45 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Initdb not running on beos" }, { "msg_contents": "On Mon, Nov 27, 2000 at 04:09:46PM -0500, Tom Lane wrote:\n> Cyril VELTER <[email protected]> writes:\n> > Unfortunately, there is no hard link on BeOS :=(. link and unlink are \n> > there, but link always returns \"No such file or directory\".\n> \n> Somewhere right around here is where I am going to ask why we are\n> entertaining the idea of a BeOS port in the first place... it's\n> evidently not Unix or even trying hard to be close to Unix.\n\n\tYou've asked this before.\n\n\tHow does Windows manage to work?\n\n-- \nAdam Haberlach |\"California's the big burrito, Texas is the big\[email protected] | taco ... and following that theme, Florida is\nhttp://www.newsnipple.com| the big tamale ... and the only tamale that \n'88 EX500 | counts any more.\" -- Dan Rather \n", "msg_date": "Tue, 28 Nov 2000 21:11:34 -0800", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Initdb not running on beos" }, { "msg_contents": "Adam Haberlach writes:\n\n> \tHow does Windows manage to work?\n\nWindows NT has hard links.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Wed, 29 Nov 2000 17:31:12 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Initdb not running on beos" } ]
[ { "msg_contents": "Hi all,\n\nI heard there is a patch which can assign encoding other the database\ndefault. Can anyone tell me where to get it, or where can I get more\ninformation.\n\nThanks\nDave\n", "msg_date": "Tue, 28 Nov 2000 04:47:00 +0800", "msg_from": "Dave <[email protected]>", "msg_from_op": true, "msg_subject": "JDBC charSet patch" } ]
[ { "msg_contents": "\nJust noticed this:\n\npjw=# create table pk1(f1 integer, constraint zzz primary key(f1));\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'zzz' for\ntable 'pk1'\nCREATE\npjw=# create table zzz(f1 integer);\nERROR: Relation 'zzz' already exists\n\nIs there a good reason why the automatically created items do not have a\n'pg_' in front of their names?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 28 Nov 2000 10:09:59 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Constraint names using 'user namespace'?" }, { "msg_contents": "Philip Warner writes:\n\n> pjw=# create table pk1(f1 integer, constraint zzz primary key(f1));\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'zzz' for\n> table 'pk1'\n> CREATE\n> pjw=# create table zzz(f1 integer);\n> ERROR: Relation 'zzz' already exists\n> \n> Is there a good reason why the automatically created items do not have a\n> 'pg_' in front of their names?\n\nOne thing that has always bugged me is why indexes have names at all. I\ncan never think of any. The table name and the attribute\nname(s)/number(s) should suffice.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Tue, 28 Nov 2000 00:33:57 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint names using 'user namespace'?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Just noticed this:\n\n> pjw=# create table pk1(f1 integer, constraint zzz primary key(f1));\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'zzz' for\n> table 'pk1'\n> CREATE\n> pjw=# create table zzz(f1 integer);\n> ERROR: Relation 'zzz' already exists\n\n> Is there a good reason why the automatically created items do not have a\n> 'pg_' in front of their names?\n\nNot a good idea. I think it should probably be pk1_zzz in this case.\n\nIf we do either, it will break the recently submitted pg_dump patch that\nuses the index name as the constraint name. I thought that patch was\nwrongheaded anyway, and would recommend reversing it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 00:24:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint names using 'user namespace'? " }, { "msg_contents": "At 00:24 28/11/00 -0500, Tom Lane wrote:\n>Philip Warner <[email protected]> writes:\n>> Just noticed this:\n>\n>> pjw=# create table pk1(f1 integer, constraint zzz primary key(f1));\n>> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'zzz' for\n>> table 'pk1'\n>> CREATE\n>> pjw=# create table zzz(f1 integer);\n>> ERROR: Relation 'zzz' already exists\n>\n>> Is there a good reason why the automatically created items do not have a\n>> 'pg_' in front of their names?\n>\n>Not a good idea. I think it should probably be pk1_zzz in this case.\n\nThat would at least be consistent, but it's still using 'user namespace'\nfor system-related items, which seems like a bad practice if it can be\navoided. 
I don't mind a longer name, if that is your objection:\npg_constraint_pk1_zzz or some such.\n\n>If we do either, it will break the recently submitted pg_dump patch that\n\nNot too hard to fix.\n\n\n>uses the index name as the constraint name. I thought that patch was\n>wrongheaded anyway, and would recommend reversing it...\n\nI wasn't too keen on it, but could not come up with any good arguments\nagainst it. We need a unified approach to constraints, but in the mean time\nit seems OK. Do you have any more definite objections?\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 28 Nov 2000 16:43:22 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint names using 'user namespace'? " }, { "msg_contents": "Philip Warner <[email protected]> writes:\n>>> Is there a good reason why the automatically created items do not have a\n>>> 'pg_' in front of their names?\n>> \n>> Not a good idea. I think it should probably be pk1_zzz in this case.\n\n> That would at least be consistent, but it's still using 'user namespace'\n> for system-related items, which seems like a bad practice if it can be\n> avoided. I don't mind a longer name, if that is your objection:\n> pg_constraint_pk1_zzz or some such.\n\nNo, my objection is that I couldn't get rid of the index with \"drop index\".\nA pg_ prefix has semantic connotations that I think are inappropriate\nfor a user-table index.\n\nAs for the treading-on-user-namespace issue, we already do that for all\nimplicitly created indexes (see UNIQUE, PRIMARY KEY, etc). I'd prefer\nto treat named constraints consistently with that long-established\npractice until we have a better idea that can be implemented uniformly\nacross that whole set of constructs. (Once we have schemas, for\nexample, it might be practical to give indexes a separate namespace\nfrom tables, which'd help a lot.)\n\n>> uses the index name as the constraint name. I thought that patch was\n>> wrongheaded anyway, and would recommend reversing it...\n\n> I wasn't too keen on it, but could not come up with any good arguments\n> against it. We need a unified approach to constraints, but in the mean time\n> it seems OK. Do you have any more definite objections?\n\nWhat I didn't like was the entirely unjustified assumption that a\nprimary key constraint *has* a name. Introducing a name where none\nexisted is just as bad a sin as dropping one, if not worse (which\ncase do you think is more common?). Given the choice between\nthose two evils, I'll take the one that takes less code, until such\ntime as we can do it right...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 01:03:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint names using 'user namespace'? " }, { "msg_contents": "> As for the treading-on-user-namespace issue, we already do that for all\n> implicitly created indexes (see UNIQUE, PRIMARY KEY, etc). I'd prefer\n> to treat named constraints consistently with that long-established\n> practice until we have a better idea that can be implemented uniformly\n> across that whole set of constructs. 
(Once we have schemas, for\n> example, it might be practical to give indexes a separate namespace\n> from tables, which'd help a lot.)\n\nSurely the best way to do it would be to make the unique and primary key\nimplicitly created indices totally invisible to the user. Or at least add a\n'system' flag to their entries in the pg_indexes table. Create a\npg_constraint table instead that people can use to find constraints.\n\nTo support this, dropping unique and pk constraints would no longer be\npossible (and _should_ no longer be possible) with a CREATE/DROP INDEX\ncommand, and instead would be achieved with a functional ALTER TABLE\nADD/DROP CONSTRAINT statement.\n\nThis seems good in that in the future, the way pk's and uniques are\nimplemented may change (and no longer be indices for some reason), and any\nchanges will be invisible to the user.\n\nAnd while we're at it, add not null and fk constraints to pg_constraint, and\nmake the fk triggers totally invisible to the user, for similar reasons.\n\nI'm not sure what to do with check constraints - they seem fairly clearly\ndeclared as it is...\n\nChris\n\n", "msg_date": "Tue, 28 Nov 2000 14:18:07 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Constraint names using 'user namespace'? " }, { "msg_contents": "At 02:18 PM 11/28/00 +0800, Christopher Kings-Lynne wrote:\n>> As for the treading-on-user-namespace issue, we already do that for all\n>> implicitly created indexes (see UNIQUE, PRIMARY KEY, etc). I'd prefer\n>> to treat named constraints consistently with that long-established\n>> practice until we have a better idea that can be implemented uniformly\n>> across that whole set of constructs. (Once we have schemas, for\n>> example, it might be practical to give indexes a separate namespace\n>> from tables, which'd help a lot.)\n>\n>Surely the best way to do it would be to make the unique and primary key\n>implicitly created indices totally invisible to the user. Or at least add a\n>'system' flag to their entries in the pg_indexes table. Create a\n>pg_constraint table instead that people can use to find constraints.\n\nOracle has a \"user_constraints\" table. Explicitly named constraints have\nthat name entered into the user's namespace, implicitly named constraints\nget stuffed into \"sys\" in the form \"sys.cnnnnn\", where \"nnnnn\" is drawn\nfrom some system sequence.\n\nIn Oracle you NEED the user_constraints table, particularly for RI constraint\nerrors, because their wonderful error messages just give you the RI constraint\nname. 
If you've not given it a meaningful name yourself, which typically one\ndoesn't (\"integer references some_table\"), you need to do a select on the\nuser_constraints table to see what went wrong.\n\nKeep PG's superior error messages no matter what else is done :)\n\nThe above is offered as a datapoint, that's all.\n\n>To support this, dropping unique and pk constraints would no longer be\n>possible (and _should_ no longer be possible) with a CREATE/DROP INDEX\n>command, and instead would be achieved with a functional ALTER TABLE\n>ADD/DROP CONSTRAINT statement.\n\nThis is essentially the case in Oracle, though I suspect you could dig\naround, find the name of the unannounced unique index, and drop it by\nhand if you wanted.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 28 Nov 2000 06:29:49 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Constraint names using 'user namespace'? " } ]
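A sketch of the Oracle lookup described above (user_constraints is a real Oracle data-dictionary view; the generated name SYS_C0012345 is hypothetical):

    -- Decode a system-generated constraint name reported in an
    -- RI error message.
    SELECT constraint_name, constraint_type, table_name
    FROM user_constraints
    WHERE constraint_name = 'SYS_C0012345';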
[ { "msg_contents": "This is just a curiosity.\n\nWhy is the default postgres block size 8192? These days, with caching\nfile systems, high speed DMA disks, hundreds of megabytes of RAM, maybe\neven gigabytes. Surely, 8K is inefficient.\n\nHas anyone done any tests to see if a default 32K block would provide a\nbetter overall performance? 8K seems so small, and 32K looks to be where\nmost x86 operating systems seem to have a sweet spot.\n\nIf someone has the answer off the top of their head, and I'm just being\nstupid, let me have it. However, I have needed to up the block size to\n32K for a text management system and have seen no performance problems.\n(It has not been a scientific experiment, admittedly.)\n\nThis isn't a rant, but my gut tells me that a 32k block size as default\nwould be better, and that smaller deployments should adjust down as\nneeded.\n", "msg_date": "Mon, 27 Nov 2000 18:28:36 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "8192 BLCKSZ ?" }, { "msg_contents": "I've been using a 32k BLCKSZ for months now without any trouble, though I've\nnot benchmarked it to see if it's any faster than one with a BLCKSZ of 8k..\n\n-Mitch\n\n> This is just a curiosity.\n>\n> Why is the default postgres block size 8192? These days, with caching\n> file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe\n> even gigabytes. Surely, 8K is inefficient.\n>\n> Has anyone done any tests to see if a default 32K block would provide a\n> better overall performance? 8K seems so small, and 32K looks to be where\n> most x86 operating systems seem to have a sweet spot.\n>\n> If someone has the answer off the top of their head, and I'm just being\n> stupid, let me have it. However, I have needed to up the block size to\n> 32K for a text management system and have seen no performance problems.\n> (It has not been a scientific experiment, admittedly.)\n>\n> This isn't a rant, but my gut tells me that a 32k block size as default\n> would be better, and that smaller deployments should adjust down as\n> needed.\n>\n\n", "msg_date": "Mon, 27 Nov 2000 16:39:49 -0800", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "I don't believe it's a performance issue, I believe it's that writes to\nblocks greater than 8k cannot be guaranteed 'atomic' by the operating\nsystem. Hence, 32k blocks would break the transactions system. (Or\nsomething like that - am I correct?)\n\nChris\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Mitch Vincent\n> Sent: Tuesday, November 28, 2000 8:40 AM\n> To: mlw; Hackers List\n> Subject: Re: [HACKERS] 8192 BLCKSZ ?\n>\n>\n> I've been using a 32k BLCKSZ for months now without any trouble,\n> though I've\n> not benchmarked it to see if it's any faster than one with a\n> BLCKSZ of 8k..\n>\n> -Mitch\n>\n> > This is just a curiosity.\n> >\n> > Why is the default postgres block size 8192? These days, with caching\n> > file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe\n> > even gigabytes. Surely, 8K is inefficient.\n> >\n> > Has anyone done any tests to see if a default 32K block would provide a\n> > better overall performance? 8K seems so small, and 32K looks to be where\n> > most x86 operating systems seem to have a sweet spot.\n> >\n> > If someone has the answer off the top of their head, and I'm just being\n> > stupid, let me have it. 
However, I have needed to up the block size to\n> 32K for a text management system and have seen no performance problems.\n> (It has not been a scientific experiment, admittedly.)\n>\n> This isn't a rant, but my gut tells me that a 32k block size as default\n> would be better, and that smaller deployments should adjust down as\n> needed.\n>\n", "msg_date": "Mon, 27 Nov 2000 16:39:49 -0800", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "I don't believe it's a performance issue, I believe it's that writes to\nblocks greater than 8k cannot be guaranteed 'atomic' by the operating\nsystem. Hence, 32k blocks would break the transactions system. (Or\nsomething like that - am I correct?)\n\nChris\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Mitch Vincent\n> Sent: Tuesday, November 28, 2000 8:40 AM\n> To: mlw; Hackers List\n> Subject: Re: [HACKERS] 8192 BLCKSZ ?\n>\n>\n> I've been using a 32k BLCKSZ for months now without any trouble,\n> though I've\n> not benchmarked it to see if it's any faster than one with a\n> BLCKSZ of 8k..\n>\n> -Mitch\n>\n> > This is just a curiosity.\n> >\n> > Why is the default postgres block size 8192? These days, with caching\n> > file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe\n> > even gigabytes. Surely, 8K is inefficient.\n> >\n> > Has anyone done any tests to see if a default 32K block would provide a\n> > better overall performance? 8K seems so small, and 32K looks to be where\n> > most x86 operating systems seem to have a sweet spot.\n> >\n> > If someone has the answer off the top of their head, and I'm just being\n> > stupid, let me have it. However, I have needed to up the block size to\n> > 32K for a text management system and have seen no performance problems.\n> > (It has not been a scientific experiment, admittedly.)\n> >\n> > This isn't a rant, but my gut tells me that a 32k block size as default\n> > would be better, and that smaller deployments should adjust down as\n> > needed.\n> >\n>\n", "msg_date": "Tue, 28 Nov 2000 09:14:15 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: 8192 BLCKSZ ?" }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> If it breaks anything in PostgreSQL I sure haven't seen any evidence -- the\n> box this database is running on gets hit pretty hard and I haven't had a\n> single ounce of trouble since I went to 7.0.X\n\nLarger block sizes mean larger blocks in the cache, therefore fewer\nblocks per megabyte. The more granular the cache, the better.\n\n8k is the standard Unix file system disk transfer size. Less than that\nwould be overhead of transferring more info than we actually retrieve\nfrom the kernel. Larger and the cache is less granular.\n\nNo transaction issues because we use fsync.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Nov 2000 20:39:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "\nNothing is guaranteed for anything larger than 512 bytes, and even \nthen you have maybe 1e-13 likelihood of a badly-written block written \nduring a power outage going unnoticed. (That is why the FAQ recommends\nyou invest in a UPS.) If PG crashes, you're covered, regardless of \nblock size. If the OS crashes, you're not. If the power goes out, \nyou're not.\n\nThe block size affects how much is written when you change only a \nsingle record within a block. When you update a two-byte field in a \n100-byte record, do you want to write 32k? (The answer is \"maybe\".)\n\nNathan Myers\[email protected]\n\nOn Tue, Nov 28, 2000 at 09:14:15AM +0800, Christopher Kings-Lynne wrote:\n> I don't believe it's a performance issue, I believe it's that writes to\n> blocks greater than 8k cannot be guaranteed 'atomic' by the operating\n> system. Hence, 32k blocks would break the transactions system. (Or\n> something like that - am I correct?)\n> \n> > From: [email protected] <On Behalf Of Mitch Vincent>\n> > Sent: Tuesday, November 28, 2000 8:40 AM\n> > Subject: Re: [HACKERS] 8192 BLCKSZ ?\n> >\n> > I've been using a 32k BLCKSZ for months now without any trouble,\n> > though I've\n> > not benchmarked it to see if it's any faster than one with a\n> > BLCKSZ of 8k..\n> >\n> > > This is just a curiosity.\n> > >\n> > > Why is the default postgres block size 8192? These days, with caching\n> > > file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe\n> > > even gigabytes. Surely, 8K is inefficient.\n> > >\n> > > Has anyone done any tests to see if a default 32K block would provide a\n> > > better overall performance? 8K seems so small, and 32K looks to be where\n> > > most x86 operating systems seem to have a sweet spot.\n> > >\n> > > If someone has the answer off the top of their head, and I'm just being\n> > > stupid, let me have it. 
However, I have needed to up the block size to\n> > > 32K for a text management system and have seen no performance problems.\n> > > (It has not been a scientific experiment, admittedly.)\n> > >\n> > > This isn't a rant, but my gut tells me that a 32k block size as default\n> > > would be better, and that smaller deployments should adjust down as\n> > > needed.\n", "msg_date": "Mon, 27 Nov 2000 17:49:46 -0800", "msg_from": "Nathan Myers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "At 08:39 PM 11/27/00 -0500, Bruce Momjian wrote:\n>[ Charset ISO-8859-1 unsupported, converting... ]\n>> If it breaks anything in PostgreSQL I sure haven't seen any evidence -- the\n>> box this database is running on gets hit pretty hard and I haven't had a\n>> single ounce of trouble since I went to 7.0.X\n>\n>Larger block sizes mean larger blocks in the cache, therefore fewer\n>blocks per megabyte. The more granular the cache, the better.\n\nWell, true, but when you have 256 MB or a half-gig or more to devote to\nthe cache, you get plenty of blocks, and in pre-PG 7.1 the 8KB limit is a\npain for a lot of folks.\n\nThough the entire discussion's moot with PG 7.1, with the removal of the\ntuple-size limit, it has been unfortunate that the fact that a blocksize\nof up to 32KB can easily be configured at build time hasn't been printed\nin a flaming-red oversized font on the front page of www.postgresql.org.\n\nTHE ENTIRE WORLD seems to believe that PG suffers from a hard-wired 8KB\nlimit on tuple size, rather than simply defaulting to that limit. When\nI tell the heathens that the REAL limit is 32KB, they're surprised, amazed,\npleased etc.\n\nThis default has unfairly contributed to the poor reputation PG has suffered\nfrom for so long due to widespread ignorance that it's only a default, easily\nchanged.\n\nFor instance the November Linux Journal has a column on PG, favorable but\nmentions the 8KB limit as though it's absolute. Tim Perdue's article on\nPHP Builder implied the same when he spoke of PG 7.1 removing the limit.\n\nAgain, PG 7.1 removes the issue entirely, but it is ironic that so many\npeople had heard that PG suffered from a hard-wired 8KB limit on tuple\nlength...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 27 Nov 2000 18:25:34 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "> At 08:39 PM 11/27/00 -0500, Bruce Momjian wrote:\n> >[ Charset ISO-8859-1 unsupported, converting... ]\n> >> If it breaks anything in PostgreSQL I sure haven't seen any evidence -- the\n> >> box this database is running on gets hit pretty hard and I haven't had a\n> >> single ounce of trouble since I went to 7.0.X\n> >\n> >Larger block sizes mean larger blocks in the cache, therefore fewer\n> >blocks per megabyte. The more granular the cache, the better.\n> \n> Well, true, but when you have 256 MB or a half-gig or more to devote to\n> the cache, you get plenty of blocks, and in pre-PG 7.1 the 8KB limit is a\n> pain for a lot of folks.\n\nAgreed. The other problem is that most people have 2-4MB of cache, so a\n32k default would be too big for them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Nov 2000 21:30:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "If it breaks anything in PostgreSQL I sure haven't seen any evidence -- the\nbox this database is running on gets hit pretty hard and I haven't had a\nsingle ounce of trouble since I went to 7.0.X\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <[email protected]>\nTo: \"Hackers List\" <[email protected]>\nSent: Monday, November 27, 2000 5:14 PM\nSubject: RE: [HACKERS] 8192 BLCKSZ ?\n\n\n> I don't believe it's a performance issue, I believe it's that writes to\n> blocks greater than 8k cannot be guaranteed 'atomic' by the operating\n> system. Hence, 32k blocks would break the transactions system. (Or\n> something like that - am I correct?)\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Mitch Vincent\n> > Sent: Tuesday, November 28, 2000 8:40 AM\n> > To: mlw; Hackers List\n> > Subject: Re: [HACKERS] 8192 BLCKSZ ?\n> >\n> >\n> > I've been using a 32k BLCKSZ for months now without any trouble,\n> > though I've\n> > not benchmarked it to see if it's any faster than one with a\n> > BLCKSZ of 8k..\n> >\n> > -Mitch\n> >\n> > > This is just a curiosity.\n> > >\n> > > Why is the default postgres block size 8192? These days, with caching\n> > > file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe\n> > > even gigabytes. Surely, 8K is inefficient.\n> > >\n> > > Has anyone done any tests to see if a default 32K block would provide a\n> > > better overall performance? 8K seems so small, and 32K looks to be where\n> > > most x86 operating systems seem to have a sweet spot.\n> > >\n> > > If someone has the answer off the top of their head, and I'm just being\n> > > stupid, let me have it. However, I have needed to up the block size to\n> > > 32K for a text management system and have seen no performance problems.\n> > > (It has not been a scientific experiment, admittedly.)\n> > >\n> > > This isn't a rant, but my gut tells me that a 32k block size as default\n> > > would be better, and that smaller deployments should adjust down as\n> > > needed.\n> > >\n>\n>\n", "msg_date": "Mon, 27 Nov 2000 18:39:15 -0800", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "At 09:30 PM 11/27/00 -0500, Bruce Momjian wrote:\n\n>> Well, true, but when you have 256 MB or a half-gig or more to devote to\n>> the cache, you get plenty of blocks, and in pre-PG 7.1 the 8KB limit is a\n>> pain for a lot of folks.\n>\n>Agreed. The other problem is that most people have 2-4MB of cache, so a\n>32k default would be too big for them.\n\nI've always been fine with the default, and in fact agree with it. The\nOpenACS project recommends a 16KB default for PG 7.0, but that's only so\nwe can hold reasonable-sized lzText strings in forum tables, etc.\n\nI was only lamenting the fact that the world seems to have the impression\nthat it's not a default, but rather a hard-wired limit.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 27 Nov 2000 18:40:07 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" 
}, { "msg_contents": "\"Christopher Kings-Lynne\" <[email protected]> writes:\n> I don't believe it's a performance issue, I believe it's that writes to\n> blocks greater than 8k cannot be guaranteed 'atomic' by the operating\n> system. Hence, 32k blocks would break the transactions system.\n\nAs Nathan remarks nearby, it's hard to tell how big a write can be\nassumed atomic, unless you have considerable knowledge of your OS and\nhardware. However, on traditional Unix filesystems (BSD-derived) it's\na pretty certain bet that writes larger than 8K will *not* be atomic,\nsince 8K is the filesystem block size. You don't even need any crash\nscenario to see why not: just consider running your disk down to zero\nfree space. If there's one block left when you try to add a\nmulti-block page to your table, you are left with a corrupted page,\nnot an unwritten page.\n\nNot sure about the wild-and-wooly world of Linux filesystems...\nanybody know what the allocation unit is on the popular Linux FSes?\n\nMy feeling is that 8K is an entirely reasonable size now that we have\nTOAST, and so there's no longer much interest in changing the default\nvalue of BLCKSZ.\n\nIn theory, I think, WAL should reduce the importance of page writes\nbeing atomic --- but it still seems like a good idea to ensure that\nthey are as atomic as we can make them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 00:38:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ? " }, { "msg_contents": "\nOn Mon, 27 Nov 2000, mlw wrote:\n\n> This is just a curiosity.\n> \n> Why is the default postgres block size 8192? These days, with caching\n> file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe\n> even gigabytes. Surely, 8K is inefficient.\n\n I think it is a pretty wild assumption to say that 32k is more efficient\nthan 8k. Considering how blocks are used, 32k may be in fact quite a bit\nslower than 8k blocks.\n\n\nTom\n\n", "msg_date": "Mon, 27 Nov 2000 22:38:35 -0800 (PST)", "msg_from": "Tom Samplonius <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "> At 09:30 PM 11/27/00 -0500, Bruce Momjian wrote:\n> \n> >> Well, true, but when you have 256 MB or a half-gig or more to devote to\n> >> the cache, you get plenty of blocks, and in pre-PG 7.1 the 8KB limit is a\n> >> pain for a lot of folks.\n> >\n> >Agreed. The other problem is that most people have 2-4MB of cache, so a\n> >32k default would be too big for them.\n> \n> I've always been fine with the default, and in fact agree with it. The\n> OpenACS project recommends a 16KB default for PG 7.0, but that's only so\n> we can hold reasonable-sized lzText strings in forum tables, etc.\n> \n> I was only lamenting the fact that the world seems to have the impression\n> that it's not a default, but rather a hard-wired limit.\n\nAgreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Nov 2000 01:53:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "On Tue, Nov 28, 2000 at 12:38:37AM -0500, Tom Lane wrote:\n> Not sure about the wild-and-wooly world of Linux filesystems...\n> anybody know what the allocation unit is on the popular Linux FSes?\n\nIt rather depends on the filesystem. 
Current ext2 (the most common)\nsystems default to 1K on small partitions and 4K otherwise. IIRC,\nreiserfs uses 4K blocks in a tree structure that includes tail merging\nwhich makes the question of block size tricky. Linux 2.3.x passes all\nfile I/O through its page cache, which deals in 4K pages on most 32-bit\narchitectures.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/", "msg_date": "Tue, 28 Nov 2000 12:32:49 -0600", "msg_from": "Bruce Guenter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "On Tue, Nov 28, 2000 at 12:38:37AM -0500, Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <[email protected]> writes:\n> > I don't believe it's a performance issue, I believe it's that writes to\n> > blocks greater than 8k cannot be guaranteed 'atomic' by the operating\n> > system. Hence, 32k blocks would break the transactions system.\n> \n> As Nathan remarks nearby, it's hard to tell how big a write can be\n> assumed atomic, unless you have considerable knowledge of your OS and\n> hardware. \n\nNot to harp on the subject, but even if you _do_ know a great deal\nabout your OS and hardware, you _still_ can't assume any write is\natomic.\n\nTo give an idea of what is involved, consider that modern disk \ndrives routinely re-order writes, by themselves. You think you\nhave asked for a sequential write of 8K bytes, or 16 sectors,\nbut the disk might write the first and last sectors first, and \nthen the middle sectors in random order. A block of all zeroes\nmight not be written at all, but just noted in the track metadata.\n\nMost disks have a \"feature\" that they report the write complete\nas soon as it is in the RAM cache, rather than after the sectors\nare on the disk. (It's a \"feature\" because it makes their\nbenchmarks come out better.) It can usually be turned off, but \ndifferent vendors have different ways to do it. Have you turned\nit off on your production drives?\n\nIn the event of a power outage, the drive will stop writing in\nmid-sector. If you're lucky, that sector would have a bad checksum\nif you tried to read it. If the half-written sector happens to \ncontain track metadata, you might have a bigger problem. \n\n----\nThe short summary is: for power outage or OS-crash recovery purposes,\nthere is no such thing as atomicity. This is why backups and \ntransaction logs are important.\n\n\"Invest in a UPS.\" Use a reliable OS, and operate it in a way that\ndoesn't stress it. Even a well-built OS will behave oddly when \nresources are badly stressed. (That the oddities may be documented\ndoesn't really help much.)\n\nFor performance purposes, it may be more or less efficient to group \nwrites into 4K, 8K, or 32K chunks. That's not a matter of database \natomicity, but of I/O optimization. It can only confuse people to \nuse \"atomicity\" in that context.\n\nNathan Myers\[email protected]\n\n", "msg_date": "Tue, 28 Nov 2000 13:01:34 -0800", "msg_from": "Nathan Myers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "Nathan Myers <[email protected]> writes:\n> In the event of a power outage, the drive will stop writing in\n> mid-sector.\n\nReally? Any competent drive firmware designer would've made sure that\ncan't happen. The drive has to detect power loss well before it\nactually loses control of its actuators, because it's got to move\nthe heads to the safe landing zone. 
If it checks for power loss and\nstarts that shutdown process between sector writes, never in the middle\nof one, voila: atomic writes.\n\nOf course, there's still no guarantee if you get a hardware failure\nor sector write failure (recovery from the write failure might well\ntake longer than the drive has got). But guarding against a plain\npower-failure scenario is actually simpler than doing it the wrong\nway.\n\nBut, as you say, customary page sizes are bigger than a sector, so\nthis is all moot for our purposes anyway :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 16:24:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ? " }, { "msg_contents": "On Tue, Nov 28, 2000 at 04:24:34PM -0500, Tom Lane wrote:\n> Nathan Myers <[email protected]> writes:\n> > In the event of a power outage, the drive will stop writing in\n> > mid-sector.\n> \n> Really? Any competent drive firmware designer would've made sure that\n> can't happen. The drive has to detect power loss well before it\n> actually loses control of its actuators, because it's got to move\n> the heads to the safe landing zone. If it checks for power loss and\n> starts that shutdown process between sector writes, never in the middle\n> of one, voila: atomic writes.\n\nI used to think that way too, because that's how I would design a drive.\n(Anyway that would still only give you 512-byte-atomic writes, which \nisn't enough.)\n\nTalking to people who build them was a rude awakening. They have\napparatus to yank the head off the drive and lock it away when the \npower starts to go down, and it will happily operate in mid-write.\n(It's possible that some drives are made the way Tom describes, but \nevidently not the commodity stuff.)\n\nThe level of software-development competence, and of reliability \nengineering, that I've seen among disk drive firmware maintainers\ndistresses me whenever I think about it. A disk drive is best\nconsidered as throwaway cache image of your real medium.\n\n> Of course, there's still no guarantee if you get a hardware failure\n> or sector write failure (recovery from the write failure might well\n> take longer than the drive has got). But guarding against a plain\n> power-failure scenario is actually simpler than doing it the wrong\n> way.\n\nIf only the disk-drive vendors (and buyers!) thought that way...\n\nNathan Myers\[email protected]\n\n", "msg_date": "Tue, 28 Nov 2000 13:50:18 -0800", "msg_from": "Nathan Myers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?" }, { "msg_contents": "On Tue, 28 Nov 2000, Tom Lane wrote:\n\n> Nathan Myers <[email protected]> writes:\n> > In the event of a power outage, the drive will stop writing in\n> > mid-sector.\n> \n> Really? Any competent drive firmware designer would've made sure that\n> can't happen. The drive has to detect power loss well before it\n> actually loses control of its actuators, because it's got to move the\n> heads to the safe landing zone. If it checks for power loss and\n> starts that shutdown process between sector writes, never in the\n> middle of one, voila: atomic writes.\n\nIn principle, that is correct. 
However, the SGI XFS people\nhave apparently found otherwise -- what can happen is that\nthe drive itself has enough power to complete a write, but\nthat the disk/controller buffers lose power and so you end\nup writing a (perhaps partial) block of zeroes.\n\nMatthew.\n\n", "msg_date": "Wed, 29 Nov 2000 13:09:05 +0000 (GMT)", "msg_from": "Matthew Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ? " }, { "msg_contents": "Matthew Kirkwood wrote:\n> \n> On Tue, 28 Nov 2000, Tom Lane wrote:\n> \n> > Nathan Myers <[email protected]> writes:\n> > > In the event of a power outage, the drive will stop writing in\n> > > mid-sector.\n> >\n> > Really? Any competent drive firmware designer would've made sure that\n> > can't happen. The drive has to detect power loss well before it\n> > actually loses control of its actuators, because it's got to move the\n> > heads to the safe landing zone. If it checks for power loss and\n> > starts that shutdown process between sector writes, never in the\n> > middle of one, voila: atomic writes.\n> \n> In principle, that is correct. However, the SGI XFS people\n> have apparently found otherwise -- what can happen is that\n> the drive itself has enough power to complete a write, but\n> that the disk/controller buffers lose power and so you end\n> up writing a (perhaps partial) block of zeroes.\n\nI have worked on a few systems that intend to take a hard power failure\ngracefully. It is a very hard thing to do, with a lot of specialized\ncircuitry.\n\nWhile it is nice to think about, on normal computer systems one cannot\ndepend on the system shutting down gracefully on a hard power loss\nwithout a smart UPS and a daemon to shut down the system.\n\nDisk write sizes do not matter one bit here. Unless\nthe computer can know it is about to lose power, it cannot halt its\noperations and enter a safe mode.\n\nThe whole \"pull the plug\" mentality is silly. Unless the system hardware\nis specifically designed to manage this and proper software is in place, it\ncannot be done, and any \"compliance\" you think you see is simply luck.\n\nAny computer that has important data should have a smart UPS and a\ndaemon to manage it. \n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Wed, 29 Nov 2000 08:26:16 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8192 BLCKSZ ?" } ]
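To make the practical upshot of the above concrete, here is a minimal sketch
of a synchronized page write (a hypothetical helper, not code from any of the
systems discussed): write() alone only queues data in OS buffers, and fsync()
only pushes it as far as the drive -- a write cache that acknowledges early,
as described above, defeats even this.

#include <unistd.h>

/* Hypothetical helper: push one page toward stable storage. */
int write_page_sync(int fd, const char *buf, int len)
{
    if (write(fd, buf, len) != len)
        return -1;      /* short write, e.g. the disk-full case above */
    return fsync(fd);   /* flushes OS buffers; a drive cache that lies
                           may still acknowledge before the platter */
}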
[ { "msg_contents": "proc.c has the following code --- unchanged since Postgres95 ---\nin HandleDeadlock():\n\n /* ---------------------\n * Check to see if we've been awoken by anyone in the interim.\n *\n * If we have we can return and resume our transaction -- happy day.\n * Before we are awoken the process releasing the lock grants it to\n * us so we know that we don't have to wait anymore.\n *\n * Damn these names are LONG! -mer\n * ---------------------\n */\n if (IpcSemaphoreGetCount(MyProc->sem.semId, MyProc->sem.semNum) ==\n IpcSemaphoreDefaultStartValue)\n {\n UnlockLockTable();\n return;\n }\n\n /*\n * you would think this would be unnecessary, but...\n *\n * this also means we've been removed already. in some ports (e.g.,\n * sparc and aix) the semop(2) implementation is such that we can\n * actually end up in this handler after someone has removed us from\n * the queue and bopped the semaphore *but the test above fails to\n * detect the semaphore update* (presumably something weird having to\n * do with the order in which the semaphore wakeup signal and SIGALRM\n * get handled).\n */\n if (MyProc->links.prev == INVALID_OFFSET ||\n MyProc->links.next == INVALID_OFFSET)\n {\n UnlockLockTable();\n return;\n }\n\nWell, the reason control can get to the \"apparently unnecessary\" part is\nnot some weird portability glitch; it is that the first test is WRONG.\nIpcSemaphoreGetCount() calls semctl(GETNCNT), which returns not the\ncurrent value of the semaphore as this code expects, but the number of\nwaiters on the semaphore. Since the process doing this test is the\nonly one that'll ever wait on that semaphore, it is impossible for\nIpcSemaphoreGetCount() to return anything but zero, and so the first\nif() has never ever succeeded in the entire history of Postgres.\n\nIpcSemaphoreGetCount is used nowhere else. I see no particularly good\nreason to have it at all, and certainly not to have it with a name so\neasily mistaken for IpcSemaphoreGetValue. It's about to be toast.\n\nNext question is whether to recode the first test \"correctly\" with\nIpcSemaphoreGetValue(), or just remove it. Since we clearly have done\njust fine with testing our internal link pointers to detect removal\nfrom the queue, I'm inclined to remove the first test and thus save an\nunnecessary kernel call. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Nov 2000 18:30:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Ancient lock bug figured out" } ]
[ { "msg_contents": "Would you please consider bringing the contributed package into the\nofficial distribution. \n\nI found that trying to compile it with the RedHat RPM based installation\nwas a monumental pain. I gave up.\n\nIts useful, people ask about it on the list, so why not?\n\nFor comparison, checkout what MSSQL 7 can do with text indexing.\n\nRegards\n\nJohn\n\n", "msg_date": "Tue, 28 Nov 2000 14:08:00 +1300", "msg_from": "\"John Huttley\" <[email protected]>", "msg_from_op": true, "msg_subject": "full text indexing" }, { "msg_contents": "John Huttley wrote:\n> Would you please consider bringing the contributed package into the\n> official distribution.\n \n> I found that trying to compile it with the RedHat RPM based installation\n> was a monumental pain. I gave up.\n \n> Its useful, people ask about it on the list, so why not?\n\nIf there's enough demand, I'd consider doing a 'postgresql-fti' RPM as\npart of the RPMset -- but, unless there are really good reasons for it\ncoming out of contrib, it should stay there.\n\nMaybe asking 'Why isn't the contrib full-text-indexer not in the main\ntree?' would be more productive on that front.\n\nHow did you attempt to build it under the RPM install? I assume you had\nthe postgresql-devel package installed, and the include paths set\nproperly.... Of course, most of what is in contrib assumes a full source\ntree is lying around (argh)....ie, it's wanting to include\nMakefile.global in its Makefile. And, on the RPM dist, Makefile.global\nisn't (yet) packaged.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 27 Nov 2000 20:21:55 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: full text indexing" } ]
[ { "msg_contents": ">\n> Maybe asking 'Why isn't the contrib full-text-indexer not in the main\n> tree?' would be more productive on that front.\n\nWell, yes. Why isn't it?\n\nFull text indexing should be just as much a feature as any other key feature in\nPG.\nWith the advent of unlimited file and record lengths in 7.1, this would be a good\ntime to\ninclude it.\n\nFTI is particularly useful in the context of web content engines.\n\n\n> How did you attempt to build it under the RPM install? I assume you had\n> the postgresql-devel package installed, and the include paths set\n> properly.... Of course, most of what is in contrib assumes a full source\n> tree is lying around (argh)....ie, it's wanting to include\n> Makefile.global in its Makefile. And, on the RPM dist, Makefile.global\n> isn't (yet) packaged.\n\n\nYes, I have the devel RPM, but FTI couldn't find its include files. Or its\nlibraries. Or something.\n\nBuilding from a source tree has always been better for me. The catch is mixing\nthat with the RPMS,\nwhich put things in unholy locations.\nIts very hard (for me at least) to update an RPM version with a version compiled\nfrom the source.\n\nI recently updated my PG system from 6.5.3 to 7.0.3, still RPMs, (another fun\njob) and have not tried to compile FTI\nsubsequently.\n\nHowever if I tried hard enough I'm sure I could fix it. For the moment, I'm on\nanother job so I'm not worrying.\n\nRegards\n\nJohn\n\n\n", "msg_date": "Tue, 28 Nov 2000 14:51:43 +1300", "msg_from": "\"John Huttley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Full text Indexing -out of contrib and into main.." }, { "msg_contents": "At 02:51 PM 11/28/00 +1300, John Huttley wrote:\n>>\n>> Maybe asking 'Why isn't the contrib full-text-indexer not in the main\n>> tree?' would be more productive on that front.\n>\n>Well, yes. Why isn't it?\n>\n>Full text indexing should be just as much a feature as any other key feature in\n>PG.\n>With the advent of unlimited file and record lengths in 7.1, this would be a good\n>time to\n>include it.\n>\n>FTI is particularly useful in the context of web content engines.\n\nWell ... it's pretty inadequate, actually. That might be one reason it's only\nin contrib.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 27 Nov 2000 18:27:04 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into\n main.." }, { "msg_contents": "John Huttley wrote:\n> > Maybe asking 'Why isn't the contrib full-text-indexer not in the main\n> > tree?' would be more productive on that front.\n \n> Well, yes. Why isn't it?\n\nI'm hoping to see the answer to that one myself, as that is outside my\nscope currently. I just RPMize things... Although, I didn't intend for\nmy statement to appear as harsh as it does -- for that I apologize.\n\n> Yes, I have the devel RPM, but FTI couldn't find its include files. Or its\n> libraries. Or something.\n\nMakefile.global. Tried it here. The RPMset hasn't heretofore needed\nMakefile.global. I may package that, amongst other stuff necessary to\nbuild certain things in the devel package -- once I find out how to go\nabout doing it.\n\n> Building from a source tree has always been better for me. 
The catch is mixing\n> that with the RPMS,\n> which put things in unholy locations.\n> Its very hard (for me at least) to update an RPM version with a version compiled\n> from the source.\n\nI recommend completely removing the RPM version and installing from\nsource rather than trying to upgrade from an RPM distribution to the\nfrom-source distribution. Or just install the next RPM version. Let\nRPM work the headaches for you. If you want to run from a 'from-source'\nbuild, then nix the RPMset altogether and don't worry about it\nafterward.\n\nAlthough I am going to consider a pre-built set of contribs -- most\nnotably, geodistance is likely to find its way into an RPM in the\nfuture. I just haven't decided whether to split out to individual\ncontribs or to just make a single 'postgresql-contrib' subpackage. I'm\nopen to suggestions.\n\nYou can, however, install a source-tree preconfigured and built for the\nRPM modifications by installing the _source_ RPM, and then issuing, as\nroot, 'rpm -bi postgresql.spec' from within the /usr/src/redhat/SPECS\ndir. You will then have a source tree primed for building whatever in\n/usr/src/redhat/BUILD/postgresql-x.y.z (where x.y.z is the version, of\ncourse). You will need python-devel installed in order to do that,\nhowever.\n\nHTH.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 27 Nov 2000 21:50:35 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into main.." }, { "msg_contents": "> >Well, yes. Why isn't it?\n> >\n> >Full text indexing should be just as much a feature as any other key feature in\n> >PG.\n> >With the advent of unlimited file and record lengths in 7.1, this would be a good\n> >time to\n> >include it.\n> >\n> >FTI is particularly useful in the context of web content engines.\n> \n> Well ... it's pretty inadequate, actually. That might be one reason it's only\n> in contrib.\n\nOK, can someone collect suggestions, add the code, and integrate it for\n7.1?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Nov 2000 21:53:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into main.." }, { "msg_contents": "On Mon, 27 Nov 2000, Bruce Momjian wrote:\n\n> > >Well, yes. Why isn't it?\n> > >\n> > >Full text indexing should be just as much a feature as any other key feature in\n> > >PG.\n> > >With the advent of unlimited file and record lengths in 7.1, this would be a good\n> > >time to\n> > >include it.\n> > >\n> > >FTI is particularly useful in the context of web content engines.\n> > \n> > Well ... it's pretty inadequate, actually. That might be one reason it's only\n> > in contrib.\n> \n> OK, can someone collect suggestions, add the code, and integrate it for\n> 7.1?\n\ntoo late in cycle ... \n\n\n", "msg_date": "Mon, 27 Nov 2000 23:06:01 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into main.." }, { "msg_contents": "At 11:06 PM 11/27/00 -0400, The Hermit Hacker wrote:\n>On Mon, 27 Nov 2000, Bruce Momjian wrote:\n\n>> OK, can someone collect suggestions, add the code, and integrate it for\n>> 7.1?\n>\n>too late in cycle ... 
\n\nYes...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 27 Nov 2000 21:06:03 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into\n main.." }, { "msg_contents": " I modified the FTI trigger for my own use a while ago (indexes whole\nwords, eliminates duplicate a few other things) -- I'm not sure if it would\ndo anyone any good but you're welcome to it. To whom should I send it?\n\n-Mitch\n\n----- Original Message -----\nFrom: \"The Hermit Hacker\" <[email protected]>\nTo: \"Bruce Momjian\" <[email protected]>\nCc: \"Don Baccus\" <[email protected]>; \"John Huttley\" <[email protected]>;\n<[email protected]>\nSent: Monday, November 27, 2000 7:06 PM\nSubject: Re: [HACKERS] Full text Indexing -out of contrib and into main..\n\n\n> On Mon, 27 Nov 2000, Bruce Momjian wrote:\n>\n> > > >Well, yes. Why isn't it?\n> > > >\n> > > >Full text indexing should be just as much a feature as any other key\nfeature in\n> > > >PG.\n> > > >With the advent of unlimited file and record lengths in 7.1, this\nwould be a good\n> > > >time to\n> > > >include it.\n> > > >\n> > > >FTI is particularly useful in the context of web content engines.\n> > >\n> > > Well ... it's pretty inadequate, actually. That might be one reason\nit's only\n> > > in contrib.\n> >\n> > OK, can someone collect suggestions, add the code, and integrate it for\n> > 7.1?\n>\n> too late in cycle ...\n>\n>\n>\n\n", "msg_date": "Mon, 27 Nov 2000 22:26:10 -0800", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into main.." }, { "msg_contents": "> > > Maybe asking 'Why isn't the contrib full-text-indexer not in the main\n> > > tree?' would be more productive on that front.\n> > Well, yes. Why isn't it?\n\nI believe that it is appropriate for contrib/ because it is a good demo\nof FTI-like capabilities. But nothing more, yet. For at least a couple\nof reasons:\n\n1) It generates the \"index\" as a table, not a PostgreSQL index or\nindex-like thing.\n\n2) It has a hardcoded list of non-indexed words. This should come from a\ntable, to allow it to be tuned to the application requirements.\n\nComments?\n\n - Thomas\n", "msg_date": "Tue, 28 Nov 2000 06:29:49 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into main.." }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> I modified the FTI trigger for my own use a while ago (indexes whole\n> words, eliminates duplicate a few other things) -- I'm not sure if it would\n> do anyone any good but you're welcome to it. To whom should I send it?\n\nIs full-word optional or mandatory? It has to be an option.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Nov 2000 01:56:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into main.." }, { "msg_contents": "> > OK, can someone collect suggestions, add the code, and integrate it for\n> > 7.1?\n> \n> too late in cycle ... 
\n\n\nHow about first thing for 7.2 then? While it lies in limbo,\nits never going to get the attention it deserves.\n\nRegards\n\n\n", "msg_date": "Tue, 28 Nov 2000 20:53:42 +1300", "msg_from": "\"john huttley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into main.." }, { "msg_contents": "\n> I believe that it is appropriate for contrib/ because it is a good demo\n> of FTI-like capabilities. But nothing more, yet. For at least a couple\n> of reasons:\n>\n> 1) It generates the \"index\" as a table, not a PostgreSQL index or\n> index-like thing.\n>\n> 2) It has a hardcoded list of non-indexed words. This should come from a\n> table, to allow it to be tuned to the application requirements.\n>\n> Comments?\n>\n> - Thomas\n>\n\nIn general..\na) Considering that I was coding up the same thing with triggers and such,\nthings could only get better.\n\nb) Check out MSSQL 7's capabilities and weep.\n\nc) It would be a start. One its in the tree, it gets used more, gets\nimproved..\n\nIt would be a while yet before 7.2 starts, plenty of time then to develop\nit further.\n\nRegards\n\nJohn\n\n\n\n", "msg_date": "Tue, 28 Nov 2000 21:09:38 +1300", "msg_from": "\"john huttley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into main.." }, { "msg_contents": "john huttley wrote:\n> \n> > I believe that it is appropriate for contrib/ because it is a good demo\n> > of FTI-like capabilities. But nothing more, yet. For at least a couple\n> > of reasons:\n> >\n> > 1) It generates the \"index\" as a table, not a PostgreSQL index or\n> > index-like thing.\n> >\n> > 2) It has a hardcoded list of non-indexed words. This should come from a\n> > table, to allow it to be tuned to the application requirements.\n> >\n> > Comments?\n> >\n> > - Thomas\n> >\n> \n> In general..\n> a) Considering that I was coding up the same thing with triggers and such,\n> things could only get better.\n\nAFAIK, the one in contrib _is_ the same thing coded up with triggers and\nsuch ;)\n\n> b) Check out MSSQL 7's capabilities and weep.\n\nBTW, have you studied MSSQL enough to tell me if it has a\nseparate/standalone \n(as a process) fti engine or just another index type.\n\nI have been contemplating about implementing FTI for postgres for some\ntime and my \ncurrent plan would be to implement a out-of-process fti engine (API +\nsample \nimplementation, in the spirit of PostgreSQLs extensibility) that could\npostpone \nthe actual indexing but still help with queries even for not yet fully\nindexed stuff.\n\nWill probably need some choreography but essential for high performance.\n\nYou generally don't want to wait for all index entries of an inverted\nindex to be saved.\n\nAlso the thing should be more general than the one in contrib , being\nable to index \nboth fields and full records and support functional indexes.\n\n\nIs there a way to make PostgresQL optimiser aware of the\nselectivity/cost of function, \nso that it can do the right thing for a query like\n\nSELECT * FROM ARTICLES\n WHERE ADATE BETWEEN YESTERDAY AND TOMORROW\n AND ARTICLES.FTI_MATCHES('(CAT & DOG) ! PRESIDENT')\n\nIt would be almost automatic if functions could return sets and then be\nused like\n\nSELECT * FROM ARTICLE\n WHERE ADATE BETWEEN YESTERDAY AND TOMORROW\n AND ARTICLE_ID = ARTICLE.FTI_MATCHING_IDS('(CAT & DOG) ! 
PRESIDENT')\n\nand somehow the optimiser would know that it can join on the returned\nids but this \nis probably not the case ;)\n\n> c) It would be a start. One its in the tree, it gets used more, gets\n> improved..\n\nBut, it is not a _real_ full text index, just a postgresql sample\napplication that \nimplements a full text index using an sql database.\n\n----------\nHannu\n", "msg_date": "Tue, 28 Nov 2000 11:27:51 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text Indexing -out of contrib and into main.." } ]
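For concreteness, the word-extraction step that such a trigger-based scheme
performs can be sketched in a few lines of C (hypothetical and simplified: a
real trigger would insert (word, id) pairs into the index table instead of
printing them, and, per the point above, should read its stop words from a
table rather than a hardcoded list):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

static const char *stopwords[] = { "the", "and", "of", NULL };

static int is_stopword(const char *w)
{
    int i;
    for (i = 0; stopwords[i] != NULL; i++)
        if (strcmp(w, stopwords[i]) == 0)
            return 1;
    return 0;
}

/* Emit one (word, row-id) pair per indexable word in the text. */
void index_text(const char *text, unsigned long rowid)
{
    char word[64];
    int n = 0;

    for (;; text++) {
        if (isalnum((unsigned char) *text) && n < (int) sizeof(word) - 1) {
            word[n++] = (char) tolower((unsigned char) *text);
        } else if (n > 0) {
            word[n] = '\0';
            if (!is_stopword(word))
                printf("word=%s id=%lu\n", word, rowid);
            n = 0;
        }
        if (*text == '\0')
            break;
    }
}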
[ { "msg_contents": "Hello all,\n\nI am new to postgreSQL. When I read the documents, I find out the Postmaster\ndaemon actual spawns a new backend server process to serve a new client\nrequest. Why not use threads instead? Is that just for a historical reason,\nor some performance/implementation concern?\n\nThank you very much.\nJunfeng\n\n", "msg_date": "Mon, 27 Nov 2000 23:42:24 -0600", "msg_from": "\"Junfeng Zhang\" <[email protected]>", "msg_from_op": true, "msg_subject": "Using Threads?" }, { "msg_contents": "> I am new to postgreSQL. When I read the documents, I find out the Postmaster\n> daemon actual spawns a new backend server process to serve a new client\n> request. Why not use threads instead? Is that just for a historical reason,\n> or some performance/implementation concern?\n\nBoth. Not all systems supported by PostgreSQL have a standards-compliant\nthreading implementation (even more true for the systems PostgreSQL has\nsupported over the years).\n\nBut there are performance and reliability considerations too. A\nthread-only server is likely more brittle than a process-per-client\nimplementation, since all threads share the same address space.\nCorruption in one server might more easily propagate to other servers.\n\nThe time to start a backend is quite often small compared to the time\nrequired for a complete session, so imho the differences in absolute\nspeed are not generally significant.\n\n - Thomas\n", "msg_date": "Mon, 04 Dec 2000 06:42:58 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "I maybe wrong but I think that PGSQL is not threaded mostly due to\nhistorical reasons. It looks to me like the source has developed over\ntime where much of the source is not reentrant with many global variables\nthroughout. In addition, the parser is generated by flex which\ncan be made to generate reentrant code but is still not thread safe b/c\nglobal variables are used.\n\nThat being said, I experimented with the 7.0.2 source and came up with a\nmultithreaded backend for PGSQL which uses Solaris Threads. It seems to\nwork, but I drifted very far from the original source. I\nhad to hack flex to generate threadsafe code as well. I use it as a\nlinked library with my own fe<->be protocol. This ended up being much much\nmore than I bargained for and looking back would probably not have tried\nhad I known any better.\n\n\nMyron Scott\n\n\nOn Mon, 27 Nov 2000, Junfeng Zhang wrote:\n\n> Hello all,\n> \n> I am new to postgreSQL. When I read the documents, I find out the Postmaster\n> daemon actual spawns a new backend server process to serve a new client\n> request. Why not use threads instead? Is that just for a historical reason,\n> or some performance/implementation concern?\n> \n> Thank you very much.\n> Junfeng\n> \n\n", "msg_date": "Mon, 4 Dec 2000 00:20:20 -0800 (PST)", "msg_from": "Myron Scott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "On Mon, 27 Nov 2000, Junfeng Zhang wrote:\n\n> Hello all,\n> \n> I am new to postgreSQL. When I read the documents, I find out the Postmaster\n> daemon actual spawns a new backend server process to serve a new client\n> request. Why not use threads instead? Is that just for a historical reason,\n> or some performance/implementation concern?\n\n It's a little a historical reason, but not only. 
PostgreSQL allows the use of\n user-defined modules (functions); this means that a bad module or a\n bug in the core code crashes one backend only, while the postmaster keeps \n running. In the thread model, a crash takes down all running backends. There \n is a big difference in the locking method too. \n\n\t\t\t\tKarel\n\n", "msg_date": "Mon, 4 Dec 2000 12:23:34 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "On Mon, 27 Nov 2000, Junfeng Zhang wrote:\n\n> Hello all,\n> \n> I am new to postgreSQL. When I read the documents, I find out the\n> Postmaster daemon actual spawns a new backend server process to serve\n> a new client request. Why not use threads instead? Is that just for a\n> historical reason, or some performance/implementation concern?\n\nSeveral reasons, 'historical' probably being the strongest right now\n... since PostgreSQL was never designed for threading, it's about as\n'un-thread-safe' as they come, and cleaning that up will/would be a\ncomplete nightmare (should eventually be done, mind you) ...\n\nThe other is stability ... right now, if one backend drops away, for\nwhatever reason, it doesn't take down the whole system ... if you ran\nthings as one process, and that one process died, you just lost your whole\nsystem ...\n\n\n", "msg_date": "Mon, 4 Dec 2000 08:55:53 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "All the major operating systems should have POSIX threads implemented.\nActually this could be made configurable--multiple threads or one thread.\n\nA thread-only server is unsafe, I agree. Maybe the following model would be a\nlittle better: several servers, each multi-threaded. Every server can\nsupport a maximum number of requests simultaneously. If anything bad\nhappens, it is limited to that server. \n\nThe downside of the process model is not the startup time. It is about\nkernel resources and context-switch cost. Processes consume much more\nkernel resources than threads, and have a much higher cost per context\nswitch. The scalability of the thread model is much better than that of\nthe process model.\n\n-Junfeng\n\nOn Mon, 4 Dec 2000, Thomas Lockhart wrote:\n\n> > I am new to postgreSQL. When I read the documents, I find out the Postmaster\n> > daemon actual spawns a new backend server process to serve a new client\n> > request. Why not use threads instead? Is that just for a historical reason,\n> > or some performance/implementation concern?\n> \n> Both. Not all systems supported by PostgreSQL have a standards-compliant\n> threading implementation (even more true for the systems PostgreSQL has\n> supported over the years).\n> \n> But there are performance and reliability considerations too. A\n> thread-only server is likely more brittle than a process-per-client\n> implementation, since all threads share the same address space.\n> Corruption in one server might more easily propagate to other servers.\n> \n> The time to start a backend is quite often small compared to the time\n> required for a complete session, so imho the differences in absolute\n> speed are not generally significant.\n> \n> - Thomas\n> \n", "msg_date": "Mon, 4 Dec 2000 09:31:14 -0600 (CST)", "msg_from": "Junfeng Zhang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" 
}, { "msg_contents": "Myron - \nPutting aside the fork/threads discussion for a moment (the reasons,\nboth historical and other, such as inter-backend protection, are well\ncovered in the archives), the work you did sounds like an interesting\nexperiment in code redesign. Would you be willing to release the hacked\ncode somewhere for others to learn from? Hacking flex to generate\nthread-safe code is of itself interesting, and the question about PG and\nthreads comes up so often, that an example of why it's not a simple task\nwould be useful.\n\nRoss\n\nOn Mon, Dec 04, 2000 at 12:20:20AM -0800, Myron Scott wrote:\n> I maybe wrong but I think that PGSQL is not threaded mostly due to\n> historical reasons. It looks to me like the source has developed over\n> time where much of the source is not reentrant with many global variables\n> throughout. In addition, the parser is generated by flex which\n> can be made to generate reentrant code but is still not thread safe b/c\n> global variables are used.\n> \n> That being said, I experimented with the 7.0.2 source and came up with a\n> multithreaded backend for PGSQL which uses Solaris Threads. It seems to\n> work, but I drifted very far from the original source. I\n> had to hack flex to generate threadsafe code as well. I use it as a\n> linked library with my own fe<->be protocol. This ended up being much much\n> more than I bargained for and looking back would probably not have tried\n> had I known any better.\n> \n> \n> Myron Scott\n> \n", "msg_date": "Mon, 4 Dec 2000 11:33:07 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "\nif we were to do this in steps, I beliee that one of the major problems\nirght now is that we have global variables up the wazoo ... my\n'thread-awareness' is limited, as I've yet to use them, so excuse my\nignorance ... if we got patches that cleaned up the code in stages, moving\ntowards a cleaner code base, then we could get it into the main source\ntree ... ?\n\n On Mon, 4 Dec 2000, Ross J. Reedstrom wrote:\n\n> Myron - \n> Putting aside the fork/threads discussion for a moment (the reasons,\n> both historical and other, such as inter-backend protection, are well\n> covered in the archives), the work you did sounds like an interesting\n> experiment in code redesign. Would you be willing to release the hacked\n> code somewhere for others to learn from? Hacking flex to generate\n> thread-safe code is of itself interesting, and the question about PG and\n> threads comes up so often, that an example of why it's not a simple task\n> would be useful.\n> \n> Ross\n> \n> On Mon, Dec 04, 2000 at 12:20:20AM -0800, Myron Scott wrote:\n> > I maybe wrong but I think that PGSQL is not threaded mostly due to\n> > historical reasons. It looks to me like the source has developed over\n> > time where much of the source is not reentrant with many global variables\n> > throughout. In addition, the parser is generated by flex which\n> > can be made to generate reentrant code but is still not thread safe b/c\n> > global variables are used.\n> > \n> > That being said, I experimented with the 7.0.2 source and came up with a\n> > multithreaded backend for PGSQL which uses Solaris Threads. It seems to\n> > work, but I drifted very far from the original source. I\n> > had to hack flex to generate threadsafe code as well. I use it as a\n> > linked library with my own fe<->be protocol. 
This ended up being much much\n> > more than I bargained for and looking back would probably not have tried\n> > had I known any better.\n> > \n> > \n> > Myron Scott\n> > \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 4 Dec 2000 14:59:52 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "On Mon, Nov 27, 2000 at 11:42:24PM -0600, Junfeng Zhang wrote:\n> I am new to postgreSQL. When I read the documents, I find out the Postmaster\n> daemon actual spawns a new backend server process to serve a new client\n> request. Why not use threads instead? Is that just for a historical reason,\n> or some performance/implementation concern?\n\nOnce all the questions regarding \"why not\" have been answered, it would\nbe good to also ask \"why use threads?\" Do they simplify the code? Do\nthey offer significant performance or efficiency gains? What do they\ngive, other than being buzzword compliant?\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/", "msg_date": "Mon, 4 Dec 2000 14:28:10 -0600", "msg_from": "Bruce Guenter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> Why not use threads instead? Is that just for a\n>> historical reason, or some performance/implementation concern?\n\n> Several reasons, 'historical' probably being the strongest right now\n> ... since PostgreSQL was never designed for threading, its about as\n> 'un-thread-safe' as they come, and cleaning that up will/would be a\n> complete nightmare (should eventually be done, mind you) ...\n\n> The other is stability ... right now, if one backend drops away, for\n> whatever reason, it doesn't take down the whole system ... if you ran\n> things as one process, and that one process died, you just lost your whole\n> system ...\n\nPortability is another big reason --- using threads would create lots\nof portability headaches for platforms that had no threads or an\nincompatible threads library. (Not to mention buggy threads libraries,\nnot-quite-thread-safe libc routines, yadda yadda.)\n\nThe amount of work required looks far out of proportion to the payoff...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Dec 2000 15:29:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads? 
" }, { "msg_contents": "Adam Haberlach writes:\n> Typically (on a well-written OS, at least), the spawning of a thread\n> is much cheaper then the creation of a new process (via fork()).\n\nThis would be well worth testing on some representative sample\nsystems.\n\nWithin the past year and a half at one of my gigs some coworkers did\ntests on various platforms (Irix, Solaris, a few variations of Linux\nand *BSDs) and concluded that in fact the threads implementations were\noften *slower* than using processes for moving and distributing the\nsorts of data that they were playing with.\n\nWith copy-on-write and interprocess pipes that are roughly equivalent\nto memcpy() speeds it was determined for that application that the\nbest way to split up tasks was fork()ing and dup().\n\nAs always, your mileage will vary, but the one thing that consistently\namazes me on the Un*x like operating systems is that usually the\nprogrammatically simplest way to implement something has been\noptimized all to heck.\n\nA lesson that comes hard to those of us who grew up on MS systems.\n\nDan\n", "msg_date": "Mon, 4 Dec 2000 14:30:31 -0800 (PST)", "msg_from": "Dan Lyke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "On Mon, Dec 04, 2000 at 02:28:10PM -0600, Bruce Guenter wrote:\n> On Mon, Nov 27, 2000 at 11:42:24PM -0600, Junfeng Zhang wrote:\n> > I am new to postgreSQL. When I read the documents, I find out the Postmaster\n> > daemon actual spawns a new backend server process to serve a new client\n> > request. Why not use threads instead? Is that just for a historical reason,\n> > or some performance/implementation concern?\n> \n> Once all the questions regarding \"why not\" have been answered, it would\n> be good to also ask \"why use threads?\" Do they simplify the code? Do\n> they offer significant performance or efficiency gains? What do they\n> give, other than being buzzword compliant?\n\n\tTypically (on a well-written OS, at least), the spawning of a thread\nis much cheaper then the creation of a new process (via fork()). Also,\nsince everything in a group of threads (I'll call 'em a team) shares the\nsame address space, there can be some memory overhead savings.\n\n-- \nAdam Haberlach |\"California's the big burrito, Texas is the big\[email protected] | taco ... and following that theme, Florida is\nhttp://www.newsnipple.com| the big tamale ... and the only tamale that \n'88 EX500 | counts any more.\" -- Dan Rather \n", "msg_date": "Mon, 4 Dec 2000 15:17:00 -0800", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "On Mon, Dec 04, 2000 at 03:17:00PM -0800, Adam Haberlach wrote:\n> \tTypically (on a well-written OS, at least), the spawning of a thread\n> is much cheaper then the creation of a new process (via fork()).\n\nUnless I'm mistaken, the back-end is only forked when starting a new\nconnection, in which case the latency of doing the initial TCP tri-state\nand start-up queries is much larger than any process creation cost. On\nLinux 2.2.16 on a 500MHz PIII, I can do the fork/exit/wait sequence in\nabout 164us. 
On the same server, I can make/break a PostgreSQL\nconnection in about 19,000us (with 0% CPU idle, about 30% CPU system).\nEven if we can manage to get a thread for free, and assume that the fork\nfrom postmaster takes more than 164us, it won't make a big difference\nonce the other latencies are worked out.\n\n> Also, since everything in a group of threads (I'll call 'em a team)\n\nActually, you call them a process. That is the textbook definition.\n\n> shares the\n> same address space, there can be some memory overhead savings.\n\nOnly slightly. All of the executable and libraries should already be\nshared, as will all non-modified data. If the data is modified by the\nthreads, you'll need separate copies for each thread anyways, so the net\ndifference is small.\n\nI'm not denying there would be a difference. Compared to separate\nprocesses, threads are more efficient. Doing a context switch between\nthreads means there are no PTE invalidations, which makes them quicker\nthan between processes. Creation would be a bit faster due to just\nlinking in the VM to a new thread rather than marking it all as COW.\nThe memory savings would come from reduced fragmentation of the modified\ndata (if you have 1 byte modified on each of 100 pages, the thread would\ngrow by a few K, compared to 400K for processes). I'm simply arguing\nthat the differences don't appear to be significant compared to the\nother costs involved.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/", "msg_date": "Mon, 4 Dec 2000 17:17:04 -0600", "msg_from": "Bruce Guenter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "On Mon, Dec 04, 2000 at 02:30:31PM -0800, Dan Lyke wrote:\n> Adam Haberlach writes:\n> > Typically (on a well-written OS, at least), the spawning of a thread\n> > is much cheaper then the creation of a new process (via fork()).\n> This would be well worth testing on some representative sample\n> systems.\n\nUsing the following program for timing process creation and cleanup:\n\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <sys/wait.h>\n\nint main() {\n int i;\n int pid;\n for (i=0; i<100000; ++i) {\n pid=fork();\n if(pid==-1) exit(1);\n if(!pid) _exit(0);\n waitpid(pid,0,0);\n }\n exit(0);\n} \n\nAnd using the following program for timing thread creation and cleanup:\n\n#include <stdlib.h>\n#include <pthread.h>\n\nvoid *threadfn(void *arg) { pthread_exit(0); return 0; }\n\nint main() {\n int i;\n pthread_t thread;\n for (i=0; i<100000; ++i) {\n if (pthread_create(&thread, 0, threadfn, 0)) exit(1);\n if (pthread_join(thread, 0)) exit(1);\n }\n exit(0);\n} \n\nOn a relatively unloaded 500MHz PIII running Linux 2.2, the fork test\nprogram took a minimum of 16.71 seconds to run (167us per\nfork/exit/wait), and the thread test program took a minimum of 12.10\nseconds to run (121us per pthread_create/exit/join). I use the minimums\nbecause those would be the runs where the tasks were least interfered\nwith by other tasks. This amounts to a roughly 25% speed improvement\nfor threads over processes, for the null-process case.\n\nIf I add the following lines before the for loop:\n char* m;\n m=malloc(1024*1024);\n memset(m,0,1024*1024);\nThe cost for doing the fork balloons to 240us, whereas the cost for\ndoing the thread is constant. So, the cost of marking the pages as COW\nis quite significant (using those numbers, 73us/MB).\n\nSo, forking a process with lots of data is expensive. 
However, most of\nthe PostgreSQL data is in a SysV IPC shared memory segment, which\nshouldn't affect the fork numbers.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/", "msg_date": "Mon, 4 Dec 2000 17:57:29 -0600", "msg_from": "Bruce Guenter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "I would love to distribute this code to anybody who wants it. Any\nsuggestions for a good place? However, calling the\nwork a code redesign is a bit generous. This was more like a\nbrute force hack. I just moved all the connection related global\nvariables to\na thread local \"environment variable\" and bypassed much of the postmaster\ncode. \n\nI did this so I could port my app which was originally designed for\nOracle OCI and Java. My app uses very few SQL statements but uses them\nover and over. I wanted true prepared statements linked to Java with JNI.\nI got both as well as batched transaction writes ( which was more relevant\nbefore WAL). \n\nIn my situation, threads seemed much more flexible to implement, and I\nprobably could\nnot have done the port without it.\n\n\nMyron \n\nOn Mon, 4 Dec 2000, Ross J. Reedstrom wrote:\n\n> Myron - \n> Putting aside the fork/threads discussion for a moment (the reasons,\n> both historical and other, such as inter-backend protection, are well\n> covered in the archives), the work you did sounds like an interesting\n> experiment in code redesign. Would you be willing to release the hacked\n> code somewhere for others to learn from? Hacking flex to generate\n> thread-safe code is of itself interesting, and the question about PG and\n> threads comes up so often, that an example of why it's not a simple task\n> would be useful.\n> \n> Ross\n> \n\n", "msg_date": "Mon, 4 Dec 2000 20:06:55 -0800 (PST)", "msg_from": "Myron Scott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "\nOn Mon, 4 Dec 2000, Junfeng Zhang wrote:\n\n> All the major operating systems should have POSIX threads implemented.\n> Actually this can be configurable--multithreads or one thread.\n\n I don't understand this. The OS can be configured for one thread? How\nwould that be any of use?\n\n> Thread-only server is unsafe, I agree. Maybe the following model can be a\n> little better. Several servers, each is multi-threaded. Every server can\n> support a maximum number of requests simultaneously. If anything bad\n> happends, it is limited to that server. \n\n There is no difference. If anything bad happens with the current\nmulti-process server, all the postgres backends shutdown because the\nshared memory may be corrupted.\n\n> The cons side of processes model is not the startup time. It is about\n> kernel resource and context-switch cost. Processes consume much more\n> kernel resource than threads, and have a much higher cost for context\n> switch. The scalability of threads model is much better than that of\n> processes model.\n\n What kernel resources do a process use? There is some VM mapping\noverhead, a process table entry, and a file descriptor table. It is\npossible to support thousands of processes today. For instance,\nftp.freesoftware.com supports up to 5000 FTP connections using a slightly\nmodified ftpd (doesn't use inetd anymore). That means with 5000 users\nconnected, that works out to 5000 processes active. 
Amazing but true.\n\n Some OSes (Linux is the main one) implement threads as pseudo processes.\nLinux threads are processes with a shared address space and file\ndescriptor table.\n\n Context switch cost for threads can be lower if you are switching to a\nthread in the same process. That of course assumes that all context\nswitches will occur within the same process, or the Linux\neverything-is-a-process model isn't used.\n\n> -Junfeng\n\nTom\n\n", "msg_date": "Mon, 4 Dec 2000 20:43:24 -0800 (PST)", "msg_from": "Tom Samplonius <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "Bruce Guenter <[email protected]> writes:\n> [ some very interesting datapoints ]\n>\n> So, forking a process with lots of data is expensive. However, most of\n> the PostgreSQL data is in a SysV IPC shared memory segment, which\n> shouldn't affect the fork numbers.\n\nI believe (but don't have numbers to prove it) that most of the present\nbackend startup time has *nothing* to do with thread vs process\noverhead. Rather, the primary startup cost has to do with initializing\ndatastructures, particularly the system-catalog caches. A backend isn't\ngoing to get much real work done until it's slurped in a useful amount\nof catalog cache --- for example, until it's got the cache entries for\npg_class and the indexes thereon, it's not going to accomplish anything\nat all.\n\nSwitching to a thread model wouldn't help this cost a bit, unless\nwe also switch to a shared cache model. That's not necessarily a win\nwhen you consider the increased costs associated with cross-backend\nor cross-thread synchronization needed to access or update the cache.\nAnd if it *is* a win, we could get most of the same benefit in the\nmultiple-process model by keeping the cache in shared memory.\n\nThe reason that a new backend has to do all this setup work for itself,\nrather than inheriting preloaded cache entries via fork/copy-on-write\nfrom the postmaster, is that the postmaster isn't part of the ring of\nprocesses that can access the database files directly. That was done\noriginally for robustness reasons: since the PM doesn't have to deal\nwith database access, cache invalidation messages, etc etc yadda yadda,\nit is far simpler and less likely to crash than a real backend. If we\nconclude that shared syscache is not a reasonable idea, it might be\ninteresting to look into making the PM into a full-fledged backend\nthat maintains a basic set of cache entries, so that these entries are\nimmediately available to new backends. But we'd have to take a real\nhard look at the implications for system robustness/crash recovery.\n\nIn any case I think we're a long way away from the point where switching\nto threads would make a big difference in connection startup time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Dec 2000 00:06:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads? " }, { "msg_contents": "On Mon, Dec 04, 2000 at 08:43:24PM -0800, Tom Samplonius wrote:\n> Some OSes (Linux is the main one) implement threads as pseudo processes.\n> Linux threads are processes with a shared address space and file\n> descriptor table.\n> \n> Context switch cost for threads can be lower if you are switching to a\n> thread in the same process. 
That of course assumes that all context\n> switches will occur within the same process, or the Linux\n> everything-is-a-process model isn't used.\n\nActually, context switch cost between threads is low on Linux as well,\nsince the CPU's VM mappings don't get invalidated. This means that its\npage tables won't get reloaded, which is one of the large costs involved\nin context switches. Context switches between processes takes (with no\nsignificant VM) about 900 cycles (1.8us) on a 450MHz Celery. I would\nexpect thread switch time to be slightly lower than that, and context\nswitches between processes with large VMs would be much larger just due\nto the cost of reloading the page tables.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/", "msg_date": "Tue, 5 Dec 2000 10:09:25 -0600", "msg_from": "Bruce Guenter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "> All the major operating systems should have POSIX threads implemented.\n> Actually this can be configurable--multithreads or one thread.\n> \n> Thread-only server is unsafe, I agree. Maybe the following model can be a\n> little better. Several servers, each is multi-threaded. Every server can\n> support a maximum number of requests simultaneously. If anything bad\n> happends, it is limited to that server. \n> \n> The cons side of processes model is not the startup time. It is about\n> kernel resource and context-switch cost. Processes consume much more\n> kernel resource than threads, and have a much higher cost for context\n> switch. The scalability of threads model is much better than that of\n> processes model.\n\nMy question here is how much do we really context switch. We do quite a\nbit of work for each query, and I don't see us giving up the CPU very\noften, as would be the case for a GUI where each thread does a little\nwork and goes to sleep.\n\nAlso, as someone pointed out, the postmaster doesn't connect to the\ndatabase, so there isn't much COW overhead. The big win is that all the\ntext page are already in memory and shared by all backends. They can't\nmodify those.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Dec 2000 00:19:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "-- Start of PGP signed section.\n> On Mon, Nov 27, 2000 at 11:42:24PM -0600, Junfeng Zhang wrote:\n> > I am new to postgreSQL. When I read the documents, I find out the Postmaster\n> > daemon actual spawns a new backend server process to serve a new client\n> > request. Why not use threads instead? Is that just for a historical reason,\n> > or some performance/implementation concern?\n> \n> Once all the questions regarding \"why not\" have been answered, it would\n> be good to also ask \"why use threads?\" Do they simplify the code? Do\n> they offer significant performance or efficiency gains? What do they\n> give, other than being buzzword compliant?\n\nGood question. I have added this to the developers FAQ:\n\n---------------------------------------------------------------------------\n\n14) Why don't we use threads in the backend?\n\nThere are several reasons threads are not used:\n\n Historically, threads were unsupported and buggy. \n An error in one backend can corrupt other backends. 
\n Speed improvements using threads are small compared to the\n remaining backend startup time. \n The backend code would be more complex. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Dec 2000 00:20:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "> Adam Haberlach writes:\n> > Typically (on a well-written OS, at least), the spawning of a thread\n> > is much cheaper then the creation of a new process (via fork()).\n> \n> This would be well worth testing on some representative sample\n> systems.\n> \n> Within the past year and a half at one of my gigs some coworkers did\n> tests on various platforms (Irix, Solaris, a few variations of Linux\n> and *BSDs) and concluded that in fact the threads implementations were\n> often *slower* than using processes for moving and distributing the\n> sorts of data that they were playing with.\n> \n> With copy-on-write and interprocess pipes that are roughly equivalent\n> to memcpy() speeds it was determined for that application that the\n> best way to split up tasks was fork()ing and dup().\n\nThis brings up a good point. Threads are mostly useful when you have\nmultiple processes that need to share lots of data, and the interprocess\noverhead is excessive. Because we already have that shared memory area,\nthis benefit of threads doesn't buy us much. We sort of already have\ndone the _shared_ part, and the addition of sharing our data pages is\nnot much of a win.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Dec 2000 00:22:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "> \n> On Mon, 4 Dec 2000, Junfeng Zhang wrote:\n> \n> > All the major operating systems should have POSIX threads implemented.\n> > Actually this can be configurable--multithreads or one thread.\n> \n> I don't understand this. The OS can be configured for one thread? How\n> would that be any of use?\n> \n> > Thread-only server is unsafe, I agree. Maybe the following model can be a\n> > little better. Several servers, each is multi-threaded. Every server can\n> > support a maximum number of requests simultaneously. If anything bad\n> > happends, it is limited to that server. \n> \n> There is no difference. If anything bad happens with the current\n> multi-process server, all the postgres backends shutdown because the\n> shared memory may be corrupted.\n\nYes. Are we adding reliability with per-process backends. I think so\nbecause things are less likely to go haywire, and we are more likely to\nbe able to clean things up in a failure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Dec 2000 00:25:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" 
}, { "msg_contents": "> Bruce Guenter <[email protected]> writes:\n> > [ some very interesting datapoints ]\n> >\n> > So, forking a process with lots of data is expensive. However, most of\n> > the PostgreSQL data is in a SysV IPC shared memory segment, which\n> > shouldn't affect the fork numbers.\n> \n> I believe (but don't have numbers to prove it) that most of the present\n> backend startup time has *nothing* to do with thread vs process\n> overhead. Rather, the primary startup cost has to do with initializing\n> datastructures, particularly the system-catalog caches. A backend isn't\n> going to get much real work done until it's slurped in a useful amount\n> of catalog cache --- for example, until it's got the cache entries for\n> pg_class and the indexes thereon, it's not going to accomplish anything\n> at all.\n> \n> Switching to a thread model wouldn't help this cost a bit, unless\n> we also switch to a shared cache model. That's not necessarily a win\n> when you consider the increased costs associated with cross-backend\n> or cross-thread synchronization needed to access or update the cache.\n> And if it *is* a win, we could get most of the same benefit in the\n> multiple-process model by keeping the cache in shared memory.\n\nOf course, we would also have to know which database was being used\nnext. Each database's system catalog can be different.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Dec 2000 00:26:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> There is no difference. If anything bad happens with the current\n>> multi-process server, all the postgres backends shutdown because the\n>> shared memory may be corrupted.\n\n> Yes. Are we adding reliability with per-process backends.\n\nYes, we are: the postmaster forces a system-wide restart only if a\nbackend actually coredumps, or exits with elog(STOP). If a backend\ncurls up and dies with elog(FATAL), we assume it's a positive sign\nthat it was able to detect the error ;-), and keep plugging.\n\nNow you might argue about whether any particular error case has been\nmisclassified, and I'm sure some are. But my point is that in a\nmultithread environment we couldn't risk allowing the other threads to\nkeep running after an elog(FATAL), either. Threads don't have *any*\nprotection against screwups in other threads.\n\nA closely related point: how often do you hear of postmaster crashes?\nThey don't happen, as a rule. That's because the postmaster is (a)\nsimple and (b) decoupled from the backends. To convert backend process\nlaunch into backend thread launch, the postmaster would have to live\nin the same process space as the backends, and that means any backend\ncoredump would take down the postmaster too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 09 Dec 2000 00:59:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads? " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> There is no difference. If anything bad happens with the current\n> >> multi-process server, all the postgres backends shutdown because the\n> >> shared memory may be corrupted.\n> \n> > Yes. 
Are we adding reliability with per-process backends.\n> \n> Yes, we are: the postmaster forces a system-wide restart only if a\n> backend actually coredumps, or exits with elog(STOP). If a backend\n> curls up and dies with elog(FATAL), we assume it's a positive sign\n> that it was able to detect the error ;-), and keep plugging.\n\nIt would be interesting to have one backend per database, and have\nall sessions for that database running as threads. That would remove the\nstartup problem, but each thread would have to have private copies of\nsome system tuples it modifies.\n\nAnother idea would be to have a file that holds the standard system\ntuples normally loaded. These files would sit in each database, and be\nread on startup. If one backend modifies one of these tuples, it can\ndelete the file and have the next backend recreate it. We already have\nsome of these for hard-wired tuples. This would improve startup time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Dec 2000 10:18:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "For anyone interested,\n\nI have posted my multi-threaded version of PostgreSQL here.\n\nhttp://www.sacadia.com/mtpg.html\n\nIt is based on 7.0.2 and the TAO CORBA ORB which is here.\n\nhttp://www.cs.wustl.edu/~schmidt/TAO.html\n\nMyron Scott\[email protected]\n\n\n", "msg_date": "Mon, 01 Jan 2001 21:32:11 -0800", "msg_from": "Myron Scott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "\nOn Mon, 1 Jan 2001, Myron Scott wrote:\n\n> For anyone interested,\n> \n> I have posted my multi-threaded version of PostgreSQL here.\n> \n> http://www.sacadia.com/mtpg.html\n\n How do you solve locks? Via the original IPC, or did you rewrite it to use mutexes (etc.)?\n\n\t\t\t\tKarel \n\n", "msg_date": "Tue, 2 Jan 2001 08:58:07 +0100 (CET)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "spinlocks rewritten to mutex_\nlocktable uses sema_\nsome cond_ in bufmgr.c\n\nMyron\n\n\nKarel Zak wrote:\n\n> On Mon, 1 Jan 2001, Myron Scott wrote:\n> \n> \n>> For anyone interested,\n>> \n>> I have posted my multi-threaded version of PostgreSQL here.\n>> \n>> http://www.sacadia.com/mtpg.html\n> \n> \n> How do you solve locks? Via the original IPC, or did you rewrite it to use mutexes (etc.)?\n> \n> \t\t\t\tKarel \n\n", "msg_date": "Tue, 02 Jan 2001 00:09:01 -0800", "msg_from": "Myron Scott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "Myron Scott wrote:\n> \n> For anyone interested,\n> \n> I have posted my multi-threaded version of PostgreSQL here.\n> \n> http://www.sacadia.com/mtpg.html\n> \n> It is based on 7.0.2 and the TAO CORBA ORB which is here.\n> \n> http://www.cs.wustl.edu/~schmidt/TAO.html\n> \n> Myron Scott\n> [email protected]\n\nSounds cool. Have you done any benchmarking? I would love to compare the\nthreaded version against the process version and see if there is any\ndifference. I suspect there will not be much, but I have been surprised\nmore than once by unexpected behavior of IPC.\n\nI have not used Solaris threads; are they substantially different from\npthreads? 
(Pthreads on Solaris should be based on Solaris threads,\ncorrect?)\n\nI would love to see the #ifdef __cplusplus guards put in all the postgres\nheaders.\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Tue, 02 Jan 2001 08:18:20 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" } ]
[ { "msg_contents": "Hi,\n\nDoes anyone have a snippet of postgres SQL that will create a database with\n_everything_ that postgres supports? (ie. types, functions, constraints,\noperators, everything...)\n\nI just need it for testing SQL dump code.\n\nIf not, then I'll create one myself and post it back here.\n\nThanks,\n\nChris\n\n--\nChristopher Kings-Lynne\nFamily Health Network (ACN 089 639 243)\n\n", "msg_date": "Tue, 28 Nov 2000 14:11:08 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": true, "msg_subject": "Example Database Script" }, { "msg_contents": "At 14:11 28/11/00 +0800, Christopher Kings-Lynne wrote:\n>Hi,\n>\n>Does anyone have a snippet of postgres SQL that will create a database with\n>_everything_ that postgres supports? (ie. types, functions, constraints,\n>operators, everything...)\n\nI tend to use my own databases (because the have lots of data), and the\nregression database (because it has a lot of PG things). When you find\nsomething missing from the regression DB, you might consider adding it...\n\nAlso, bear in mind that pg_dump does not work properly with the regression\nDB - something about column order in the CREATE TABLE vs. COPY statements,\nbut there is only the one error.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 28 Nov 2000 17:47:17 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Example Database Script" }, { "msg_contents": "> At 14:11 28/11/00 +0800, Christopher Kings-Lynne wrote:\n> >Hi,\n> >\n> >Does anyone have a snippet of postgres SQL that will create a database with\n> >_everything_ that postgres supports? (ie. types, functions, constraints,\n> >operators, everything...)\n> \n> I tend to use my own databases (because the have lots of data), and the\n> regression database (because it has a lot of PG things). When you find\n> something missing from the regression DB, you might consider adding it...\n> \n> Also, bear in mind that pg_dump does not work properly with the regression\n> DB - something about column order in the CREATE TABLE vs. COPY statements,\n> but there is only the one error.\n\nThe real test is to run the regression test, dump the database, reload\nit into another db, do a dump of that db, and compare the original\nregression dump with that one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 28 Nov 2000 02:00:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Example Database Script" } ]
[ { "msg_contents": "I have Red Hat Linux 6.2 , PostgreSQL 7.0.2.\nCould anybody help me to configure ident daemon using the file\npg_ident.conf\n\n\n\nThanks in advance,\n\nanuradha\n\n\n", "msg_date": "Tue, 28 Nov 2000 13:51:20 +0530", "msg_from": "anuradha <[email protected]>", "msg_from_op": true, "msg_subject": "pg_ident.conf" } ]
[ { "msg_contents": "Hi,\n\n how long is PG7.1 already in beta testing? can it be released before Christmas day?\n can PG7.1 will recover database from system crash?\n \n Thanks,\n \n XuYifeng\n \n \n", "msg_date": "Tue, 28 Nov 2000 16:36:29 +0800", "msg_from": "\"xuyifeng\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta testing version" } ]
[ { "msg_contents": "It's been committed into the cvs repository. The easiest thing to do is to\nuse CVS. I can't remember if it was posted direct to me, or to the patches\nlist.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support Officer, Maidstone Borough Council\nEmail: [email protected]\nWWW: http://www.maidstone.gov.uk\nAll views expressed within this email are not the views of Maidstone Borough\nCouncil\n\n\n-----Original Message-----\nFrom: Dave [mailto:[email protected]]\nSent: Monday, November 27, 2000 8:47 PM\nTo: [email protected]\nSubject: [HACKERS] JDBC charSet patch\n\n\nHi all,\n\nI heard there is a patch which can assign encoding other the database\ndefault. Can anyone tell me where to get it, or where can I get more\ninformation.\n\nThanks\nDave\n", "msg_date": "Tue, 28 Nov 2000 08:52:11 -0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: JDBC charSet patch" } ]
[ { "msg_contents": "\n-----Original Message-----\n发件人: xuyifeng <[email protected]>\n收件人: [email protected] <[email protected]>\n日期: 2000年11月28日 16:22\n主题: [HACKERS] beta testing version\n\n\n>Hi,\n>\n> how long is PG7.1 already in beta testing? can it be released before\nChristmas day?\n\nyou may get pre-beta version via cvs.\npretty stable, I've tested.\n\n> can PG7.1 will recover database from system crash?\nyes, I've tested via kill use -KILL signal. but no more.\n>\n> Thanks,\n>\n>XuYifeng\n>\n\n", "msg_date": "Tue, 28 Nov 2000 16:54:42 +0800", "msg_from": "\"He weiping\" <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?hz-gb-2312?B?fns7WDg0fn06IFtIQUNLRVJTXSBiZXRhIHRlc3RpbmcgdmVyc2lvbg==?=" } ]
[ { "msg_contents": "I'm using cvs-current, and testing those build-in function\naccording to the docs. \nbut it seems the \"lpad\", \"rpad\" don't work,\nwhen I type:\nselect lpad('laser', 4, 'a');\nin psql, the result is still \n'laser', the same with 'rpad',\nIs it a bug or I'm mis-understaning the lpad and/or rpad functions?\n\nRegards\n\nLaser\n\n\n\n\n\n\n\nI'm using cvs-current, and testing those \nbuild-in function\naccording to the docs. \nbut it seems the \"lpad\", \n\"rpad\" don't work,\nwhen I type:\nselect lpad('laser', 4, 'a');\nin psql, the result is still \n'laser', the same with 'rpad',\nIs it a bug or I'm mis-understaning the lpad and/or rpad \nfunctions?\n \nRegards\n \nLaser", "msg_date": "Tue, 28 Nov 2000 16:58:44 +0800", "msg_from": "\"He weiping\" <[email protected]>", "msg_from_op": true, "msg_subject": "is it a bug?" }, { "msg_contents": "> ... it seems the \"lpad\", \"rpad\" don't work,\n> when I type:\n> select lpad('laser', 4, 'a');\n> in psql, the result is still\n> 'laser', the same with 'rpad',\n> Is it a bug or I'm mis-understaning the lpad and/or rpad functions?\n\nA simple misunderstanding. The length argument is for the *total*\nlength. So padding a 5 character string to a length of 4 will do\nnothing. But padding to a length of 6 will add a single \"a\" to the\nstring.\n\n - Thomas\n", "msg_date": "Tue, 28 Nov 2000 14:48:21 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] is it a bug?" }, { "msg_contents": "\"He weiping\" <[email protected]> writes:\n> but it seems the \"lpad\", \"rpad\" don't work,\n> when I type:\n> select lpad('laser', 4, 'a');\n> in psql, the result is still=20\n> 'laser', the same with 'rpad',\n> Is it a bug or I'm mis-understaning the lpad and/or rpad functions?\n\nlpad and rpad never truncate, they only pad.\n\nPerhaps they *should* truncate if the specified length is less than\nthe original string length. Does Oracle do that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 10:09:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is it a bug? " }, { "msg_contents": "Hi,\n\nI think you've misunderstood the purpose of the functions.\nThey exist to *pad* the strings, not to truncate them.\nYour examples will both return 'laser' because char_length('laser') = 5\nand you asked for a padded length of 4.\n\nHad you done this: select lpad('laser', 8, '*');\nYou would get this: ***laser\n\n... and obviously with rpad() you would have seen 'laser***' instead.\n\nIf you want to truncate strings, try this:\nselect substring('laser' from 1 for 4);\n... which will truncate to length 4, i.e. 'lase'\n\nI couldn't find a combination function that will perform both of these\nfunctions in one. However, you could try a construct like this:\n\nselect rpad(substring('laser' from 1 for xx), xx, '*');\n\n... where 'xx' is the number of characters you want in the final string.\nI'm sure you could wrap a user-defined function around this to that\nyou'd only have to feed in the number of characters once instead of\ntwice. 
Perhaps someone else knows a better way of doing this?\n\nHope this helps\n\nFrancis Solomon\n\n>I'm using cvs-current, and testing those build-in function\n>according to the docs.\n>but it seems the \"lpad\", \"rpad\" don't work,\n>when I type:\n>select lpad('laser', 4, 'a');\n>in psql, the result is still\n>'laser', the same with 'rpad',\n>Is it a bug or I'm mis-understaning the lpad and/or rpad functions?\n>\n>Regards\n>\n>Laser\n\n", "msg_date": "Wed, 29 Nov 2000 12:32:09 -0000", "msg_from": "\"Francis Solomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: is it a bug?" } ]
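One possible shape for the user-defined wrapper Francis suggests, as a plain SQL-language function (a sketch; rpad_trunc is an invented name):

    CREATE FUNCTION rpad_trunc(text, integer, text) RETURNS text
        AS 'SELECT rpad(substring($1 from 1 for $2), $2, $3);'
        LANGUAGE 'sql';

    -- pads or truncates to exactly the requested length:
    SELECT rpad_trunc('laser', 4, '*');   -- 'lase'
    SELECT rpad_trunc('la', 4, '*');      -- 'la**'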
[ { "msg_contents": "\n> I don't believe it's a performance issue, I believe it's that \n> writes to\n> blocks greater than 8k cannot be guaranteed 'atomic' by the operating\n> system. Hence, 32k blocks would break the transactions system. (Or\n> something like that - am I correct?)\n\nFirst, 8k are not atomic eighter. Second, the page layout in PostgreSQL has been\ndesigned to not care about the atomicity of IO. This design might have been \ncompromised for index pages recently, to optimize index performance, \nbut data pages are perfectly safe.\n\nAndreas\n", "msg_date": "Tue, 28 Nov 2000 10:49:09 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: 8192 BLCKSZ ?" } ]
[ { "msg_contents": "\n> 8k is the standard Unix file system disk transfer size.\n\nAre you sure ? I thought it was 4k on AIX and 2k on Sun.\n\nAndreas\n", "msg_date": "Tue, 28 Nov 2000 10:51:28 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: 8192 BLCKSZ ?" } ]
[ { "msg_contents": "> > b) Check out MSSQL 7's capabilities and weep.\n> \n> BTW, have you studied MSSQL enough to tell me if it has a\n> separate/standalone \n> (as a process) fti engine or just another index type.\nIt is standalone - separate process, data is stored in separate files (not\nin db).\n\nIn SQL Server 7.0, you also have to manually update the index. Just updating\nthe values in the table does *NOT* update the index. (Can be scheduled, of\ncourse, but not live)\nIn SQL Server 2000 the index can be auto-updated when rows change, but it's\nnot default.\n\n\n//Magnus\n", "msg_date": "Tue, 28 Nov 2000 10:52:43 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Full text Indexing -out of contrib and into main.." } ]
[ { "msg_contents": "The pg_options.sample that is included in 7.0.x cannot actually be used\nbecause of bugs in the routine that reads it. First, it reads only 4095\nbytes and second it does not cope with white space within lines. The\nattached patch cures the problem.\n\nIt seems to be relevant only to 7.0.x because the entire code has been\nremoved from 7.1.\n\nIndex: src/backend/utils/misc/trace.c\n===================================================================\nRCS file: /cvs/pgsql-deb/postgresql/src/backend/utils/misc/trace.c,v\nretrieving revision 1.1.1.2\ndiff -c -b -r1.1.1.2 trace.c\n*** src/backend/utils/misc/trace.c\t2000/11/14 10:40:10\t1.1.1.2\n--- src/backend/utils/misc/trace.c\t2000/11/28 07:43:13\n***************\n*** 438,444 ****\n--- 438,446 ----\n \tint\t\t\tfd;\n \tint\t\t\tn;\n \tint\t\t\tverbose;\n+ \tint\t\tincomment = 0;\n \tchar\t\tbuffer[BUF_SIZE];\n+ \tchar\t\toptbuf[BUF_SIZE];\n \tchar\t\tc;\n \tchar\t *s,\n \t\t\t *p;\n***************\n*** 455,478 ****\n #else\n \tif ((fd = open(buffer, O_RDONLY | O_BINARY)) < 0)\n #endif\n \t\treturn;\n \n! \tif ((n = read(fd, buffer, BUF_SIZE - 1)) > 0)\n \t{\n! \t\t/* collpse buffer in place removing comments and spaces */\n! \t\tfor (s = buffer, p = buffer, c = '\\0'; s < (buffer + n);)\n \t\t{\n \t\t\tswitch (*s)\n \t\t\t{\n \t\t\t\tcase '#':\n \t\t\t\t\twhile ((s < (buffer + n)) && (*s++ != '\\n'));\n \t\t\t\t\tbreak;\n- \t\t\t\tcase ' ':\n- \t\t\t\tcase '\\t':\n- \t\t\t\tcase '\\n':\n \t\t\t\tcase '\\r':\n \t\t\t\t\tif (c != ',')\n \t\t\t\t\t\tc = *p++ = ',';\n \t\t\t\t\ts++;\n \t\t\t\t\tbreak;\n \t\t\t\tdefault:\n--- 457,490 ----\n #else\n \tif ((fd = open(buffer, O_RDONLY | O_BINARY)) < 0)\n #endif\n+ \t{\n+ \t\tfprintf(stderr,\"Couldn't open %s\\n\",buffer);\n \t\treturn;\n+ \t}\n \n! \tp = optbuf;\n! \tc = '\\0';\n! \twhile ((n = read(fd, buffer, BUF_SIZE - 1)) > 0)\n \t{\n! \t\tif (incomment && (*buffer != '\\n'))\n! \t\t\t*buffer = '#';\n! \n! \t\t/* read in options removing comments and spaces */\n! \t\tfor (s = buffer; s < (buffer + n);)\n \t\t{\n \t\t\tswitch (*s)\n \t\t\t{\n \t\t\t\tcase '#':\n+ \t\t\t\t\tincomment = 1;\n \t\t\t\t\twhile ((s < (buffer + n)) && (*s++ != '\\n'));\n \t\t\t\t\tbreak;\n \t\t\t\tcase '\\r':\n+ \t\t\t\tcase '\\n':\n+ \t\t\t\t\tincomment = 0;\n \t\t\t\t\tif (c != ',')\n \t\t\t\t\t\tc = *p++ = ',';\n+ \t\t\t\tcase ' ':\n+ \t\t\t\tcase '\\t':\n \t\t\t\t\ts++;\n \t\t\t\t\tbreak;\n \t\t\t\tdefault:\n***************\n*** 480,494 ****\n \t\t\t\t\tbreak;\n \t\t\t}\n \t\t}\n \t\tif (c == ',')\n \t\t\tp--;\n \t\t*p = '\\0';\n \t\tverbose = pg_options[TRACE_VERBOSE];\n! \t\tparse_options(buffer, true);\n \t\tverbose |= pg_options[TRACE_VERBOSE];\n \t\tif (verbose || postgres_signal_arg == SIGHUP)\n! \t\t\ttprintf(TRACE_ALL, \"read_pg_options: %s\", buffer);\n! \t}\n \n \tclose(fd);\n }\n--- 492,506 ----\n \t\t\t\t\tbreak;\n \t\t\t}\n \t\t}\n+ \t}\n \tif (c == ',')\n \t\tp--;\n \t*p = '\\0';\n \tverbose = pg_options[TRACE_VERBOSE];\n! \tparse_options(optbuf, true);\n \tverbose |= pg_options[TRACE_VERBOSE];\n \tif (verbose || postgres_signal_arg == SIGHUP)\n! 
\t\ttprintf(TRACE_ALL, \"read_pg_options: %s\", optbuf);\n \n \tclose(fd);\n }\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The earth is the LORD'S, and the fullness thereof; the\n world, and they that dwell therein.\" Psalms 24:1\n\n\n", "msg_date": "Tue, 28 Nov 2000 10:47:28 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Patch for 7.0.3 code to read pg_options" } ]
[ { "msg_contents": "\n> > pjw=# create table pk1(f1 integer, constraint zzz primary key(f1));\n> > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit \n> index 'zzz' for\n> > table 'pk1'\n> > CREATE\n> > pjw=# create table zzz(f1 integer);\n> > ERROR: Relation 'zzz' already exists\n> \n> > Is there a good reason why the automatically created items \n> do not have a\n> > 'pg_' in front of their names?\n> \n> Not a good idea. I think it should probably be pk1_zzz in this case.\n> \n> If we do either, it will break the recently submitted pg_dump \n> patch that\n> uses the index name as the constraint name. I thought that patch was\n> wrongheaded anyway, and would recommend reversing it...\n\nI rather think, that having index names clash with table names is the bogus part.\nThat the index gets the specified name from the constraint clause is more\nor less expected behavior (Informix, Oracle ...).\n\nAndreas\n", "msg_date": "Tue, 28 Nov 2000 12:03:28 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Constraint names using 'user namespace'? " } ]
[ { "msg_contents": "\n> > ... it seems the \"lpad\", \"rpad\" don't work,\n> > when I type:\n> > select lpad('laser', 4, 'a');\n> > in psql, the result is still\n> > 'laser', the same with 'rpad',\n> > Is it a bug or I'm mis-understaning the lpad and/or rpad functions?\n> \n> A simple misunderstanding. The length argument is for the *total*\n> length. So padding a 5 character string to a length of 4 will do\n> nothing. But padding to a length of 6 will add a single \"a\" to the\n> string.\n\nSeems the implementor made a mistake, since this is supposed to be oracle \ncompat stuff it should behave like Oracle, and thus trim the string to 4 chars.\n\nAndreas \n", "msg_date": "Tue, 28 Nov 2000 15:40:15 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] is it a bug?" } ]
[ { "msg_contents": "At 10:52 AM 11/28/00 +0100, Magnus Hagander wrote:\n>> > b) Check out MSSQL 7's capabilities and weep.\n>> \n>> BTW, have you studied MSSQL enough to tell me if it has a\n>> separate/standalone \n>> (as a process) fti engine or just another index type.\n>It is standalone - separate process, data is stored in separate files (not\n>in db).\n>\n>In SQL Server 7.0, you also have to manually update the index. Just updating\n>the values in the table does *NOT* update the index. (Can be scheduled, of\n>course, but not live)\n>In SQL Server 2000 the index can be auto-updated when rows change, but it's\n>not default.\n\nThis is similar to Oracle's InterMedia. In practice, using auto-update on a\nbusy, live website is impractical, though how much this is due to InterMedia's\nbeing flakey and how much due to the computational expense isn't clear (or rather\nIM's so flakey one can't really explore enough to see how expensive\nauto-update on a busy site would be).\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 28 Nov 2000 06:41:53 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Full text Indexing -out of contrib and into\n main.." } ]
[ { "msg_contents": "\n> lpad and rpad never truncate, they only pad.\n> \n> Perhaps they *should* truncate if the specified length is less than\n> the original string length. Does Oracle do that?\n\nYes, it truncates, same as Informix.\n\nAndreas\n", "msg_date": "Tue, 28 Nov 2000 16:30:49 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: is it a bug? " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> lpad and rpad never truncate, they only pad.\n>> \n>> Perhaps they *should* truncate if the specified length is less than\n>> the original string length. Does Oracle do that?\n\n> Yes, it truncates, same as Informix.\n\nI went to fix this and then realized I still don't have an adequate spec\nof how Oracle defines these functions. It would seem logical, for\nexample, that lpad might truncate on the left instead of the right,\nie lpad('abcd', 3, 'whatever') might yield 'bcd' not 'abc'. Would\nsomeone check?\n\nAlso, what happens if the specified length is less than zero? Error,\nor is it treated as zero?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Dec 2000 14:51:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Oracle-compatible lpad/rpad behavior" }, { "msg_contents": "> I went to fix this and then realized I still don't have an adequate spec\n> of how Oracle defines these functions. It would seem logical, for\n> example, that lpad might truncate on the left instead of the right,\n> ie lpad('abcd', 3, 'whatever') might yield 'bcd' not 'abc'. Would\n> someone check?\n\nSQL> select lpad('abcd', 3, 'foobar') from dual;\n\nLPA\n---\nabc\n\n> Also, what happens if the specified length is less than zero? Error,\n> or is it treated as zero?\n\nSQL> select ':' || lpad('abcd', -1, 'foobar') || ':' from dual;\n\n':\n--\n::\n\n(colons added so it's obvious that it's a zero-length string)\n\n-Jonathan\n\n", "msg_date": "Thu, 7 Dec 2000 14:30:24 -0700", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oracle-compatible lpad/rpad behavior" }, { "msg_contents": "Jonathan Ellis wrote:\n: > Also, what happens if the specified length is less than zero? Error,\n: > or is it treated as zero?\n: \n: SQL> select ':' || lpad('abcd', -1, 'foobar') || ':' from dual;\n: \n: ':\n: --\n: ::\n: \n: (colons added so it's obvious that it's a zero-length string)\n\nReturns not empty string but NULL:\n\nSQL> select nvl(lpad('abcd', -1, 'foobar'), 'Null') from dual;\n\nNVL(\n----\nNull\n\n-- \nAndrew W. Nosenko ([email protected])\n", "msg_date": "Fri, 8 Dec 2000 09:33:46 +0200", "msg_from": "Andrew Nosenko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oracle-compatible lpad/rpad behavior" }, { "msg_contents": "> Returns not empty string but NULL:\n\nThe two are equivalent in Oracle. Try select 'a' || null || 'b' from dual\nand compare it to postgres.\n\n-Jonathan\n\n", "msg_date": "Fri, 8 Dec 2000 08:46:19 -0700", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oracle-compatible lpad/rpad behavior" }, { "msg_contents": "Jonathan Ellis wrote:\n> \n> > I went to fix this and then realized I still don't have an adequate spec\n> > of how Oracle defines these functions. It would seem logical, for\n> > example, that lpad might truncate on the left instead of the right,\n> > ie lpad('abcd', 3, 'whatever') might yield 'bcd' not 'abc'. 
Would\n> > someone check?\n> \n> SQL> select lpad('abcd', 3, 'foobar') from dual;\n> \n> LPA\n> ---\n> abc\n> \n> > Also, what happens if the specified length is less than zero? Error,\n> > or is it treated as zero?\n> \n> SQL> select ':' || lpad('abcd', -1, 'foobar') || ':' from dual;\n> \n> ':\n> --\n> ::\n> \n> (colons added so it's obvious that it's a zero-length string)\n\nAFAIK Oracle is unable to distinguish NULL and zero-length string ;(\n\n--------------\nHannu\n", "msg_date": "Tue, 12 Dec 2000 19:00:39 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Oracle-compatible lpad/rpad behavior" } ]
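Putting the reported Oracle results together, the truncating semantics being converged on would behave like this (expected results sketched in the comments, not output from any particular build):

    SELECT lpad('abcd', 3, 'xy');    -- 'abc': truncates on the right
    SELECT rpad('abcd', 3, 'xy');    -- 'abc'
    SELECT lpad('abcd', -1, 'xy');   -- '': negative length treated as zero
    SELECT lpad('ab', 5, 'xy');      -- 'xyxab': the ordinary padding case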
[ { "msg_contents": "\nBrowsing through backend/commands/command.c I noticed the following code:\n\n if (indexStruct->indisunique)\n {\n List *attrl;\n\n /* go through the fkconstraint->pk_attrs list */\n foreach(attrl, fkconstraint->pk_attrs)\n {\n Ident *attr=lfirst(attrl);\n found = false;\n for (i = 0; i < INDEX_MAX_KEYS && indexStruct->indkey[i] != 0;\ni++)\n {\n int pkattno = indexStruct->indkey[i];\n if (pkattno>0)\n {\n char *name = NameStr(rel_attrs[pkattno-1]->attname);\n if (strcmp(name, attr->name)==0)\n {\n found = true;\n break;\n }\n }\n }\n if (!found)\n break;\n }\n }\n\nwhich is (I think) supposed to be checking for a unique index on the FK\nfields in the referenced table.\n\nUnfortunately, my reading of this code suggests it is doing the following:\n\n for each column in the FK\n\n see if we can find the column in the index\n if not, then die\n\n next FK column\n\nThe problem with this is that it needs to ensure a 1:1 match between\ncolumns for the UNIQUE constraint requirement to be satisfied...I think.\n\nTo give an example,\n\n create table c2(f1 integer, f2 integer, unique(f1,f2));\n create table c1(f1 integer, f2 integer, foreign key(f1) references\nc2(f1));\n\nis allowed with current sources.\n\nI'd guess that adding code to ensure the column lists are the same size\nwould fix the problem.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 29 Nov 2000 02:49:11 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Problem in AlterTableAddConstraint?" }, { "msg_contents": "\nAssuming the silence is agreement, does this look like the right solution\n(I assume looping through the index is the only way to count the segments):\n\n if (indexStruct->indisunique)\n {\n List *attrl;\n\n /* go through the fkconstraint->pk_attrs list */\n foreach(attrl, fkconstraint->pk_attrs)\n {\n Ident *attr=lfirst(attrl);\n found = false;\n for (i = 0; i < INDEX_MAX_KEYS && indexStruct->indkey[i] != 0;\ni++)\n {\n int pkattno = indexStruct->indkey[i];\n if (pkattno>0)\n {\n char *name = NameStr(rel_attrs[pkattno-1]->attname);\n if (strcmp(name, attr->name)==0)\n {\n found = true;\n break;\n }\n }\n }\n\n if (!found)\n+ {\n break;\n+ } else {\n+\n+ /* Require same number of segments */ \n+ if (i != length(fkconstraint->pk_attrs))\n+ {\n+ found = false;\n+ break;\n+ }\n+ }\n }\n }\n\n\nAt 02:49 29/11/00 +1100, Philip Warner wrote:\n>\n> create table c2(f1 integer, f2 integer, unique(f1,f2));\n> create table c1(f1 integer, f2 integer, \n> foreign key(f1) references c2(f1));\n>\n> is allowed with current sources.\n>\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 29 Nov 2000 13:40:21 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem in AlterTableAddConstraint?" 
}, { "msg_contents": "\nCan anyone comment on this?\n\n> \n> Browsing through backend/commands/command.c I noticed the following code:\n> \n> if (indexStruct->indisunique)\n> {\n> List *attrl;\n> \n> /* go through the fkconstraint->pk_attrs list */\n> foreach(attrl, fkconstraint->pk_attrs)\n> {\n> Ident *attr=lfirst(attrl);\n> found = false;\n> for (i = 0; i < INDEX_MAX_KEYS && indexStruct->indkey[i] != 0;\n> i++)\n> {\n> int pkattno = indexStruct->indkey[i];\n> if (pkattno>0)\n> {\n> char *name = NameStr(rel_attrs[pkattno-1]->attname);\n> if (strcmp(name, attr->name)==0)\n> {\n> found = true;\n> break;\n> }\n> }\n> }\n> if (!found)\n> break;\n> }\n> }\n> \n> which is (I think) supposed to be checking for a unique index on the FK\n> fields in the referenced table.\n> \n> Unfortunately, my reading of this code suggests it is doing the following:\n> \n> for each column in the FK\n> \n> see if we can find the column in the index\n> if not, then die\n> \n> next FK column\n> \n> The problem with this is that it needs to ensure a 1:1 match between\n> columns for the UNIQUE constraint requirement to be satisfied...I think.\n> \n> To give an example,\n> \n> create table c2(f1 integer, f2 integer, unique(f1,f2));\n> create table c1(f1 integer, f2 integer, foreign key(f1) references\n> c2(f1));\n> \n> is allowed with current sources.\n> \n> I'd guess that adding code to ensure the column lists are the same size\n> would fix the problem.\n> \n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Jan 2001 21:32:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem in AlterTableAddConstraint?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can anyone comment on this?\n\nThe code seems to have been changed as per Philip's suggestion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 21:56:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem in AlterTableAddConstraint? " }, { "msg_contents": "\nShould be fixed now. 
I had sent a patch in a while ago about it\nand the code does seem to do an extra step now which gets the length\nof the index and compares it to the length of the attribute list.\n\nOn Mon, 22 Jan 2001, Bruce Momjian wrote:\n\n> \n> Can anyone comment on this?\n> \n> > \n> > Browsing through backend/commands/command.c I noticed the following code:\n> > \n> > if (indexStruct->indisunique)\n> > {\n> > List *attrl;\n> > \n> > /* go through the fkconstraint->pk_attrs list */\n> > foreach(attrl, fkconstraint->pk_attrs)\n> > {\n> > Ident *attr=lfirst(attrl);\n> > found = false;\n> > for (i = 0; i < INDEX_MAX_KEYS && indexStruct->indkey[i] != 0;\n> > i++)\n> > {\n> > int pkattno = indexStruct->indkey[i];\n> > if (pkattno>0)\n> > {\n> > char *name = NameStr(rel_attrs[pkattno-1]->attname);\n> > if (strcmp(name, attr->name)==0)\n> > {\n> > found = true;\n> > break;\n> > }\n> > }\n> > }\n> > if (!found)\n> > break;\n> > }\n> > }\n> > \n> > which is (I think) supposed to be checking for a unique index on the FK\n> > fields in the referenced table.\n> > \n> > Unfortunately, my reading of this code suggests it is doing the following:\n> > \n> > for each column in the FK\n> > \n> > see if we can find the column in the index\n> > if not, then die\n> > \n> > next FK column\n> > \n> > The problem with this is that it needs to ensure a 1:1 match between\n> > columns for the UNIQUE constraint requirement to be satisfied...I think.\n> > \n> > To give an example,\n> > \n> > create table c2(f1 integer, f2 integer, unique(f1,f2));\n> > create table c1(f1 integer, f2 integer, foreign key(f1) references\n> > c2(f1));\n> > \n> > is allowed with current sources.\n> > \n> > I'd guess that adding code to ensure the column lists are the same size\n> > would fix the problem.\n> > \n> > \n> > ----------------------------------------------------------------\n> > Philip Warner | __---_____\n> > Albatross Consulting Pty. Ltd. |----/ - \\\n> > (A.B.N. 75 008 659 498) | /(@) ______---_\n> > Tel: (+61) 0500 83 82 81 | _________ \\\n> > Fax: (+61) 0500 83 82 82 | ___________ |\n> > Http://www.rhyme.com.au | / \\|\n> > | --________--\n> > PGP key available upon request, | /\n> > and from pgp5.ai.mit.edu:11371 |/\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n", "msg_date": "Mon, 22 Jan 2001 19:01:24 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem in AlterTableAddConstraint?" } ]
[ { "msg_contents": "\n> > This is a summary of replies.\n> > \n> > 1. Calculated fields in table definitions . eg.\n> > \n> > Create table test (\n> > A Integer,\n> > B integer,\n> > the_sum As (A+B),\n> > );\n> > \n> > This functionality can be achieved through the use of views.\n> \n> Using a view for this isn't quite the same functionality as a computed\n> field, from what I understand, since the calculation will be done at\n> SELECT time, rather than INSERT/UPDATE.\n\nI would expect the calculated field from above example to be calculated\nduring select time also, no ? You don't want to waste disk space with something \nyou can easily compute at runtime.\n\nAndreas\n", "msg_date": "Tue, 28 Nov 2000 16:50:59 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Please advise features in 7.1 (SUMMARY)" }, { "msg_contents": "I guess it depends on what you're using it for -- disk space is cheap and\nabundant anymore, I can see some advantages of having it computed only once\nrather than X times, where X is the number of SELECTs as that could get\ncostly on really high traffic servers.. Costly not so much for simple\ncomputations like that but more complex ones.\n\nJust playing the devil's advocate a bit.\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Zeugswetter Andreas SB\" <[email protected]>\nTo: \"'Ross J. Reedstrom'\" <[email protected]>;\n<[email protected]>\nSent: Tuesday, November 28, 2000 7:50 AM\nSubject: AW: [HACKERS] Please advise features in 7.1 (SUMMARY)\n\n\n>\n> > > This is a summary of replies.\n> > >\n> > > 1. Calculated fields in table definitions . eg.\n> > >\n> > > Create table test (\n> > > A Integer,\n> > > B integer,\n> > > the_sum As (A+B),\n> > > );\n> > >\n> > > This functionality can be achieved through the use of views.\n> >\n> > Using a view for this isn't quite the same functionality as a computed\n> > field, from what I understand, since the calculation will be done at\n> > SELECT time, rather than INSERT/UPDATE.\n>\n> I would expect the calculated field from above example to be calculated\n> during select time also, no ? You don't want to waste disk space with\nsomething\n> you can easily compute at runtime.\n>\n> Andreas\n>\n\n", "msg_date": "Tue, 28 Nov 2000 09:05:29 -0800", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please advise features in 7.1 (SUMMARY)" } ]
[ { "msg_contents": "> I guess it depends on what you're using it for -- disk space \n> is cheap and\n> abundant anymore, I can see some advantages of having it \n> computed only once\n> rather than X times, where X is the number of SELECTs as that \n> could get\n> costly on really high traffic servers.. Costly not so much for simple\n> computations like that but more complex ones.\n\nOnce and for all forget the argument in database technology, that disk space \nis cheap in regard to $/Mb. That is not the question. The issue is:\n\t1. amout of rows you can cache\n\t2. number of rows you can read from disk per second\n\t (note that it is not pages/sec)\n\t3. how many rows you can sort in memory\n\nIn the above sence disk space is one of the most expensive things in a\ndatabase system. Saving disk space where possible will gain you drastic\nperformance advantages.\n\nAndreas\n", "msg_date": "Tue, 28 Nov 2000 17:19:45 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Please advise features in 7.1 (SUMMARY)" }, { "msg_contents": "On Tue, Nov 28, 2000 at 05:19:45PM +0100, Zeugswetter Andreas SB wrote:\n> > I guess it depends on what you're using it for -- disk space \n> > is cheap and\n> > abundant anymore, I can see some advantages of having it \n> > computed only once\n> > rather than X times, where X is the number of SELECTs as that \n> > could get\n> > costly on really high traffic servers.. Costly not so much for simple\n> > computations like that but more complex ones.\n> \n\n<snip good arguments about disk space>\n\nAs I said in my original post, my understanding of computed fields may\nbe in error. If they're computed at SELECT time, to avoid creating table\nspace, then a VIEW is exacly the right solution. However, it's easy to\ncome up with examples of complex calculations that it would be useful\nto cache the results of, in the table. Then, computing at INSERT/UPDATE\nis clearly the way to go.\n\nSo, having _both_ is the best thing.\n\nRoss\n", "msg_date": "Tue, 28 Nov 2000 10:54:54 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please advise features in 7.1 (SUMMARY)" }, { "msg_contents": "> So, having _both_ is the best thing.\n\nAbsolutely, that's always what I meant -- we already have views and views\ncan do this type of stuff at SELECT time can't they? So it's not a change,\njust an addition....\n\n-Mitch\n\n", "msg_date": "Tue, 28 Nov 2000 10:07:45 -0800", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please advise features in 7.1 (SUMMARY)" }, { "msg_contents": "From: \"Ross J. Reedstrom\" <[email protected]>\n> On Tue, Nov 28, 2000 at 05:19:45PM +0100, Zeugswetter Andreas SB wrote:\n> > > I guess it depends on what you're using it for -- disk space \n> > > is cheap and\n> > > abundant anymore, I can see some advantages of having it \n> > > computed only once\n> > > rather than X times, where X is the number of SELECTs as that \n> > > could get\n> > > costly on really high traffic servers.. Costly not so much for simple\n> > > computations like that but more complex ones.\n> > \n> \n> <snip good arguments about disk space>\n> \n> As I said in my original post, my understanding of computed fields may\n> be in error. If they're computed at SELECT time, to avoid creating table\n> space, then a VIEW is exacly the right solution. 
However, it's easy to\n> come up with examples of complex calculations that it would be useful\n> to cache the results of, in the table. Then, computing at INSERT/UPDATE\n> is clearly the way to go.\n> \n> So, having _both_ is the best thing.\n> \n> Ross\n> \n\nI'm new at this, but the view thing?\nIsn't that just the same as:\n\ncreate table test2 ( i1 int4, i2 int4);\n...insert...\nselect i1,i2,i1+i2 from test2;\n\nMagnus\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n PGP Key: http://www.genline.nu/mag_pgp.txt\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n\n", "msg_date": "Wed, 29 Nov 2000 12:07:45 +0100", "msg_from": "\"Magnus Naeslund\\(f\\)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please advise features in 7.1 (SUMMARY)" } ]
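For reference, the view formulation the thread keeps alluding to differs from Magnus's bare SELECT only in that it stores the expression under a name (a sketch):

    CREATE TABLE test2 (i1 int4, i2 int4);

    CREATE VIEW test2_sum AS
        SELECT i1, i2, i1 + i2 AS the_sum
        FROM test2;

    -- the computed column is recalculated at every SELECT:
    SELECT * FROM test2_sum;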
[ { "msg_contents": "\n> > So, having _both_ is the best thing.\n> \n> Absolutely, that's always what I meant -- we already have views and views\n> can do this type of stuff at SELECT time can't they? So it's not a change,\n> just an addition....\n\nAnd the precalculated and stored on disk thing can be done with triggers.\n\nAndreas\n", "msg_date": "Tue, 28 Nov 2000 18:11:53 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Please advise features in 7.1 (SUMMARY)" } ]
[ { "msg_contents": "\nTom Samplonius wrote:\n\n> On Tue, 28 Nov 2000, mlw wrote:\n>\n> > Tom Samplonius wrote:\n> > >\n> > > On Mon, 27 Nov 2000, mlw wrote:\n> > >\n> > > > This is just a curiosity.\n> > > >\n> > > > Why is the default postgres block size 8192? These days, with caching\n> > > > file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe\n> > > > even gigabytes. Surely, 8K is inefficient.\n> > >\n> > > I think it is a pretty wild assumption to say that 32k is more efficient\n> > > than 8k. Considering how blocks are used, 32k may be in fact quite a bit\n> > > slower than 8k blocks.\n> >\n> > I'm not so sure I agree. Perhaps I am off base here, but I did a bit of\n> > OS profiling a while back when I was doing a DICOM server. I\n> > experimented with block sizes and found that the best throughput on\n> > Linux and Windows NT was at 32K. The graph I created showed a steady\n> > increase in performance and a drop just after 32K, then steady from\n> > there. In Windows NT it was more pronounced than it was in Linux, but\n> > Linux still exhibited a similar trait.\n>\n> You are a bit off base here. The typical access pattern is random IO,\n> not sequentional. If you use a large block size in Postgres, Postgres\n> will read and write more data than necessary. Which is faster? 1000 x 8K\n> IOs? Or 1000 x 32K IOs\n\nI can sort of see your point, but the 8K vs 32K is not a linear\nrelationship.\nThe big hit is the disk I/O operation, more so than just the data size. \nIt may\nbe almost as efficient to write 32K as it is to write 8K. While I do not\nknow the\nexact numbers, and it varies by OS and disk subsystem, I am sure that\nwriting\n32K is not even close to 4x more expensive than 8K. Think about seek\ntimes,\nwriting anything to the disk is expensive regardless of the amount of\ndata. Most\ndisks today have many heads, and are RL encoded. It may only add 10us\n(approx.\n1-2 sectors of a 64 sector drive spinning 7200 rpm) to a disk operation\nwhich\ntakes an order of magnitude longer positioning the heads.\n\nThe overhead of an additional 24K is minute compared to the cost of a\ndisk\noperation. So if any measurable benefit can come from having bigger\nbuffers, i.e.\nhaving more data available per disk operation, it will probably be\nfaster.\n", "msg_date": "Tue, 28 Nov 2000 13:20:34 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: 8192 BLCKSZ ?]" }, { "msg_contents": "Kevin O'Gorman wrote:\n> \n> mlw wrote:\n> >\n> > Tom Samplonius wrote:\n> >\n> > > On Tue, 28 Nov 2000, mlw wrote:\n> > >\n> > > > Tom Samplonius wrote:\n> > > > >\n> > > > > On Mon, 27 Nov 2000, mlw wrote:\n> > > > >\n> > > > > > This is just a curiosity.\n> > > > > >\n> > > > > > Why is the default postgres block size 8192? These days, with caching\n> > > > > > file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe\n> > > > > > even gigabytes. Surely, 8K is inefficient.\n> > > > >\n> > > > > I think it is a pretty wild assumption to say that 32k is more efficient\n> > > > > than 8k. Considering how blocks are used, 32k may be in fact quite a bit\n> > > > > slower than 8k blocks.\n> > > >\n> > > > I'm not so sure I agree. Perhaps I am off base here, but I did a bit of\n> > > > OS profiling a while back when I was doing a DICOM server. I\n> > > > experimented with block sizes and found that the best throughput on\n> > > > Linux and Windows NT was at 32K. 
The graph I created showed a steady\n> > > > increase in performance and a drop just after 32K, then steady from\n> > > > there. In Windows NT it was more pronounced than it was in Linux, but\n> > > > Linux still exhibited a similar trait.\n> > >\n> > > You are a bit off base here. The typical access pattern is random IO,\n> > > not sequentional. If you use a large block size in Postgres, Postgres\n> > > will read and write more data than necessary. Which is faster? 1000 x 8K\n> > > IOs? Or 1000 x 32K IOs\n> >\n> > I can sort of see your point, but the 8K vs 32K is not a linear\n> > relationship.\n> > The big hit is the disk I/O operation, more so than just the data size.\n> > It may\n> > be almost as efficient to write 32K as it is to write 8K. While I do not\n> > know the\n> > exact numbers, and it varies by OS and disk subsystem, I am sure that\n> > writing\n> > 32K is not even close to 4x more expensive than 8K. Think about seek\n> > times,\n> > writing anything to the disk is expensive regardless of the amount of\n> > data. Most\n> > disks today have many heads, and are RL encoded. It may only add 10us\n> > (approx.\n> > 1-2 sectors of a 64 sector drive spinning 7200 rpm) to a disk operation\n> > which\n> > takes an order of magnitude longer positioning the heads.\n> >\n> > The overhead of an additional 24K is minute compared to the cost of a\n> > disk\n> > operation. So if any measurable benefit can come from having bigger\n> > buffers, i.e.\n> > having more data available per disk operation, it will probably be\n> > faster.\n> \n> This is only part of the story. It applies best when you're going\n> to use sequential scans, for instance, or otherwise use all the info\n> in any block that you fetch. However, when your blocks are 8x bigger,\n> your number of blocks in the disk cache is 8x fewer. If you're\n> accessing random blocks, your hopes of finding the block in the\n> cache are affected (probably not 8x, but there is an effect).\n> \n> So don't just blindly think that bigger blocks are better. It\n> ain't necessarily so.\n> \n\nFirst, the difference between 8K and 32K is 4 not 8.\n\nThe problem is you are looking at these numbers as if there is a linear\nrelationship between the 8 and the 32. You are thinking 8 is 1/4 the\nsize of 32, so it must be 1/4 the amount of work. This is not true at\nall.\n\nMany operating systems used a fixed memory block size allocation for\ntheir disk cache. They do not allocate a new block for every disk\nrequest, they maintain a pool of fixed sized buffer blocks. So if you\nuse fewer bytes than the OS block size you waste the difference between\nyour block size and the block size of the OS cache entry.\n\nI'm pretty sure Linux uses a 32K buffer size in its cache, and I'm\npretty confident that NT does as well from my previous tests.\n\nSo, in effect, an 8K block may waste 3/4 of the memory in the disk\ncache.\n\n \nhttp://www.mohawksoft.com\n", "msg_date": "Wed, 29 Nov 2000 07:25:46 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: Re: 8192 BLCKSZ ?]" }, { "msg_contents": "Kevin O'Gorman wrote:\n> \n> mlw wrote:\n> > Many operating systems used a fixed memory block size allocation for\n> > their disk cache. They do not allocate a new block for every disk\n> > request, they maintain a pool of fixed sized buffer blocks. 
So if you\n> > use fewer bytes than the OS block size you waste the difference between\n> > your block size and the block size of the OS cache entry.\n> >\n> > I'm pretty sure Linux uses a 32K buffer size in its cache, and I'm\n> > pretty confident that NT does as well from my previous tests.\n> \n> I dunno about NT, but here's a quote from \"Linux Kernel Internals\"\n> 2nd Ed, page 92-93:\n> .. The block size for any given device may be 512, 1024, 2048 or\n> 4096 bytes....\n> \n> ... the buffer cache manages individual block buffers of\n> varying size. For this, every block is given a 'buffer_head' data\n> structure. ... The definition of the buffer head is in linux/fs.h\n> \n> ... the size of this area exactly matches the block size 'b_size'...\n> \n> The quote goes on to describe how the data structures are designed to\n> be processor-cache-aware.\n> \n\nI double checked the kernel source, and you are right. I stand corrected\nabout the disk caching.\n\nMy assertion stands: it is a negligible difference to read 32K vs 8K\nfrom a disk, and the probability of data being within a 4 times larger\nblock is 4 times better, even though the probability of having the\ncorrect block in memory is 4 times less. So, I don't think it is a\nnumerically significant issue.\n\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Wed, 29 Nov 2000 17:07:19 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: Re: 8192 BLCKSZ ?]" }, { "msg_contents": "Kevin O'Gorman wrote:\n> \n> mlw wrote:\n> >\n> > Kevin O'Gorman wrote:\n> > >\n> > > mlw wrote:\n> > > > Many operating systems used a fixed memory block size allocation for\n> > > > their disk cache. They do not allocate a new block for every disk\n> > > > request, they maintain a pool of fixed sized buffer blocks. 
You're carrying\n> 3x unused bytes and displacing other, possibly useful, things from the\n> cache.\n> \n> So whether it's a good thing or not is something you have to measure, not\n> argue about. Because it will vary depending on your workload. That's\n> where a DBA begins to earn his/her pay.\n\nI would tend to disagree \"in general.\" One can always find more optimal\nways to search data if one knows the nature of the data and the nature\nof the search beforehand. The nature of the data could be knowledge of\nwhether it is sorted along the lines of the type of search you want to\ndo. It could be knowledge of the entirety of the data, and so on.\n\nThe cost difference between 32K vs 8K disk reads/writes is so small\nthese days when compared with the overall cost of the disk operation itself,\nthat you can't even measure it, well below 1%. Remember seek times\nadvertised on disks are an average. \n\nSQL itself is a compromise between a hand-coded search program and a\ngeneral purpose solution. As a general purpose search system, one\ncannot conclude that data is less likely to be in a larger block vs more\nlikely to be in a smaller block that remains in cache.\n\nThere are just as many cases where one could make an argument about one\nversus the other based on the nature of data and the nature of the\nsearch.\n\nHowever, that being said, memory DIMMs are 256M for $100 and time is\npriceless. The 8K default has been there as long as I can remember having\nto think about it, and only recently did I learn it can be changed. I\nhave been using Postgres since about 1996.\n\nI argue that reading or writing 32K is, for all practical purposes, not\nmeasurably different from reading or writing 8K. The sole point in your\nargument is that with a 4x larger block you have a 1/4 chance that the\nblock will be in memory. \n\nI argue that with a 4x greater block size, you have a 4x greater chance\nthat data will be in a block, and that this offsets the 1/4 chance of\nsomething being in cache. \n\nThe likelihood of something being in a cache is directly proportional to\nthe ratio of the size of the whole object being cached to the size of the\ncache itself, and the algorithms used to calculate what remains in cache.\nTypically this is a combination of LRU, frequency, and some predictive\nanalysis.\n\nSmall databases may, in fact, reside entirely in disk cache because of\nthe amount of RAM on modern machines. Large databases can not be\nentirely cached and some small percentage of them will be in cache.\nDepending on the \"randomness\" of the search criteria, the probability of\nthe item which you wish to locate being in cache has, as far as I can\nsee, little to do with the block size.\n\nI am going to see if I can get some time together this weekend and see\nif the benchmark programs measure a difference in block sizes, and if\nso, compare. I will try to test 8K, 16K, 24K, 32K.\n\n\n\n\n\n\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Fri, 01 Dec 2000 09:25:41 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: Re: 8192 BLCKSZ ?]" }, { "msg_contents": "\n\n> The cost difference between 32K vs 8K disk reads/writes is so small\n> these days when compared with the overall cost of the disk operation itself,\n> that you can't even measure it, well below 1%. 
Remember seek times\n> advertised on disks are an average.\n\nIt has been said how small the difference is - therefore in my opinion it\nshould remain at 8KB to maintain best average performance with all existing\nplatforms.\n\nI say it's best to let the OS and mass storage subsystem worry about read-ahead\ncaching and whether they actually read 8KB off the disk, or 32KB or 64KB\nwhen we ask for 8.\n\n\n- Andrew\n\n\n", "msg_date": "Sat, 2 Dec 2000 10:52:44 +1100", "msg_from": "\"Andrew Snow\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: 8192 BLCKSZ ?]" }, { "msg_contents": "At 10:52 AM 12/2/00 +1100, Andrew Snow wrote:\n>\n>\n>> The cost difference between 32K vs 8K disk reads/writes is so small\n>> these days when compared with the overall cost of the disk operation itself,\n>> that you can't even measure it, well below 1%. Remember seek times\n>> advertised on disks are an average.\n>\n>It has been said how small the difference is - therefore in my opinion it\n>should remain at 8KB to maintain best average performance with all existing\n>platforms.\n\nWith versions <= PG 7.0, the motivation that's been stated isn't performance\nbased as much as an option to let you stick relatively big chunks of text\n(~40k-ish+ for lzText) in a single row without resorting to classic PG's\nugly LOB interface or something almost as ugly as the built-in LOB handler\nI did for AOLserver many months ago. The performance arguments have mostly\nbeen of the form \"it won't really cost you much and you can use rows that\nare so much longer ...\"\n\nI think there's been recognition that 8KB is a reasonable default, along with\nlamenting (at least on my part) that the fact that this is just a DEFAULT hasn't\nbeen well-communicated, leading many casual surveyors of DB alternatives to\nbelieve that it is truly a hard-wired limitation, causing PG's reputation to\nsuffer as a result. One could argue that PG's reputation would've been\nenhanced in past years if a 32KB block size limit rather than an 8KB block size\ndefault had been emphasized.\n\nBut you wouldn't have to change the DEFAULT in order to make this claim! It\nwould've been just a matter of emphasizing the limit rather than the default.\n\nPG 7.1 will pretty much end any confusion. The segmented approach used by\nTOAST should work well (the AOLserver LOB handler I wrote months ago works\nwell in the OpenACS context, and uses a very similar segmentation scheme, so\nI expect TOAST to work even better). Users will still be able to change to\nlarger blocksizes (perhaps a wise thing to do if a large percentage of their\ndata won't fit into a single PG block). Users using the default will\nbe able to store rows of *awesome* length, efficiently.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 03 Dec 2000 23:01:44 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "RE: 8192 BLCKSZ ?]" }, { "msg_contents": "Don Baccus wrote:\n>\n> ...\n> I expect TOAST to work even better). Users will still be able to change to\n> larger blocksizes (perhaps a wise thing to do if a large percentage of their\n> data won't fit into a single PG block). 
Users using the default will\n> be able to store rows of *awesome* length, efficiently.\n\n Depends...\n\n Actually the toaster already jumps in if your tuples exceed\n BLCKSZ/4, so with the default of 8K blocks it tries to keep\n all tuples smaller than 2K. The reasons behind that are:\n\n 1. An average tuple size of 8K means an average of 4K unused\n space at the end of each block. Wasting space means\n wasting IO bandwidth.\n\n 2. Since big items are unlikely to be search criteria,\n needing to read them into memory for every check for a\n match on other columns is a waste again. So the more big\n items are moved off from the main tuple, the smaller the main\n table becomes, the more likely it is that the main tuples\n (holding the keys) are cached and the cheaper a\n sequential scan becomes.\n\n Of course, especially for 2. there is a break-even point.\n That is when the extra fetches to send toast values to the\n client cost more than was saved by not doing it\n during the main scan already. A full table SELECT *\n definitely costs more if TOAST is involved. But who does\n unqualified SELECT * from a multi-gig table without problems\n anyway? Usually you pick a single row or a few based on some\n other key attributes - don't you?\n\n Let's take an example. You have a forum server that displays\n one article plus the date and sender of all follow-ups. The\n article bodies are usually big (1-10K). So you do a SELECT *\n to fetch the actually displayed article, and another SELECT\n sender, date_sent just to get the info for the follow-ups. If\n we assume a uniform distribution of body size and an average\n of 10 follow-ups, that'd mean that we save 52K of IO and\n cache usage for each article displayed.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Mon, 4 Dec 2000 18:01:46 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8192 BLCKSZ ?]" } ]
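A minimal sketch, in C, of the kind of random-read timing test mlw proposes above. Everything here is illustrative: the program name, the fixed seed, and the command-line parameters are all arbitrary, and unless the test file is much larger than RAM, the OS buffer cache (not the disk) is what actually gets measured.

/*
 * blkbench.c - time random reads at a given block size.
 * Compile: cc -O2 -o blkbench blkbench.c
 * Usage:   ./blkbench <file> <blocksize> <iterations>
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>

int
main(int argc, char **argv)
{
	int			fd, i, iters;
	long		blksz, nblocks;
	char	   *buf;
	struct timeval t0, t1;
	double		elapsed;

	if (argc != 4)
	{
		fprintf(stderr, "usage: %s file blocksize iterations\n", argv[0]);
		return 1;
	}
	blksz = atol(argv[2]);
	iters = atoi(argv[3]);

	fd = open(argv[1], O_RDONLY);
	if (fd < 0)
	{
		perror("open");
		return 1;
	}
	/* number of whole blocks in the file */
	nblocks = (long) (lseek(fd, 0, SEEK_END) / blksz);
	if (nblocks <= 0)
	{
		fprintf(stderr, "file smaller than one block\n");
		return 1;
	}
	buf = malloc(blksz);

	srandom(42);				/* fixed seed so runs are comparable */
	gettimeofday(&t0, NULL);
	for (i = 0; i < iters; i++)
	{
		/* seek to a random block-aligned offset and read one block */
		off_t		pos = (off_t) (random() % nblocks) * blksz;

		lseek(fd, pos, SEEK_SET);
		if (read(fd, buf, blksz) != blksz)
		{
			perror("read");
			return 1;
		}
	}
	gettimeofday(&t1, NULL);
	elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%d random %ld-byte reads: %.3f s total, %.3f ms/read\n",
		   iters, blksz, elapsed, elapsed * 1000.0 / iters);
	return 0;
}

Running it twice against the same large file, once with 8192 and once with 32768 as the block size, gives the per-read cost difference the thread argues about.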
[ { "msg_contents": "While testing interlocking of multiple postmasters, I discovered that\nthe HAVE_FCNTL_SETLK interlock code we have in StreamServerPort()\ndoes not work at all on HPUX 10.20. This platform has F_SETLK according\nto configure, but:\n\n1. The lock is never applied to a socket, because the open() on the\nnewly-created socket (at line 303 of pqcomm.c) fails with EOPNOTSUPP,\nOperation not supported.\n\n2. If a postmaster finds a socket file in its way, it is unable to\nremove it despite the lack of any lock, because the open() at line\n230 fails with EADDRINUSE, Address already in use.\n\nI have no idea whether the fcntl(F_SETLK) call would succeed if control\ndid get to it, but these results don't leave me very hopeful.\n\nBetween this and the already-known result that F_SETLK doesn't work on\nsockets in shipping Linux kernels, I'm pretty unimpressed with the\nusefulness of this interlock method.\n\nWe talked before about flushing the F_SETLK technique and using good\nold interlock files containing PIDs, same method that we use for\ninterlocking the data directory. That is, if the socket file name is\n/tmp/.s.PGSQL.5432, we'd create a plain file /tmp/.s.PGSQL.5432.lock\ncontaining the owning process's PID. The code would insist on getting\nthis interlock file first, and if successful would just unconditionally\nremove any existing socket file before doing the bind().\n\nI can only think of one scenario where this is worse than what we have\nnow: if someone is running a /tmp-directory-sweeper that is bright\nenough not to remove socket files, it would still zap the interlock\nfile, thus potentially allowing a second postmaster to take over the\nsocket file. This doesn't seem like a mainstream problem though.\n\nBTW, it also seems like a good idea to reorder the postmaster's\nstartup operations so that the data-directory lockfile is checked\nbefore trying to acquire the port lockfile, instead of after. That\nway, in the common scenario where you're trying to start a second\npostmaster in the same directory + same port, it'd fail cleanly\neven if /tmp/.s.PGSQL.5432.lock had disappeared.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 19:16:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "F_SETLK is looking worse and worse..." }, { "msg_contents": "On Tue, 28 Nov 2000, Tom Lane wrote:\n\n> That is, if the socket file name is /tmp/.s.PGSQL.5432, we'd create a\n> plain file /tmp/.s.PGSQL.5432.lock\n\n> I can only think of one scenario where this is worse than what we have\n> now: if someone is running a /tmp-directory-sweeper that is bright\n> enough not to remove socket files, it would still zap the interlock\n> file, thus potentially allowing a second postmaster to take over the\n> socket file. This doesn't seem like a mainstream problem though.\n\nSurely the lock file could easily go somewhere other than\n/tmp, since it won't be breaking existing setups?\n\nMatthew.\n\n", "msg_date": "Wed, 29 Nov 2000 13:13:56 +0000 (GMT)", "msg_from": "Matthew Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: F_SETLK is looking worse and worse..." 
}, { "msg_contents": "Matthew Kirkwood <[email protected]> writes:\n> Surely the lock file could easily go somewhere other than\n> /tmp, since it won't be breaking existing setups?\n\nSuch as where?\n\nGiven the fact that the recent UUNET patch allows the DBA to put the\nsocket files anywhere, it seems simplest to say that the lockfiles go\nin the same directory as the socket files. Anything else is going to\nbe mighty confusing and probably unworkable. For example, it's not\na good idea to say we'll use a fixed directory for lockfiles regardless\nof where the socket file is --- that would prevent people from starting\nmultiple postmasters with the same logical port number and different\nsocket directories, something that's really perfectly reasonable (at\nleast in UUNET's view of the world ;-)).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Nov 2000 10:55:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: F_SETLK is looking worse and worse... " }, { "msg_contents": "Tom Lane writes:\n\n> I can only think of one scenario where this is worse than what we have\n> now: if someone is running a /tmp-directory-sweeper that is bright\n> enough not to remove socket files, it would still zap the interlock\n> file, thus potentially allowing a second postmaster to take over the\n> socket file. This doesn't seem like a mainstream problem though.\n\nRed Hat by default cleans out all files under /tmp and subdirectories that\nhaven't been accesses for 10 days. I assume other Linux distributions do\nsimilar things. Red Hat's tmpwatch doesn't ever follow symlinks, though. \nThat means you could make /tmp/.s.PGSQL.5432.lock a symlink to\nPGDATA/postmaster.pid. That might be a good idea in general, since\nestablishes an easy to examine correspondence between data directory and\nport number.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n\n", "msg_date": "Wed, 29 Nov 2000 17:37:31 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: F_SETLK is looking worse and worse..." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Red Hat by default cleans out all files under /tmp and subdirectories that\n> haven't been accesses for 10 days. I assume other Linux distributions do\n> similar things. Red Hat's tmpwatch doesn't ever follow symlinks, though. \n\nNor remove them?\n\n> That means you could make /tmp/.s.PGSQL.5432.lock a symlink to\n> PGDATA/postmaster.pid. That might be a good idea in general, since\n> establishes an easy to examine correspondence between data directory and\n> port number.\n\nI think this is a bad idea, because it assumes that the would-be\nexaminer (a) has read access to someone else's data directory, and\n(b) has the same chroot setting as the someone else does (else the\nsymlink won't mean the same thing to both of them). 
UUNET was planning\nto run postmasters chrooted into various subdirectories, IIRC, so\npoint (b) isn't hypothetical.\n\nHowever, I have no objection to writing the value of DataDir into\nthe socket lockfile (along with the owner's PID) if that seems like\na worthwhile bit of info.\n\nWould there be any value in having a postmaster re-read its own socket\nlockfile every so often, to keep it looking active to /tmp sweepers?\nOr is that too much of a kluge?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Nov 2000 11:53:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: F_SETLK is looking worse and worse... " }, { "msg_contents": "> However, I have no objection to writing the value of DataDir into\n> the socket lockfile (along with the owner's PID) if that seems like\n> a worthwhile bit of info.\n> \n> Would there be any value in having a postmaster re-read its own socket\n> lockfile every so often, to keep it looking active to /tmp sweepers?\n> Or is that too much of a kluge?\n\nRemoving 10-day-old files from /tmp seems pretty broken to me, and I\nhate to code around broken stuff.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 9 Dec 2000 15:04:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: F_SETLK is looking worse and worse..." }, { "msg_contents": "Bruce Momjian writes:\n\n> Removing 10-day-old files from /tmp seems pretty broken to me, and I\n> hate to code around broken stuff.\n\n(It's not 10-day-old files, it's files that have not been used for 10\ndays.)\n\nBut both the Linux file system standard and POSIX 2 have requirements\nand/or recommendations that call for /tmp to be cleaned out once in a\nwhile. If you don't like that, put your files elsewhere. We're not in a\nposition to dictate system administration procedures.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 10 Dec 2000 19:10:29 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: F_SETLK is looking worse and worse..." } ]
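A minimal sketch of the interlock-file scheme proposed at the top of this thread: create the lockfile atomically with O_EXCL, record the owner's PID, and treat a lockfile whose recorded PID is no longer alive as stale. The function name and behavior here are illustrative only, not PostgreSQL's actual implementation.

/*
 * Sketch only -- not PostgreSQL source.  Returns 0 when the lock is
 * acquired, -1 when a live process already holds it (or on error).
 */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>

static int
acquire_socket_lock(const char *lockfile)
{
	char		buf[32];
	int			fd, len;
	pid_t		owner;

	for (;;)
	{
		/* O_EXCL makes the create atomic: exactly one process can win */
		fd = open(lockfile, O_WRONLY | O_CREAT | O_EXCL, 0600);
		if (fd >= 0)
		{
			len = snprintf(buf, sizeof(buf), "%d\n", (int) getpid());
			(void) write(fd, buf, len);
			close(fd);
			return 0;			/* got it */
		}
		if (errno != EEXIST)
			return -1;			/* unexpected failure */

		/* Lockfile already exists: is its owner still alive? */
		fd = open(lockfile, O_RDONLY);
		if (fd < 0)
			continue;			/* raced with someone unlinking it */
		len = (int) read(fd, buf, sizeof(buf) - 1);
		close(fd);
		if (len <= 0)
			return -1;
		buf[len] = '\0';
		owner = (pid_t) atoi(buf);

		/* kill(pid, 0) sends no signal, it only probes for existence */
		if (owner > 0 && (kill(owner, 0) == 0 || errno == EPERM))
			return -1;			/* live owner: give up */

		/* Stale lockfile: remove it and retry the atomic create */
		unlink(lockfile);
	}
}

int
main(void)
{
	if (acquire_socket_lock("/tmp/.s.PGSQL.5432.lock") == 0)
		printf("lock acquired by pid %d\n", (int) getpid());
	else
		printf("another postmaster (or an error) holds the lock\n");
	return 0;
}

Note that kill(pid, 0) fails with ESRCH when the process is gone, while EPERM means the process exists but belongs to another user, so the lock must still be honored.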
[ { "msg_contents": "The last batch of commits break on FreeBSD 4.2-STABLE. \n$ uname -a\nFreeBSD lerbsd.lerctr.org 4.2-STABLE FreeBSD 4.2-STABLE #90: Tue Nov\n28 04:07:50 CST 2000\[email protected]:/usr/src/sys/compile/LERBSD i386\n$ \n\nConfigure:\n\n./configure --prefix=/home/ler/pg-test --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-openssl \\\n\t--with-includes=\"/usr/local/include/tcl8.3 /usr/local/include/tk8.3\" \\\n\t--with-tcl \\\n\t--with-tclconfig=/usr/local/lib/tcl8.3 \\\n\t--with-tkconfig=/usr/local/lib/tk8.3\n\t\n\nLast 50 lines of make output:\n\t\nranlib libplpgsql.a\n/usr/libexec/elf/ld -x -shared -soname libplpgsql.so.1 -o libplpgsql.so.1 pl_parse.o pl_handler.o pl_comp.o pl_exec.o pl_funcs.o -R/home/ler/pg-test/lib\nrm -f libplpgsql.so\nln -s libplpgsql.so.1 libplpgsql.so\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/plpgsql/src'\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/plpgsql'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/pl/tcl'\n/bin/sh mkMakefile.tcldefs.sh '/usr/local/lib/tcl8.3/tclConfig.sh' 'Makefile.tcldefs'\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/tcl'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/pl/tcl'\ncc -O -fPIC -I/usr/local/include/tcl8.3 -I/usr/local/include/tk8.3 -I../../../src/include -DHAVE_UNISTD_H=1 -DHAVE_LIMITS_H=1 -DHAVE_GETCWD=1 -DHAVE_OPENDIR=1 -DHAVE_STRSTR=1 -DHAVE_STRTOL=1 -DHAVE_TMPNAM=1 -DHAVE_WAITPID=1 -DNO_VALUES_H=1 -DHAVE_UNISTD_H=1 -DHAVE_SYS_PARAM_H=1 -DUSE_TERMIOS=1 -DHAVE_SYS_TIME_H=1 -DTIME_WITH_SYS_TIME=1 -DHAVE_TM_ZONE=1 -DHAVE_TM_GMTOFF=1 -DHAVE_ST_BLKSIZE=1 -DSTDC_HEADERS=1 -DNEED_MATHERR=1 -DHAVE_SIGNED_CHAR=1 -DHAVE_SYS_IOCTL_H=1 -DHAVE_SYS_FILIO_H=1 -c -o pltcl.o pltcl.c\nld -Bshareable -x -o pltcl.so pltcl.o -L/usr/local/lib -ltcl83 \nrm pltcl.o\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/tcl'\ngmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/pl/plperl'\nplperl_installdir='/home/ler/pg-test/lib' \\\nEXTRA_INCLUDES='-I/usr/local/include/tcl8.3 -I/usr/local/include/tk8.3 -I../../../src/include' \\\nperl Makefile.PL\nWriting Makefile for plperl\ngmake -f Makefile all\ngmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/pl/plperl'\nmkdir blib\nmkdir blib/lib\nmkdir blib/arch\nmkdir blib/arch/auto\nmkdir blib/arch/auto/plperl\nmkdir blib/lib/auto\nmkdir blib/lib/auto/plperl\ncc -c -I/usr/local/include/tcl8.3 -I/usr/local/include/tk8.3 -I../../../src/include -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" -DPIC -fpic -I/usr/libdata/perl/5.00503/mach/CORE plperl.c\nIn file included from plperl.c:80:\n/usr/libdata/perl/5.00503/mach/CORE/perl.h:1483: warning: `DEBUG' redefined\n../../../src/include/utils/elog.h:22: warning: this is the location of the previous definition\nIn file included from /usr/include/sys/lock.h:45,\n from /usr/include/sys/mount.h:49,\n from /usr/libdata/perl/5.00503/mach/CORE/perl.h:376,\n from plperl.c:80:\n/usr/include/machine/lock.h:148: conflicting types for `s_lock'\n../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\ngmake[4]: *** [plperl.o] Error 1\ngmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/plperl'\ngmake[3]: *** [all] Error 2\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/plperl'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n$ ^D\n\nScript 
done on Tue Nov 28 19:32:31 2000\nBTW: this is the same configure I was using after Peter_E fixed the \nTCL / --with-includes stuff.\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 28 Nov 2000 19:35:58 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "BTW, it compiles fine on UnixWare 7.1.1....\n* Larry Rosenman <[email protected]> [001128 19:36]:\n> The last batch of commits break on FreeBSD 4.2-STABLE. \n> $ uname -a\n> FreeBSD lerbsd.lerctr.org 4.2-STABLE FreeBSD 4.2-STABLE #90: Tue Nov\n> 28 04:07:50 CST 2000\n> [email protected]:/usr/src/sys/compile/LERBSD i386\n> $ \n> \n> Configure:\n> \n> ./configure --prefix=/home/ler/pg-test --enable-syslog \\\n> \t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n> \t--with-openssl \\\n> \t--with-includes=\"/usr/local/include/tcl8.3 /usr/local/include/tk8.3\" \\\n> \t--with-tcl \\\n> \t--with-tclconfig=/usr/local/lib/tcl8.3 \\\n> \t--with-tkconfig=/usr/local/lib/tk8.3\n> \t\n> \n> Last 50 lines of make output:\n> \t\n> ranlib libplpgsql.a\n> /usr/libexec/elf/ld -x -shared -soname libplpgsql.so.1 -o libplpgsql.so.1 pl_parse.o pl_handler.o pl_comp.o pl_exec.o pl_funcs.o -R/home/ler/pg-test/lib\n> rm -f libplpgsql.so\n> ln -s libplpgsql.so.1 libplpgsql.so\n> gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/plpgsql/src'\n> gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/plpgsql'\n> gmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/pl/tcl'\n> /bin/sh mkMakefile.tcldefs.sh '/usr/local/lib/tcl8.3/tclConfig.sh' 'Makefile.tcldefs'\n> gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/tcl'\n> gmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/pl/tcl'\n> cc -O -fPIC -I/usr/local/include/tcl8.3 -I/usr/local/include/tk8.3 -I../../../src/include -DHAVE_UNISTD_H=1 -DHAVE_LIMITS_H=1 -DHAVE_GETCWD=1 -DHAVE_OPENDIR=1 -DHAVE_STRSTR=1 -DHAVE_STRTOL=1 -DHAVE_TMPNAM=1 -DHAVE_WAITPID=1 -DNO_VALUES_H=1 -DHAVE_UNISTD_H=1 -DHAVE_SYS_PARAM_H=1 -DUSE_TERMIOS=1 -DHAVE_SYS_TIME_H=1 -DTIME_WITH_SYS_TIME=1 -DHAVE_TM_ZONE=1 -DHAVE_TM_GMTOFF=1 -DHAVE_ST_BLKSIZE=1 -DSTDC_HEADERS=1 -DNEED_MATHERR=1 -DHAVE_SIGNED_CHAR=1 -DHAVE_SYS_IOCTL_H=1 -DHAVE_SYS_FILIO_H=1 -c -o pltcl.o pltcl.c\n> ld -Bshareable -x -o pltcl.so pltcl.o -L/usr/local/lib -ltcl83 \n> rm pltcl.o\n> gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/tcl'\n> gmake[3]: Entering directory `/home/ler/pg-dev/pgsql/src/pl/plperl'\n> plperl_installdir='/home/ler/pg-test/lib' \\\n> EXTRA_INCLUDES='-I/usr/local/include/tcl8.3 -I/usr/local/include/tk8.3 -I../../../src/include' \\\n> perl Makefile.PL\n> Writing Makefile for plperl\n> gmake -f Makefile all\n> gmake[4]: Entering directory `/home/ler/pg-dev/pgsql/src/pl/plperl'\n> mkdir blib\n> mkdir blib/lib\n> mkdir blib/arch\n> mkdir blib/arch/auto\n> mkdir blib/arch/auto/plperl\n> mkdir blib/lib/auto\n> mkdir blib/lib/auto/plperl\n> cc -c -I/usr/local/include/tcl8.3 -I/usr/local/include/tk8.3 -I../../../src/include -DVERSION=\\\"0.10\\\" -DXS_VERSION=\\\"0.10\\\" -DPIC -fpic -I/usr/libdata/perl/5.00503/mach/CORE plperl.c\n> In file included from plperl.c:80:\n> /usr/libdata/perl/5.00503/mach/CORE/perl.h:1483: warning: `DEBUG' redefined\n> ../../../src/include/utils/elog.h:22: warning: this is the location of the previous definition\n> In file included from 
/usr/include/sys/lock.h:45,\n> from /usr/include/sys/mount.h:49,\n> from /usr/libdata/perl/5.00503/mach/CORE/perl.h:376,\n> from plperl.c:80:\n> /usr/include/machine/lock.h:148: conflicting types for `s_lock'\n> ../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\n> gmake[4]: *** [plperl.o] Error 1\n> gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/plperl'\n> gmake[3]: *** [all] Error 2\n> gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl/plperl'\n> gmake[2]: *** [all] Error 2\n> gmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/pl'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\n> gmake: *** [all] Error 2\n> $ ^D\n> \n> Script done on Tue Nov 28 19:32:31 2000\n> BTW: this is the same configure I was using after Peter_E fixed the \n> TCL / --with-includes stuff.\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 28 Nov 2000 19:37:22 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n> The last batch of commits break on FreeBSD 4.2-STABLE. \n> /usr/include/machine/lock.h:148: conflicting types for `s_lock'\n> ../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\n\nThat's odd. s_lock has been declared the same way right along in our\ncode; I didn't change it. Can you see what's changed to cause a\nconflict where there was none before?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 23:31:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE " }, { "msg_contents": "* Tom Lane <[email protected]> [001128 22:31]:\n> Larry Rosenman <[email protected]> writes:\n> > The last batch of commits break on FreeBSD 4.2-STABLE. \n> > /usr/include/machine/lock.h:148: conflicting types for `s_lock'\n> > ../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\n> \n> That's odd. s_lock has been declared the same way right along in our\n> code; I didn't change it. Can you see what's changed to cause a\n> conflict where there was none before?\nThis may be Matt Dillon's recent commit to FBSD then. Either way, it's \na problem on -STABLE 4.2 of FreeBSD. \n\nHere is the \"Current\" /usr/include/machine/lock.h:\n\n/*\n * Copyright (c) 1997, by Steve Passe\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n * notice, this list of conditions and the following disclaimer.\n * 2. The name of the developer may NOT be used to endorse or promote products\n * derived from this software without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n *\n * $FreeBSD: src/sys/i386/include/lock.h,v 1.11.2.2 2000/09/30 02:49:34 ps Exp $\n */\n\n\n#ifndef _MACHINE_LOCK_H_\n#define _MACHINE_LOCK_H_\n\n\n#ifdef LOCORE\n\n#ifdef SMP\n\n#define\tMPLOCKED\tlock ;\n\n/*\n * Some handy macros to allow logical organization.\n */\n\n#define MP_LOCK\t\tcall\t_get_mplock\n\n#define MP_TRYLOCK\t\t\t\t\t\t\t\\\n\tpushl\t$_mp_lock ;\t\t\t/* GIANT_LOCK */\t\\\n\tcall\t_MPtrylock ;\t\t\t/* try to get lock */\t\\\n\tadd\t$4, %esp\n\n#define MP_RELLOCK\t\t\t\t\t\t\t\\\n\tmovl\t$_mp_lock,%edx ;\t\t/* GIANT_LOCK */\t\\\n\tcall\t_MPrellock_edx\n\n/*\n * Protects the IO APIC and apic_imen as a critical region.\n */\n#define IMASK_LOCK\t\t\t\t\t\t\t\\\n\tpushl\t$_imen_lock ;\t\t\t/* address of lock */\t\\\n\tcall\t_s_lock ;\t\t\t/* MP-safe */\t\t\\\n\taddl\t$4, %esp\n\n#define IMASK_UNLOCK\t\t\t\t\t\t\t\\\n\tmovl\t$0, _imen_lock\n\n#else /* SMP */\n\n#define\tMPLOCKED\t\t\t\t/* NOP */\n\n#define MP_LOCK\t\t\t\t\t/* NOP */\n\n#endif /* SMP */\n\n#else /* LOCORE */\n\n#ifdef SMP\n\n#include <machine/smptests.h>\t\t\t/** xxx_LOCK */\n\n/*\n * Locks regions protected in UP kernel via cli/sti.\n */\n#ifdef USE_MPINTRLOCK\n#define MPINTR_LOCK()\ts_lock(&mpintr_lock)\n#define MPINTR_UNLOCK()\ts_unlock(&mpintr_lock)\n#else\n#define MPINTR_LOCK()\n#define MPINTR_UNLOCK()\n#endif /* USE_MPINTRLOCK */\n\n/*\n * sio/cy lock.\n * XXX should rc (RISCom/8) use this?\n */\n#ifdef USE_COMLOCK\n#define COM_LOCK() \ts_lock(&com_lock)\n#define COM_UNLOCK() \ts_unlock(&com_lock)\n#define COM_DISABLE_INTR() \\\n\t\t{ __asm __volatile(\"cli\" : : : \"memory\"); COM_LOCK(); }\n#define COM_ENABLE_INTR() \\\n\t\t{ COM_UNLOCK(); __asm __volatile(\"sti\"); }\n#else\n#define COM_LOCK()\n#define COM_UNLOCK()\n#define COM_DISABLE_INTR()\tdisable_intr()\n#define COM_ENABLE_INTR()\tenable_intr()\n#endif /* USE_COMLOCK */\n\n/* \n * Clock hardware/struct lock.\n * XXX pcaudio and friends still need this lock installed.\n */\n#ifdef USE_CLOCKLOCK\n#define CLOCK_LOCK()\ts_lock(&clock_lock)\n#define CLOCK_UNLOCK()\ts_unlock(&clock_lock)\n#define CLOCK_DISABLE_INTR() \\\n\t\t{ __asm __volatile(\"cli\" : : : \"memory\"); CLOCK_LOCK(); }\n#define CLOCK_ENABLE_INTR() \\\n\t\t{ CLOCK_UNLOCK(); __asm __volatile(\"sti\"); }\n#else\n#define CLOCK_LOCK()\n#define CLOCK_UNLOCK()\n#define CLOCK_DISABLE_INTR()\tdisable_intr()\n#define CLOCK_ENABLE_INTR()\tenable_intr()\n#endif /* USE_CLOCKLOCK */\n\n#else /* SMP */\n\n#define MPINTR_LOCK()\n#define MPINTR_UNLOCK()\n\n#define COM_LOCK()\n#define COM_UNLOCK()\n#define CLOCK_LOCK()\n#define CLOCK_UNLOCK()\n\n#endif /* SMP */\n\n/*\n * Simple spin lock.\n * It is an error to hold one of these locks while a process is sleeping.\n */\nstruct simplelock {\n\tvolatile int\tlock_data;\n};\n\n/* functions in simplelock.s */\nvoid\ts_lock_init\t\t__P((struct simplelock *));\nvoid\ts_lock\t\t\t__P((struct simplelock *));\nint\ts_lock_try\t\t__P((struct simplelock *));\nvoid\tss_lock\t\t\t__P((struct simplelock *));\nvoid\tss_unlock\t\t__P((struct 
simplelock *));\nvoid\ts_lock_np\t\t__P((struct simplelock *));\nvoid\ts_unlock_np\t\t__P((struct simplelock *));\n\n/* inline simplelock functions */\nstatic __inline void\ns_unlock(struct simplelock *lkp)\n{\n\tlkp->lock_data = 0;\n}\n\n/* global data in mp_machdep.c */\nextern struct simplelock\timen_lock;\nextern struct simplelock\tcpl_lock;\nextern struct simplelock\tfast_intr_lock;\nextern struct simplelock\tintr_lock;\nextern struct simplelock\tclock_lock;\nextern struct simplelock\tcom_lock;\nextern struct simplelock\tmpintr_lock;\nextern struct simplelock\tmcount_lock;\n\n#if !defined(SIMPLELOCK_DEBUG) && MAXCPU > 1\n/*\n * This set of defines turns on the real functions in i386/isa/apic_ipl.s.\n */\n#define\tsimple_lock_init(alp)\ts_lock_init(alp)\n#define\tsimple_lock(alp)\ts_lock(alp)\n#define\tsimple_lock_try(alp)\ts_lock_try(alp)\n#define\tsimple_unlock(alp)\ts_unlock(alp)\n\n#endif /* !SIMPLELOCK_DEBUG && MAXCPU > 1 */\n\n#endif /* LOCORE */\n\n#endif /* !_MACHINE_LOCK_H_ */\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 28 Nov 2000 22:33:10 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "* Larry Rosenman <[email protected]> [001128 22:33]:\n> * Tom Lane <[email protected]> [001128 22:31]:\n> > Larry Rosenman <[email protected]> writes:\n> > > The last batch of commits break on FreeBSD 4.2-STABLE. \n> > > /usr/include/machine/lock.h:148: conflicting types for `s_lock'\n> > > ../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\n> > \n> > That's odd. s_lock has been declared the same way right along in our\n> > code; I didn't change it. Can you see what's changed to cause a\n> > conflict where there was none before?\n> This maybe Matt Dillon's recent commit to FBSD then. Either way, it's \n> a problem on -STABLE 4.2 of FreeBSD. \nNope, I just checked, and that hadn't changed either:\n$ ls -l sys/i386/include/lock.h sys/sys/lock.h \n-rw-r--r-- 1 root wheel 4981 Oct 3 21:43 sys/i386/include/lock.h\n-rw-r--r-- 1 root wheel 9365 Oct 3 21:43 sys/sys/lock.h\n$ ls -l /usr/include/machine/lock.h\n-r--r--r-- 1 root wheel 4981 Oct 4 00:24\n/usr/include/machine/lock.h\n$ \n\n\n> \n> Here is the \"Current\" /usr/include/machine/lock.h:\n> \n> /*\n> * Copyright (c) 1997, by Steve Passe\n> * All rights reserved.\n> *\n> * Redistribution and use in source and binary forms, with or without\n> * modification, are permitted provided that the following conditions\n> * are met:\n> * 1. Redistributions of source code must retain the above copyright\n> * notice, this list of conditions and the following disclaimer.\n> * 2. The name of the developer may NOT be used to endorse or promote products\n> * derived from this software without specific prior written permission.\n> *\n> * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND\n> * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n> * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n> * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE\n> * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n> * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n> * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n> * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n> * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n> * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n> * SUCH DAMAGE.\n> *\n> * $FreeBSD: src/sys/i386/include/lock.h,v 1.11.2.2 2000/09/30 02:49:34 ps Exp $\n> */\n> \n> \n> #ifndef _MACHINE_LOCK_H_\n> #define _MACHINE_LOCK_H_\n> \n> \n> #ifdef LOCORE\n> \n> #ifdef SMP\n> \n> #define\tMPLOCKED\tlock ;\n> \n> /*\n> * Some handy macros to allow logical organization.\n> */\n> \n> #define MP_LOCK\t\tcall\t_get_mplock\n> \n> #define MP_TRYLOCK\t\t\t\t\t\t\t\\\n> \tpushl\t$_mp_lock ;\t\t\t/* GIANT_LOCK */\t\\\n> \tcall\t_MPtrylock ;\t\t\t/* try to get lock */\t\\\n> \tadd\t$4, %esp\n> \n> #define MP_RELLOCK\t\t\t\t\t\t\t\\\n> \tmovl\t$_mp_lock,%edx ;\t\t/* GIANT_LOCK */\t\\\n> \tcall\t_MPrellock_edx\n> \n> /*\n> * Protects the IO APIC and apic_imen as a critical region.\n> */\n> #define IMASK_LOCK\t\t\t\t\t\t\t\\\n> \tpushl\t$_imen_lock ;\t\t\t/* address of lock */\t\\\n> \tcall\t_s_lock ;\t\t\t/* MP-safe */\t\t\\\n> \taddl\t$4, %esp\n> \n> #define IMASK_UNLOCK\t\t\t\t\t\t\t\\\n> \tmovl\t$0, _imen_lock\n> \n> #else /* SMP */\n> \n> #define\tMPLOCKED\t\t\t\t/* NOP */\n> \n> #define MP_LOCK\t\t\t\t\t/* NOP */\n> \n> #endif /* SMP */\n> \n> #else /* LOCORE */\n> \n> #ifdef SMP\n> \n> #include <machine/smptests.h>\t\t\t/** xxx_LOCK */\n> \n> /*\n> * Locks regions protected in UP kernel via cli/sti.\n> */\n> #ifdef USE_MPINTRLOCK\n> #define MPINTR_LOCK()\ts_lock(&mpintr_lock)\n> #define MPINTR_UNLOCK()\ts_unlock(&mpintr_lock)\n> #else\n> #define MPINTR_LOCK()\n> #define MPINTR_UNLOCK()\n> #endif /* USE_MPINTRLOCK */\n> \n> /*\n> * sio/cy lock.\n> * XXX should rc (RISCom/8) use this?\n> */\n> #ifdef USE_COMLOCK\n> #define COM_LOCK() \ts_lock(&com_lock)\n> #define COM_UNLOCK() \ts_unlock(&com_lock)\n> #define COM_DISABLE_INTR() \\\n> \t\t{ __asm __volatile(\"cli\" : : : \"memory\"); COM_LOCK(); }\n> #define COM_ENABLE_INTR() \\\n> \t\t{ COM_UNLOCK(); __asm __volatile(\"sti\"); }\n> #else\n> #define COM_LOCK()\n> #define COM_UNLOCK()\n> #define COM_DISABLE_INTR()\tdisable_intr()\n> #define COM_ENABLE_INTR()\tenable_intr()\n> #endif /* USE_COMLOCK */\n> \n> /* \n> * Clock hardware/struct lock.\n> * XXX pcaudio and friends still need this lock installed.\n> */\n> #ifdef USE_CLOCKLOCK\n> #define CLOCK_LOCK()\ts_lock(&clock_lock)\n> #define CLOCK_UNLOCK()\ts_unlock(&clock_lock)\n> #define CLOCK_DISABLE_INTR() \\\n> \t\t{ __asm __volatile(\"cli\" : : : \"memory\"); CLOCK_LOCK(); }\n> #define CLOCK_ENABLE_INTR() \\\n> \t\t{ CLOCK_UNLOCK(); __asm __volatile(\"sti\"); }\n> #else\n> #define CLOCK_LOCK()\n> #define CLOCK_UNLOCK()\n> #define CLOCK_DISABLE_INTR()\tdisable_intr()\n> #define CLOCK_ENABLE_INTR()\tenable_intr()\n> #endif /* USE_CLOCKLOCK */\n> \n> #else /* SMP */\n> \n> #define MPINTR_LOCK()\n> #define MPINTR_UNLOCK()\n> \n> #define COM_LOCK()\n> #define COM_UNLOCK()\n> #define CLOCK_LOCK()\n> #define CLOCK_UNLOCK()\n> \n> #endif /* SMP */\n> \n> /*\n> * Simple spin lock.\n> * It is an error to hold one of these locks while a process is sleeping.\n> */\n> struct simplelock {\n> \tvolatile int\tlock_data;\n> };\n> \n> /* functions in 
simplelock.s */\n> void\ts_lock_init\t\t__P((struct simplelock *));\n> void\ts_lock\t\t\t__P((struct simplelock *));\n> int\ts_lock_try\t\t__P((struct simplelock *));\n> void\tss_lock\t\t\t__P((struct simplelock *));\n> void\tss_unlock\t\t__P((struct simplelock *));\n> void\ts_lock_np\t\t__P((struct simplelock *));\n> void\ts_unlock_np\t\t__P((struct simplelock *));\n> \n> /* inline simplelock functions */\n> static __inline void\n> s_unlock(struct simplelock *lkp)\n> {\n> \tlkp->lock_data = 0;\n> }\n> \n> /* global data in mp_machdep.c */\n> extern struct simplelock\timen_lock;\n> extern struct simplelock\tcpl_lock;\n> extern struct simplelock\tfast_intr_lock;\n> extern struct simplelock\tintr_lock;\n> extern struct simplelock\tclock_lock;\n> extern struct simplelock\tcom_lock;\n> extern struct simplelock\tmpintr_lock;\n> extern struct simplelock\tmcount_lock;\n> \n> #if !defined(SIMPLELOCK_DEBUG) && MAXCPU > 1\n> /*\n> * This set of defines turns on the real functions in i386/isa/apic_ipl.s.\n> */\n> #define\tsimple_lock_init(alp)\ts_lock_init(alp)\n> #define\tsimple_lock(alp)\ts_lock(alp)\n> #define\tsimple_lock_try(alp)\ts_lock_try(alp)\n> #define\tsimple_unlock(alp)\ts_unlock(alp)\n> \n> #endif /* !SIMPLELOCK_DEBUG && MAXCPU > 1 */\n> \n> #endif /* LOCORE */\n> \n> #endif /* !_MACHINE_LOCK_H_ */\n> > \n> > \t\t\tregards, tom lane\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 28 Nov 2000 22:36:29 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "* Tom Lane <[email protected]> [001128 22:31]:\n> Larry Rosenman <[email protected]> writes:\n> > The last batch of commits break on FreeBSD 4.2-STABLE. \n> > /usr/include/machine/lock.h:148: conflicting types for `s_lock'\n> > ../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\n> \n> That's odd. s_lock has been declared the same way right along in our\n> code; I didn't change it. Can you see what's changed to cause a\n> conflict where there was none before?\n> \n> \t\t\tregards, tom lane\nOther things that may be an issue:\n\n1) BINUTILS 2.10.1\n2) OPENSSL 0.9.6 \n\nboth just MFC'd into FreeBSD recently, but I believe we built until\ntonite. \n\nI can make you an account on the box if you'd like....\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 28 Nov 2000 22:40:44 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "* Larry Rosenman <[email protected]> [001128 20:44] wrote:\n> * Tom Lane <[email protected]> [001128 22:31]:\n> > Larry Rosenman <[email protected]> writes:\n> > > The last batch of commits break on FreeBSD 4.2-STABLE. \n> > > /usr/include/machine/lock.h:148: conflicting types for `s_lock'\n> > > ../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\n> > \n> > That's odd. s_lock has been declared the same way right along in our\n> > code; I didn't change it. 
Can you see what's changed to cause a\n> > conflict where there was none before?\n> > \n> > \t\t\tregards, tom lane\n> Other things that may be an issue:\n> \n> 1) BINUTILS 2.10.1\n> 2) OPENSSL 0.9.6 \n> \n> both just MFC'd into FreeBSD recently, but I believe we built until\n> tonite. \n> \n> I can make you an account on the box if you'd like....\n\nMy signifigant other just installed a fresh copy of 4.2 last night,\nunfortunetly the poor box is only a 233mhz, it'll be a while before\nwe build -stable on it.\n\nHowever I'm confident I can have a fix within a couple of days.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 28 Nov 2000 20:46:43 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "My offer stands for you as well, if you'd like an account\non this P-III 600E, you are welcome to one...\n\nLER\n* Alfred Perlstein <[email protected]> [001128 22:46]:\n> * Larry Rosenman <[email protected]> [001128 20:44] wrote:\n> > * Tom Lane <[email protected]> [001128 22:31]:\n> > > Larry Rosenman <[email protected]> writes:\n> > > > The last batch of commits break on FreeBSD 4.2-STABLE. \n> > > > /usr/include/machine/lock.h:148: conflicting types for `s_lock'\n> > > > ../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\n> > > \n> > > That's odd. s_lock has been declared the same way right along in our\n> > > code; I didn't change it. Can you see what's changed to cause a\n> > > conflict where there was none before?\n> > > \n> > > \t\t\tregards, tom lane\n> > Other things that may be an issue:\n> > \n> > 1) BINUTILS 2.10.1\n> > 2) OPENSSL 0.9.6 \n> > \n> > both just MFC'd into FreeBSD recently, but I believe we built until\n> > tonite. 
\n> > \n> > I can make you an account on the box if you'd like....\n> \n> My signifigant other just installed a fresh copy of 4.2 last night,\n> unfortunetly the poor box is only a 233mhz, it'll be a while before\n> we build -stable on it.\n> \n> However I'm confident I can have a fix within a couple of days.\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 28 Nov 2000 22:48:51 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "* Larry Rosenman <[email protected]> [001128 20:52] wrote:\n> My offer stands for you as well, if you'd like an account\n> on this P-III 600E, you are welcome to one...\n\nI just remebered my laptop in the other room, it's a pretty recent 4.2.\n\nI'll give it shot.\n\nYes, it's possible to forget about a computer...\n http://people.freebsd.org/~alfred/images/lab.jpg\n\n:)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 28 Nov 2000 20:55:37 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n>> Here is the \"Current\" /usr/include/machine/lock.h:\n>> ...\n>> void\ts_lock\t\t\t__P((struct simplelock *));\n>> ...\n\nIck. Seems like the relevant question is not so much \"why did it break\"\nas \"how did it ever manage to work\"?\n\nI have no problem with renaming our s_lock, if that's what it takes,\nbut I'm curious to know why there is a problem now and not before.\nWe've called that routine s_lock for a *long* time, so it seems\nlike there must be some factor involved that I don't see just yet...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Nov 2000 23:55:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE " }, { "msg_contents": "* Tom Lane <[email protected]> [001128 22:55]:\n> Larry Rosenman <[email protected]> writes:\n> >> Here is the \"Current\" /usr/include/machine/lock.h:\n> >> ...\n> >> void\ts_lock\t\t\t__P((struct simplelock *));\n> >> ...\n> \n> Ick. Seems like the relevant question is not so much \"why did it break\"\n> as \"how did it ever manage to work\"?\n> \n> I have no problem with renaming our s_lock, if that's what it takes,\n> but I'm curious to know why there is a problem now and not before.\n> We've called that routine s_lock for a *long* time, so it seems\n> like there must be some factor involved that I don't see just yet...\nDidn't your commit message say something about the TAS and NON-TAS\npaths being the same now? 
\n\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 28 Nov 2000 22:56:49 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "Larry Rosenman <[email protected]> writes:\n>> We've called that routine s_lock for a *long* time, so it seems\n>> like there must be some factor involved that I don't see just yet...\n\n> Didn't your commit message say something about the TAS and NON-TAS\n> paths being the same now? \n\nYeah, but don't tell me you were running on a non-TAS platform...\nthat stuff didn't work...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Nov 2000 00:02:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE " }, { "msg_contents": "* Tom Lane <[email protected]> [001128 23:03]:\n> Larry Rosenman <[email protected]> writes:\n> >> We've called that routine s_lock for a *long* time, so it seems\n> >> like there must be some factor involved that I don't see just yet...\n> \n> > Didn't your commit message say something about the TAS and NON-TAS\n> > paths being the same now? \n> \n> Yeah, but don't tell me you were running on a non-TAS platform...\n> that stuff didn't work...\nThe configure stuff used tas/dummy.s, so I'm not sure.... \n\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 28 Nov 2000 23:07:17 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "* Alfred Perlstein <[email protected]> [001128 22:55]:\n> * Larry Rosenman <[email protected]> [001128 20:52] wrote:\n> > My offer stands for you as well, if you'd like an account\n> > on this P-III 600E, you are welcome to one...\n> \n> I just remebered my laptop in the other room, it's a pretty recent 4.2.\n> \n> I'll give it shot.\n> \n> Yes, it's possible to forget about a computer...\n> http://people.freebsd.org/~alfred/images/lab.jpg\n> \n> :)\nI've got to go to bed now, but the offer stands. If y'all need an \naccount, peter e's got one already, and I can make more tomorrow.\n\nGood luck, all. \n\nLER\n\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 28 Nov 2000 23:16:34 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "* Tom Lane <[email protected]> [001128 23:03]:\n> Larry Rosenman <[email protected]> writes:\n> >> We've called that routine s_lock for a *long* time, so it seems\n> >> like there must be some factor involved that I don't see just yet...\n> \n> > Didn't your commit message say something about the TAS and NON-TAS\n> > paths being the same now? \n> \n> Yeah, but don't tell me you were running on a non-TAS platform...\n> that stuff didn't work...\nTom's commit from tonite fixed it. 
Regression running as I type... \n\nThanks, Tom!\n\nLER\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Wed, 29 Nov 2000 19:53:37 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "* Larry Rosenman <[email protected]> [001129 19:54]:\n> * Tom Lane <[email protected]> [001128 23:03]:\n> > Larry Rosenman <[email protected]> writes:\n> > >> We've called that routine s_lock for a *long* time, so it seems\n> > >> like there must be some factor involved that I don't see just yet...\n> > \n> > > Didn't your commit message say something about the TAS and NON-TAS\n> > > paths being the same now? \n> > \n> > Yeah, but don't tell me you were running on a non-TAS platform...\n> > that stuff didn't work...\n> Tom's commit from tonite fixed it. Regression running as I type... \nand passed. :-) \n\n\n> \n> Thanks, Tom!\n> \n> LER\n> \n> > \n> > \t\t\tregards, tom lane\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: [email protected]\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Wed, 29 Nov 2000 19:55:19 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "* Larry Rosenman <[email protected]> [001128 20:44] wrote:\n> * Tom Lane <[email protected]> [001128 22:31]:\n> > Larry Rosenman <[email protected]> writes:\n> > > The last batch of commits break on FreeBSD 4.2-STABLE. \n> > > /usr/include/machine/lock.h:148: conflicting types for `s_lock'\n> > > ../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\n> > \n> > That's odd. s_lock has been declared the same way right along in our\n> > code; I didn't change it. Can you see what's changed to cause a\n> > conflict where there was none before?\n> > \n> > \t\t\tregards, tom lane\n> Other things that may be an issue:\n> \n> 1) BINUTILS 2.10.1\n> 2) OPENSSL 0.9.6 \n> \n> both just MFC'd into FreeBSD recently, but I believe we built until\n> tonite. \n> \n> I can make you an account on the box if you'd like....\n\nGrr, couldn't find the original message. I think you thought\nyou solved your problem with building on FreeBSD, however I\nthink you just forgot to compile with perl support enabled.\n\nWhen I compiled with perl support it broke. This isn't a \npostgresql bug, nor a FreeBSD bug, although the fault lies\nmostly with FreeBSD for polluting the C namespace a _lot_\nwhen sys/mount.h is included.\n\nWhat happens is that the perl code brings in perl.h which brings\nin sys/mount.h, sys/mount.h includes sys/lock.h because our\nkernel structure \"mount\" has a VFS lock in it, VFS locks happen\nto contain spinlocks (simplelocks) and one of our functions\nto manipulate the simplelocks is called s_lock(). 
This causes\na namespace conflict which causes the compile error.\n\nAnyhow, to address the problem I've removed struct mount from\nuserland visibility in both FreeBSD 5.x (current) and FreeBSD 4.x\n(stable).\n\nThings should work now but let me know if you have any other\nproblems.\n\nAnd thanks for pointing it out and offering to help track it\ndown.\n\nhere's the patch if you don't want to cvsup your machine all the\nway.\n\nIndex: sys/sys/mount.h\n===================================================================\nRCS file: /home/ncvs/src/sys/sys/mount.h,v\nretrieving revision 1.89\ndiff -u -r1.89 mount.h\n--- sys/sys/mount.h\t2000/01/19 06:07:34\t1.89\n+++ sys/sys/mount.h\t2000/12/04 20:00:54\n@@ -46,7 +46,9 @@\n #endif /* !_KERNEL */\n \n #include <sys/queue.h>\n+#ifdef _KERNEL\n #include <sys/lock.h>\n+#endif\n \n typedef struct fsid { int32_t val[2]; } fsid_t;\t/* file system id type */\n \n@@ -99,6 +101,7 @@\n \tlong f_spare[2];\t\t/* unused spare */\n };\n \n+#ifdef _KERNEL\n /*\n * Structure per mounted file system. Each mounted file system has an\n * array of operations and an instance record. The file systems are\n@@ -122,6 +125,7 @@\n \ttime_t\t\tmnt_time;\t\t/* last time written*/\n \tu_int\t\tmnt_iosize_max;\t\t/* max IO request size */\n };\n+#endif /* _KERNEL */\n \n /*\n * User specifiable flags.\nIndex: usr.bin/fstat/cd9660.c\n===================================================================\nRCS file: /home/ncvs/src/usr.bin/fstat/cd9660.c,v\nretrieving revision 1.1.2.1\ndiff -u -r1.1.2.1 cd9660.c\n--- usr.bin/fstat/cd9660.c\t2000/07/02 10:20:24\t1.1.2.1\n+++ usr.bin/fstat/cd9660.c\t2000/12/04 23:35:21\n@@ -46,7 +46,9 @@\n #include <sys/stat.h>\n #include <sys/time.h>\n #include <sys/vnode.h>\n+#define _KERNEL\n #include <sys/mount.h>\n+#undef _KERNEL\n \n #include <isofs/cd9660/cd9660_node.h>\n \nIndex: usr.bin/fstat/fstat.c\n===================================================================\nRCS file: /home/ncvs/src/usr.bin/fstat/fstat.c,v\nretrieving revision 1.21.2.2\ndiff -u -r1.21.2.2 fstat.c\n--- usr.bin/fstat/fstat.c\t2000/07/02 10:28:38\t1.21.2.2\n+++ usr.bin/fstat/fstat.c\t2000/12/04 20:01:08\n@@ -66,8 +66,8 @@\n #include <sys/file.h>\n #include <ufs/ufs/quota.h>\n #include <ufs/ufs/inode.h>\n-#undef _KERNEL\n #include <sys/mount.h>\n+#undef _KERNEL\n #include <nfs/nfsproto.h>\n #include <nfs/rpcv2.h>\n #include <nfs/nfs.h>\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 5 Dec 2000 02:00:02 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "* Alfred Perlstein <[email protected]> [001205 04:00]:\n> * Larry Rosenman <[email protected]> [001128 20:44] wrote:\n> > * Tom Lane <[email protected]> [001128 22:31]:\n> > > Larry Rosenman <[email protected]> writes:\n> > > > The last batch of commits break on FreeBSD 4.2-STABLE. \n> > > > /usr/include/machine/lock.h:148: conflicting types for `s_lock'\n> > > > ../../../src/include/storage/s_lock.h:402: previous declaration of `s_lock'\n> > > \n> > > That's odd. s_lock has been declared the same way right along in our\n> > > code; I didn't change it. 
Can you see what's changed to cause a\n> > > conflict where there was none before?\n> > > \n> > > \t\t\tregards, tom lane\n> > Other things that may be an issue:\n> > \n> > 1) BINUTILS 2.10.1\n> > 2) OPENSSL 0.9.6 \n> > \n> > both just MFC'd into FreeBSD recently, but I believe we built until\n> > tonite. \n> > \n> > I can make you an account on the box if you'd like....\n> \n> Grr, couldn't find the original message. I think you thought\n> you solved your problem with building on FreeBSD, however I\n> think you just forgot to compile with perl support enabled.\n> \n> When I compiled with perl support it broke. This isn't a \n> postgresql bug, nor a FreeBSD bug although the fault lies\n> mostly with FreeBSD for polluting the C namespace a _lot_\n> when sys/mount.h is included.\nActually, perl support was included, and Tom's fix in the\ncvs DID fix it, but fixing FreeBSD doesn't hurt. Here is\nwhat I've been configuring with:\n\n\n./configure --prefix=/home/ler/pg-test --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-openssl \\\n\t--with-includes=\"/usr/local/include/tcl8.3 /usr/local/include/tk8.3\" \\\n\t--with-tcl \\\n\t--with-tclconfig=/usr/local/lib/tcl8.3 \\\n\t--with-tkconfig=/usr/local/lib/tk8.3\n\t\n\t\n\n> \n> What happens is the the perl code brings in perl.h which brings\n> in sys/mount.h, sys/mount.h includes sys/lock.h because our\n> kernel structure \"mount\" has a VFS lock in it, VFS locks happen\n> to contain spinlocks (simplelocks) and one of our our functions\n> to manipulate the simplelocks is called s_lock(). This causes\n> a namespace conflict which causes the compile error.\n> \n> Anyhow, to address the problem I've removed struct mount from\n> userland visibility in both FreeBSD 5.x (current) and FreeBSD 4.x\n> (stable).\n> \n> Things should work now but let me know if you have any other\n> problems.\n> \n> And thanks for pointing it out and offering to help track it\n> down.\n> \n> here's the patch if you don't want to cvsup your machine all the\n> way.\n> \n> Index: sys/sys/mount.h\n> ===================================================================\n> RCS file: /home/ncvs/src/sys/sys/mount.h,v\n> retrieving revision 1.89\n> diff -u -r1.89 mount.h\n> --- sys/sys/mount.h\t2000/01/19 06:07:34\t1.89\n> +++ sys/sys/mount.h\t2000/12/04 20:00:54\n> @@ -46,7 +46,9 @@\n> #endif /* !_KERNEL */\n> \n> #include <sys/queue.h>\n> +#ifdef _KERNEL\n> #include <sys/lock.h>\n> +#endif\n> \n> typedef struct fsid { int32_t val[2]; } fsid_t;\t/* file system id type */\n> \n> @@ -99,6 +101,7 @@\n> \tlong f_spare[2];\t\t/* unused spare */\n> };\n> \n> +#ifdef _KERNEL\n> /*\n> * Structure per mounted file system. Each mounted file system has an\n> * array of operations and an instance record. 
The file systems are\n> @@ -122,6 +125,7 @@\n> \ttime_t\t\tmnt_time;\t\t/* last time written*/\n> \tu_int\t\tmnt_iosize_max;\t\t/* max IO request size */\n> };\n> +#endif /* _KERNEL */\n> \n> /*\n> * User specifiable flags.\n> Index: usr.bin/fstat/cd9660.c\n> ===================================================================\n> RCS file: /home/ncvs/src/usr.bin/fstat/cd9660.c,v\n> retrieving revision 1.1.2.1\n> diff -u -r1.1.2.1 cd9660.c\n> --- usr.bin/fstat/cd9660.c\t2000/07/02 10:20:24\t1.1.2.1\n> +++ usr.bin/fstat/cd9660.c\t2000/12/04 23:35:21\n> @@ -46,7 +46,9 @@\n> #include <sys/stat.h>\n> #include <sys/time.h>\n> #include <sys/vnode.h>\n> +#define _KERNEL\n> #include <sys/mount.h>\n> +#undef _KERNEL\n> \n> #include <isofs/cd9660/cd9660_node.h>\n> \n> Index: usr.bin/fstat/fstat.c\n> ===================================================================\n> RCS file: /home/ncvs/src/usr.bin/fstat/fstat.c,v\n> retrieving revision 1.21.2.2\n> diff -u -r1.21.2.2 fstat.c\n> --- usr.bin/fstat/fstat.c\t2000/07/02 10:28:38\t1.21.2.2\n> +++ usr.bin/fstat/fstat.c\t2000/12/04 20:01:08\n> @@ -66,8 +66,8 @@\n> #include <sys/file.h>\n> #include <ufs/ufs/quota.h>\n> #include <ufs/ufs/inode.h>\n> -#undef _KERNEL\n> #include <sys/mount.h>\n> +#undef _KERNEL\n> #include <nfs/nfsproto.h>\n> #include <nfs/rpcv2.h>\n> #include <nfs/nfs.h>\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 5 Dec 2000 04:10:58 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n> Anyhow, to address the problem I've removed struct mount from\n> userland visibility in both FreeBSD 5.x (current) and FreeBSD 4.x\n> (stable).\n\nThat might fix things on your box, but we can hardly rely on it as an\nanswer for everyone running FreeBSD :-(.\n\nAnyway, I've already worked around the problem by rearranging the PG\nheaders so that plperl doesn't need to import s_lock.h ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Dec 2000 10:14:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE " }, { "msg_contents": "* Tom Lane <[email protected]> [001205 07:14] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > Anyhow, to address the problem I've removed struct mount from\n> > userland visibility in both FreeBSD 5.x (current) and FreeBSD 4.x\n> > (stable).\n> \n> That might fix things on your box, but we can hardly rely on it as an\n> answer for everyone running FreeBSD :-(.\n> \n> Anyway, I've already worked around the problem by rearranging the PG\n> headers so that plperl doesn't need to import s_lock.h ...\n\nWell I didn't say it was completely our fault, it's just that we\ntry pretty hard not to let those types of structs leak into userland\nand for us to \"steal\" something called s_lock from userland, well\nthat's no good. 
:)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 5 Dec 2000 07:16:47 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE" }, { "msg_contents": "Actually, Alfred is a FreeBSD committer, and committed it \nto the FreeBSD source tree. \n\nIt's for ALL at FreeBSD 4-STABLE as of today. \n\nLER\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Tuesday, December 05, 2000 9:14 AM\nTo: Alfred Perlstein\nCc: Larry Rosenman; PostgreSQL Hackers List\nSubject: Re: [HACKERS] Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE \n\n\nAlfred Perlstein <[email protected]> writes:\n> Anyhow, to address the problem I've removed struct mount from\n> userland visibility in both FreeBSD 5.x (current) and FreeBSD 4.x\n> (stable).\n\nThat might fix things on your box, but we can hardly rely on it as an\nanswer for everyone running FreeBSD :-(.\n\nAnyway, I've already worked around the problem by rearranging the PG\nheaders so that plperl doesn't need to import s_lock.h ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 5 Dec 2000 09:23:26 -0600", "msg_from": "\"Larry Rosenman\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Re: LOCK Fixes/Break on FreeBSD 4.2-STABLE " } ]
[ { "msg_contents": "Hi,\n I have two questions\n\n1. Is it possible to set up a set of redundant disks for a database? one\nof them being remote from the database?\n\n\n2. If I want to use my i/o routines for disk i/o, is it possible?\n does postgres support such APIs?\n\n\n\n\nthanks,\nSandeep\n\n\n\n", "msg_date": "Tue, 28 Nov 2000 19:30:33 -0800", "msg_from": "Sandeep Joshi <[email protected]>", "msg_from_op": true, "msg_subject": "redundancy and disk i/o" }, { "msg_contents": "On Tue, 28 Nov 2000, Sandeep Joshi wrote:\n\n> 1. Is it possible to set up a set of redundant disks for a database? one \n> of them being remote from the database?\n\nCall IBM Global Services, and tell them you are interested in purchasing\nan RS/6000 with 7133 SSA drives, one tray off-site using the fiber\nextenders.\n\nWith those, you can have your drives up to 2.4 km from the server they're\nconnected to, while they still are local to the machine. (And you still\nget 4 simultaneous reads/writes in each direction of the loop, for a total\nof 160 Mbyte/sec transfer.)\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Mon, 4 Dec 2000 01:03:03 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: redundancy and disk i/o" }, { "msg_contents": "At 07:30 PM 11/28/00 -0800, Sandeep Joshi wrote:\n>Hi,\n> I have two questions\n>\n>1. Is it possible to set up a set of redundant disks for a database? one\n>of them being remote from the database?\n\nIf you're talking about replication, PostgreSQL, Inc. will be offering a\nsolution to its $19,000/yr Platinum Partners shortly. It will be released\nin open source form no more than two years after its release in proprietary\nform.\n\nCheck out http://www.erserver.com for more details, and http://www.pgsql.com\nfor more details on the PostgreSQL, Inc. partnership program.\n\nLocally, you can use RAID. Are there open-source journaling filesystems that\noffer filesystem-level replication out there?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 03 Dec 2000 23:18:32 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: redundancy and disk i/o" }, { "msg_contents": "> If you're talking about replication, PostgreSQL, Inc. will be offering a\n> solution to its $19,000/yr Platinum Partners shortly. It will be released\n> in open source form no more than two years after its release in proprietary\n> form.\n> Check out http://www.erserver.com for more details, and http://www.pgsql.com\n> for more details on the PostgreSQL, Inc. partnership program.\n\nThanks Don for the reference. As you know, there will also be a "roll\nyour own" replication toolset contributed by PostgreSQL Inc. under the\nBSD license in the PostgreSQL contrib/ directory for the 7.1 release,\nassuming that this inclusion is acceptable to the community. 
Given the\ngeneral interest, I hope that this won't be an issue, and that the\nrecent flames will have died down enough to not be a continued\ndistraction.\n\n - Thomas\n", "msg_date": "Mon, 04 Dec 2000 18:09:28 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: redundancy and disk i/o" } ]
[ { "msg_contents": "\n> BTW, it also seems like a good idea to reorder the postmaster's\n> startup operations so that the data-directory lockfile is checked\n> before trying to acquire the port lockfile, instead of after. That\n> way, in the common scenario where you're trying to start a second\n> postmaster in the same directory + same port, it'd fail cleanly\n> even if /tmp/.s.PGSQL.5432.lock had disappeared.\n\nFine, sounds like reordering would eliminate the need for the socket lock \nanyway, no ?\n\nAndreas\n", "msg_date": "Wed, 29 Nov 2000 09:46:18 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: F_SETLK is looking worse and worse..." }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> BTW, it also seems like a good idea to reorder the postmaster's\n>> startup operations so that the data-directory lockfile is checked\n>> before trying to acquire the port lockfile, instead of after. That\n>> way, in the common scenario where you're trying to start a second\n>> postmaster in the same directory + same port, it'd fail cleanly\n>> even if /tmp/.s.PGSQL.5432.lock had disappeared.\n\n> Fine, sounds like reordering would eliminate the need for the socket lock \n> anyway, no ?\n\nNot at all. If you start two postmasters in different data directories\nbut with the same port number, you still have a socket-file conflict\nthat needs to be detected.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Nov 2000 10:57:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: F_SETLK is looking worse and worse... " } ]
[ { "msg_contents": "\n> NO, I just tested how solid PgSQL is. I ran a program busy inserting records into a PG table, when I \n> suddenly pulled out power from my machine and restarted PG; I cannot insert any records into the database\n> table, all backends are dead without any response (not a core dump). Note that I am using FreeBSD 4.2, \n> it's rock solid, it's not an OS crash, it just lost power.\n\nPostgreSQL Versions 7.0 and below have the potential of corrupting indices when the system crashes.\nThe usual procedure would be to reindex the database. Hiroshi has written code to allow this.\nIt is weird that your installation blocks. Have you checked the postmaster log ?\n\n> We use WindowsNT and MSSQL on our production\n> server; before we accepted MSSQL, we used this method to test if MSSQL could endure this kind of strike,\n> and it was OK, all databases were safely recovered and we could continue our work. We are a stock exchange company,\n> our servers are storing millions of $ in financial numbers, and we hope there are no problems in this case; \n> we are using a UPS, but a UPS is not everything, and if you bet everything on the UPS, you must be an idiot. \n> I know you must be an advocate of PG, but we are professional customers, corporate users; we store critical\n> data in the database, not your garbage data.\n\nYes, this is a test I would also do before putting very sensitive data onto a particular brand of database.\n\nFortunately Version 7.1 of PostgreSQL will live up to your expectations in this area.\n\nAndreas\n", "msg_date": "Wed, 29 Nov 2000 09:55:10 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: beta testing version" }, { "msg_contents": "\n----- Original Message ----- \nFrom: Zeugswetter Andreas SB <[email protected]>\nSubject: AW: [HACKERS] beta testing version\n\n> \n> > NO, I just tested how solid PgSQL is. I ran a program busy inserting records into a PG table, when I \n> > suddenly pulled out power from my machine and restarted PG; I cannot insert any records into the database\n> > table, all backends are dead without any response (not a core dump). Note that I am using FreeBSD 4.2, \n> > it's rock solid, it's not an OS crash, it just lost power.\n> \n> PostgreSQL Versions 7.0 and below have the potential of corrupting indices when the system crashes.\n> The usual procedure would be to reindex the database. Hiroshi has written code to allow this.\n> It is weird that your installation blocks. Have you checked the postmaster log ?\n> \n\nThe REINDEX command failed. I have a unique index, and PG claimed there were two records with the same key, \nbut obviously that's not the fact.\n\n\n>[snip] \n> Yes, this is a test I would also do before putting very sensitive data onto a particular brand of database.\n> \n> Fortunately Version 7.1 of PostgreSQL will live up to your expectations in this area.\n> \n> Andreas\n> \n\nThanks, \n---\nXuYifeng\n\n\n", "msg_date": "Wed, 29 Nov 2000 20:27:37 +0800", "msg_from": "\"xuyifeng\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" } ]
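To make the recovery advice above concrete, here is a minimal sketch of the reindex procedure for a 7.0-era installation after a crash; the table and index names are hypothetical:

    -- Rebuild every index on a possibly-damaged table:
    REINDEX TABLE mytable;

    -- Or rebuild a single index:
    REINDEX INDEX mytable_key_ndx;

    -- If REINDEX itself fails, dropping and recreating the index is the
    -- usual fallback:
    DROP INDEX mytable_key_ndx;
    CREATE UNIQUE INDEX mytable_key_ndx ON mytable (key);

Note that if the crash really did leave duplicate rows behind, the CREATE UNIQUE INDEX fails with the same duplicate-key complaint xuyifeng reports, and the duplicates have to be removed by hand first.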
[ { "msg_contents": "Why is a \"select * from table1 where field in (select field from table2\nwhere condition )\"\n\nis so dramatically bad compared to:\n\n\"select * from table1, table2 where table1.field = table2.field and\ncondition\"\n\nI can't understand why the first query isn't optimized better than the\nsecond one. The 'in' query forces a full table scan (it shouldn't) and\nthe second one uses the indexes. Does anyone know why?\n\nI know I am no SQL guru, but my gut tells me that the 'in' operator\nshould be far more efficient than a join. \n\nHere are the actual queries:\n\ncdinfo=# explain select trackid from zsong where muzenbr in (select\nmuzenbr from ztitles where title = 'Mulan') ;\nNOTICE: QUERY PLAN:\n \nSeq Scan on zsong (cost=100000000.00..219321449380756.66 rows=2193213\nwidth=4)\n SubPlan\n -> Materialize (cost=100000022.50..100000022.50 rows=10 width=4)\n -> Seq Scan on ztitles (cost=100000000.00..100000022.50\nrows=10 width=4) \n\ncdinfo=# explain select trackid from zsong, ztitles where\nztitles.muzenbr = zsong.muzenbr and title = 'Mulan' ;\nNOTICE: QUERY PLAN:\n \nMerge Join (cost=0.00..183664.10 rows=219321 width=12)\n -> Index Scan using zsong_muznbr on zsong (cost=0.00..156187.31\nrows=2193213 width=8)\n -> Index Scan using ztitles_pkey on ztitles (cost=0.00..61.50\nrows=10 width=4) \n\ncdinfo=# \\d zsong\n Table \"zsong\"\n Attribute | Type | Modifier\n-----------+-------------------+-------------------------------------------\n muzenbr | integer |\n disc | integer |\n trk | integer |\n song | character varying |\n trackid | integer | not null default\nnextval('trackid'::text)\n artistid | integer |\n acd | character varying |\nIndices: zsong_muznbr,\n zsong_pkey \n\ncdinfo=# \\d ztitles\n Table \"ztitles\"\n Attribute | Type | Modifier\n------------+-------------------+----------\n muzenbr | integer | not null\n artistid | integer |\n cat2 | character varying |\n cat3 | character varying |\n cat4 | character varying |\n performer | character varying |\n performer2 | character varying |\n title | character varying |\n artist1 | character varying |\n engineer | character varying |\n producer | character varying |\n labelname | character varying |\n catalog | character varying |\n distribut | character varying |\n released | character varying |\n origrel | character varying |\n nbrdiscs | character varying |\n spar | character varying |\n minutes | character varying |\n seconds | character varying |\n monostereo | character varying |\n studiolive | character varying |\n available | character(1) |\n previews | character varying |\n pnotes | character varying |\n acd | character varying |\nIndex: ztitles_pkey \n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Wed, 29 Nov 2000 17:51:54 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "SQL 'in' vs join." }, { "msg_contents": "Hannu Krosing wrote:\n> \n> mlw wrote:\n> >\n> > Why is a \"select * from table1 where field in (select field from table2\n> > where condition )\"\n> >\n> > is so dramatically bad compared to:\n> >\n> > \"select * from table1, table2 where table1.field = table2.field and\n> > condition\"\n> >\n> > I can't understand why the first query isn't optimized better than the\n> > second one. The 'in' query forces a full table scan (it shouldn't) and\n> > the second one uses the indexes. 
Does anyone know why?\n> \n> It's not done yet, and probably somewhat difficult to do in a general\n> fashion\n> \n> > I know I am no SQL guru, but my gut tells me that the 'in' operator\n> > should be far more efficient than a join.\n> >\n> > Here are the actual queries:\n> >\n> > cdinfo=# explain select trackid from zsong where muzenbr in (select\n> > muzenbr from ztitles where title = 'Mulan') ;\n> \n> try\n> \n> explain\n> select trackid\n> from zsong\n> where muzenbr in (\n> select muzenbr\n> from ztitles\n> where title = 'Mulan'\n> and ztitles.muzenbr=zsong.muzenbr\n> );\n> \n> this should hint the current optimizer to do the right thing;\n> \n> -----------------\n> Hannu\n\nNope:\n\ncdinfo=# explain\ncdinfo-# select trackid\ncdinfo-# from zsong\ncdinfo-# where muzenbr in (\ncdinfo(# select muzenbr\ncdinfo(# from ztitles\ncdinfo(# where title = 'Mulan'\ncdinfo(# and ztitles.muzenbr=zsong.muzenbr\ncdinfo(# );\nNOTICE: QUERY PLAN:\n \nSeq Scan on zsong (cost=100000000.00..104474515.18 rows=2193213\nwidth=4)\n  SubPlan\n    ->  Index Scan using ztitles_pkey on ztitles (cost=0.00..4.05\nrows=1 width=4) \n\n\nBut what I also find odd is, look at the components:\n\ncdinfo=# explain select muzenbr from ztitles where title = 'Mulan' ;\nNOTICE: QUERY PLAN:\n \nIndex Scan using ztitles_title_ndx on ztitles (cost=0.00..7.08 rows=1\nwidth=4) \n\ncdinfo=# explain select trackid from zsong where muzenbr in ( 1,2,3,4,5)\n;\nNOTICE: QUERY PLAN:\n \nIndex Scan using zsong_muzenbr_ndx, zsong_muzenbr_ndx,\nzsong_muzenbr_ndx, zsong_muzenbr_ndx, zsong_muzenbr_ndx on zsong \n(cost=0.00..392.66 rows=102 width=4) \n\n\nNow, given the two components, each with very low costs, it chooses to\ndo a sequential scan on the table. I don't get it. I have been\nhaving no end of problems with Postgres' optimizer. It just seems to be\nbrain dead at times. It is a huge point of frustration to me. I am tied\nto postgres in my current project, and I fear that I will not be able to\nimplement certain features because of this sort of behavior.\n\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Thu, 30 Nov 2000 08:37:42 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL 'in' vs join." }, { "msg_contents": "\n> Now, given the two components, each with very low costs, it chooses to\n> do a sequential scan on the table. I don't get it. \n\n\nRead the FAQ?\n\nhttp://www.postgresql.org/docs/faq-english.html#4.23\n"4.23) Why are my subqueries using IN so slow?"\n\n\n- Andrew\n\n\n", "msg_date": "Fri, 1 Dec 2000 02:05:01 +1100", "msg_from": "\"Andrew Snow\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: SQL 'in' vs join." }, { "msg_contents": "At 08:37 AM 11/30/00 -0500, mlw wrote:\n>> mlw wrote:\n>> >\n>> > Why is a "select * from table1 where field in (select field from table2\n>> > where condition )"\n>> >\n>> > is so dramatically bad compared to:\n>> >\n>> > "select * from table1, table2 where table1.field = table2.field and\n>> > condition"\n\n>Now, given the two components, each with very low costs, it chooses to\n>do a sequential scan on the table. I don't get it. I have been\n>having no end of problems with Postgres' optimizer. It just seems to be\n>brain dead at times. It is a huge point of frustration to me. 
I am tied\n>to postgres in my current project, and I fear that I will not be able to\n>implement certain features because of this sort of behavior.\n\nBut but but ...\n\nNot only is the join faster, but it is more readable and cleaner SQL as\nwell. I would never write the query in its first form. I'd change the\nsecond one slightly to "select table1.* from ...", though, since those\nare apparently the only fields you want.\n\nThe optimizer should do a better job on your first query, sure, but why\ndon't you like writing joins?\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n  Nature photos, on-line guides, Pacific Northwest\n  Rare Bird Alert Service and other goodies at\n  http://donb.photo.net.\n", "msg_date": "Thu, 30 Nov 2000 07:24:30 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL 'in' vs join." }, { "msg_contents": "mlw wrote:\n> \n> Why is a "select * from table1 where field in (select field from table2\n> where condition )"\n> \n> is so dramatically bad compared to:\n> \n> "select * from table1, table2 where table1.field = table2.field and\n> condition"\n> \n> I can't understand why the first query isn't optimized better than the\n> second one. The 'in' query forces a full table scan (it shouldn't) and\n> the second one uses the indexes. Does anyone know why?\n\nIt's not done yet, and probably somewhat difficult to do in a general\nfashion\n\n> I know I am no SQL guru, but my gut tells me that the 'in' operator\n> should be far more efficient than a join.\n> \n> Here are the actual queries:\n> \n> cdinfo=# explain select trackid from zsong where muzenbr in (select\n> muzenbr from ztitles where title = 'Mulan') ;\n\ntry\n\nexplain\n select trackid\n from zsong\n where muzenbr in (\n select muzenbr\n from ztitles\n where title = 'Mulan'\n and ztitles.muzenbr=zsong.muzenbr\n );\n\nthis should hint the current optimizer to do the right thing;\n\n-----------------\nHannu\n", "msg_date": "Thu, 30 Nov 2000 15:26:02 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL 'in' vs join." }, { "msg_contents": "mlw wrote:\n> \n> Hannu Krosing wrote:\n> >\n> > mlw wrote:\n> > >\n> > > Why is a "select * from table1 where field in (select field from table2\n> > > where condition )"\n> > >\n> > > is so dramatically bad compared to:\n> > >\n> > > "select * from table1, table2 where table1.field = table2.field and\n> > > condition"\n> > >\n> > > I can't understand why the first query isn't optimized better than the\n> > > second one. The 'in' query forces a full table scan (it shouldn't) and\n> > > the second one uses the indexes. 
Does anyone know why?\n> >\n> > It's not done yet, and probably somewhat difficult to do in a general\n> > fashion\n> >\n> > > I know I am no SQL guru, but my gut tells me that the 'in' operator\n> > > should be far more efficient than a join.\n> > >\n> > > Here are the actual queries:\n> > >\n> > > cdinfo=# explain select trackid from zsong where muzenbr in (select\n> > > muzenbr from ztitles where title = 'Mulan') ;\n> >\n> > try\n> >\n> > explain\n> > select trackid\n> > from zsong\n> > where muzenbr in (\n> > select muzenbr\n> > from ztitles\n> > where title = 'Mulan'\n> > and ztitles.muzenbr=zsong.muzenbr\n> > );\n> >\n> > this should hint the current optimizer to do the right thing;\n> >\n> > -----------------\n> > Hannu\n\ndid you have indexes on both ztitles.muzenbr and zsong.muzenbr ?\n\n--------------\nHannu\n", "msg_date": "Thu, 30 Nov 2000 15:52:39 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL 'in' vs join." }, { "msg_contents": "Don Baccus <[email protected]> writes:\n> The optimizer should do a better job on your first query, sure, but why\n> don't you like writing joins?\n\nThe join wouldn't give quite the same answers. If there are multiple\nrows in table2 matching a particular table1 row, then a join would give\nmultiple copies of the table1 row, whereas the WHERE foo IN (sub-select)\nway would give only one copy. SELECT DISTINCT can't be used to fix\nthis, because that would eliminate legitimate duplicates from identical\ntable1 rows.\n\nNow that the executor understands about multiple join rules (for\nOUTER JOIN support), I've been thinking about inventing a new join rule\nthat says "at most one output row per left-hand row" --- this'd be sort\nof the opposite of the LEFT OUTER JOIN rule, "at least one output row\nper left-hand row" --- and then transforming IN (sub-select) clauses \nthat appear at the top level of WHERE into this kind of join. Won't\nhappen for 7.1, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Nov 2000 10:52:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL 'in' vs join. " }, { "msg_contents": "At 10:52 AM 11/30/00 -0500, Tom Lane wrote:\n>Don Baccus <[email protected]> writes:\n>> The optimizer should do a better job on your first query, sure, but why\n>> don't you like writing joins?\n>\n>The join wouldn't give quite the same answers. If there are multiple\n>rows in table2 matching a particular table1 row, then a join would give\n>multiple copies of the table1 row, whereas the WHERE foo IN (sub-select)\n>way would give only one copy. SELECT DISTINCT can't be used to fix\n>this, because that would eliminate legitimate duplicates from identical\n>table1 rows.\n\nHmm...I was presuming that "field" was a primary key of table1, so\nsuch duplicates wouldn't exist (and SELECT DISTINCT would weed out\nduplicates from table2 if "field" isn't a primary key of table2, i.e.\nif table2 has a many-to-one relationship to table1). 
For many-to-many\nrelationships yes, you're right, the \"in\" version returns a different\nresult.\n\n>Now that the executor understands about multiple join rules (for\n>OUTER JOIN support), I've been thinking about inventing a new join rule\n>that says \"at most one output row per left-hand row\" --- this'd be sort\n>of the opposite of the LEFT OUTER JOIN rule, \"at least one output row\n>per left-hand row\" --- and then transforming IN (sub-select) clauses \n>that appear at the top level of WHERE into this kind of join. Won't\n>happen for 7.1, though.\n\nSame trick could be used for some classes of queries which do a SELECT DISTINCT\non the results of a join, too ...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 30 Nov 2000 11:59:47 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL 'in' vs join. " }, { "msg_contents": "> Don Baccus <[email protected]> writes:\n> > The optimizer should do a better job on your first query, sure, but why\n> > don't you like writing joins?\n> \n> The join wouldn't give quite the same answers. If there are multiple\n> rows in table2 matching a particular table1 row, then a join would give\n> multiple copies of the table1 row, whereas the WHERE foo IN (sub-select)\n> way would give only one copy. SELECT DISTINCT can't be used to fix\n> this, because that would eliminate legitimate duplicates from identical\n> table1 rows.\n> \n> Now that the executor understands about multiple join rules (for\n> OUTER JOIN support), I've been thinking about inventing a new join rule\n> that says \"at most one output row per left-hand row\" --- this'd be sort\n> of the opposite of the LEFT OUTER JOIN rule, \"at least one output row\n> per left-hand row\" --- and then transforming IN (sub-select) clauses \n> that appear at the top level of WHERE into this kind of join. Won't\n> happen for 7.1, though.\n\nOf course, we will have the query tree redesign for 7.2, right, make\nthat unnecessary.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 10 Dec 2000 13:54:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL 'in' vs join." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Now that the executor understands about multiple join rules (for\n>> OUTER JOIN support), I've been thinking about inventing a new join rule\n>> that says \"at most one output row per left-hand row\" --- this'd be sort\n>> of the opposite of the LEFT OUTER JOIN rule, \"at least one output row\n>> per left-hand row\" --- and then transforming IN (sub-select) clauses \n>> that appear at the top level of WHERE into this kind of join. Won't\n>> happen for 7.1, though.\n\n> Of course, we will have the query tree redesign for 7.2, right, make\n> that unnecessary.\n\nNo, I see that as part of the query tree redesign. You'd still need\nexecutor support as above, but what remains to be seen is how hard is it\nfor the planner to do the transformation I so blithely posited ... 
and\ndo we need to change the querytree structure to make it easier?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 10 Dec 2000 14:02:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL 'in' vs join. " } ]
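The FAQ entry Andrew cites recommends rewriting an uncorrelated IN as a correlated EXISTS, which lets the subplan use an index on the correlated column; a sketch using the tables from this thread:

    SELECT trackid
    FROM zsong
    WHERE EXISTS (
        SELECT 1
        FROM ztitles
        WHERE ztitles.title = 'Mulan'
          AND ztitles.muzenbr = zsong.muzenbr
    );

This still scans zsong once, so on 7.0/7.1 the explicit join remains the fastest formulation when duplicates are not a concern; the EXISTS form mainly avoids the Materialize-over-seqscan subplan of the uncorrelated IN.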
[ { "msg_contents": "Am I misunderstanding how to use rule w/conditionals, or is there a \nbug in this?\n\n--\n\nI love to use Pgsql comments, but find the 'comment on field...' \nlanguage a bit of a pain for documenting a large database at the \nlast minute. So, I wrote a query that pulls together all the fields in a \ndatabase, w/descriptions (if any):\n\ncreate view dev_col_comments as \nselect a.oid as att_oid, \n\trelname, \n\tattname, \n\tdescription \nfrom pg_class c, \n\tpg_attribute a left outer join pg_description d on d.objoid=a.oid\nwhere c.oid=a.attrelid\nand (c.relkind='r' or c.relkind='v') and c.relname !~ '^pg_'\nand attname not in ('xmax','xmin','cmax','cmin','ctid','oid','tableoid')\norder by relname, attname;\n\n[This uses pg7.1 syntax; you could rewrite for 7.0 w/o the 'v' for \nviews, and using a union rather than outer join.]\n\nThis works great. Feeling clever, I wrote two rules, so I could \nupdate this and create comments. I need two rules, one if this is an \nexisting description (becoming an update to pg_description), one if \nthis not (becoming an insert to pg_description).\n\ncreate rule dev_ins as on update to dev_col_comments where \nold.description isnull do instead insert into pg_description ( objoid, \ndescription) values (old.att_oid, new.description);\n\ncreate rule dev_upd as on update to dev_col_comments where \nold.description notnull do instead update pg_description set \ndescription=new.description where objoid=old.att_oid;\n\nThis doesn't work: I get a \"cannot update view w/o rule\" error \nmessage, both for fields where description was null, and for fields \nwhere it wasn't null.\n\nIf I take out the \"where old.description isnull\" clause of dev_ins, it \nworks fine--but, only, of course, if I am sure to only pick new \ndescriptions. Or, if I take out the clause in dev_upd, it works too, \nwith the opposite caveat.\n\nIs this a bug? Am I misunderstanding something about the way that \nrule conditions should work? The docs are long but fuzzy on rules \n(they seem to suggest, for instance, that \"create rule foo on \nupdate to table.column\" will work, when this is not implemented yet, \nso perhaps the docs are ahead of the implementation?)\n\nAny help would be great!\n\nI do read the pgsql lists, but always appreciate a cc, so I don't miss \nany comments. TIA.\n\nThanks,\n\n--\nJoel Burton, Director of Information Systems -*- [email protected]\nSupport Center of Washington (www.scw.org)\n", "msg_date": "Wed, 29 Nov 2000 19:00:22 -0500", "msg_from": "\"Joel Burton\" <[email protected]>", "msg_from_op": true, "msg_subject": "Rules with Conditions: Bug, or Misunderstanding" }, { "msg_contents": "\"Joel Burton\" <[email protected]> writes:\n> create rule dev_ins as on update to dev_col_comments where \n> old.description isnull do instead insert into pg_description ( objoid, \n> description) values (old.att_oid, new.description);\n\n> create rule dev_upd as on update to dev_col_comments where \n> old.description notnull do instead update pg_description set \n> description=new.description where objoid=old.att_oid;\n\n> This doesn't work: I get a \"cannot update view w/o rule\" error \n> message, both for fields where description was null, and for fields \n> where it wasn't null.\n\nHm. Perhaps the \"cannot update view\" test is too strict --- it's not\nbright enough to realize that the two rules together cover all cases,\nso it complains that you *might* be trying to update the view. 
As the\ncode stands, you must provide an unconditional DO INSTEAD rule to\nimplement insertion or update of a view.\n\nI'm not sure this is a big problem, though, because the solution is\nsimple: provide an unconditional rule with multiple actions. For\nexample, I think this will work:\n\ncreate rule dev_upd as on update to dev_col_comments do instead\n(\n insert into pg_description (objoid, description)\n select old.att_oid, new.description WHERE old.description isnull;\n update pg_description set description=new.description\n where objoid = old.att_oid;\n)\n\nbut I haven't tried it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Nov 2000 19:42:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Rules with Conditions: Bug, or Misunderstanding " }, { "msg_contents": "On Wednesday 29 November 2000 19:42, Tom Lane wrote:\n>\n> Hm. Perhaps the \"cannot update view\" test is too strict --- it's not\n> bright enough to realize that the two rules together cover all cases,\n> so it complains that you *might* be trying to update the view. As the\n> code stands, you must provide an unconditional DO INSTEAD rule to\n> implement insertion or update of a view.\n\nThe idea was to check just before the update occurred to see if the \ndestination was view. Maybe the test is too high up, before all rewriting\noccurs.\n\nIt is in InitPlan, the same place we check to make sure that we are not \nchanging a sequence or a toast table. (actually initResultRelInfo called from \nInitPlan). I gathered from the backend flowchart that this wasn't called \nuntil all rewriting was done. Was I wrong?\n\nIf all rewriting _is_ done at that point, why is the view still in the \nResultRelInfo ?\n\n-- \nMark Hollomon\n", "msg_date": "Thu, 30 Nov 2000 22:07:22 -0500", "msg_from": "Mark Hollomon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Rules with Conditions: Bug, or Misunderstanding" }, { "msg_contents": "Mark Hollomon <[email protected]> writes:\n> On Wednesday 29 November 2000 19:42, Tom Lane wrote:\n>> Hm. Perhaps the \"cannot update view\" test is too strict --- it's not\n>> bright enough to realize that the two rules together cover all cases,\n>> so it complains that you *might* be trying to update the view. As the\n>> code stands, you must provide an unconditional DO INSTEAD rule to\n>> implement insertion or update of a view.\n\n> It is in InitPlan, the same place we check to make sure that we are\n> not changing a sequence or a toast table. (actually initResultRelInfo\n> called from InitPlan). I gathered from the backend flowchart that this\n> wasn't called until all rewriting was done. Was I wrong?\n\nThe rewriting is done, all right, but what's left afterward still has\nreferences to the view, because each rule is conditional. Essentially,\nthe rewriter output looks like\n\n\t-- rule 1\n\tif (rule1 condition holds)\n\t\t-- rule 2 applied to rule1 success case\n\t\tif (rule2 condition holds)\n\t\t\tapply rule 2's query\n\t\telse\n\t\t\tapply rule 1's query\n\telse\n\t\t-- rule 2 applied to rule1 failure case\n\t\tif (rule2 condition holds)\n\t\t\tapply rule 2's query\n\t\telse\n\t\t\tapply original query\n\nIf the system were capable of determining that either rule1 or rule2\ncondition will always hold, perhaps it could deduce that the original\nquery on the view will never be applied. 
However, I doubt that we\nreally want to let loose an automated theorem prover on the results\nof every rewrite ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Dec 2000 00:33:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Rules with Conditions: Bug, or Misunderstanding " }, { "msg_contents": "On 29 Nov 2000, at 19:42, Tom Lane wrote:\n\n> \"Joel Burton\" <[email protected]> writes:\n> > create rule dev_ins as on update to dev_col_comments where \n> > old.description isnull do instead insert into pg_description (\n> > objoid, description) values (old.att_oid, new.description);\n> \n> > create rule dev_upd as on update to dev_col_comments where \n> > old.description notnull do instead update pg_description set \n> > description=new.description where objoid=old.att_oid;\n> \n> > This doesn't work: I get a \"cannot update view w/o rule\" error\n> > message, both for fields where description was null, and for fields\n> > where it wasn't null.\n> \n\n> [... ] I think this will work:\n> \n> create rule dev_upd as on update to dev_col_comments do instead\n> (\n> insert into pg_description (objoid, description)\n> select old.att_oid, new.description WHERE old.description isnull;\n> update pg_description set description=new.description\n> where objoid = old.att_oid;\n> )\n\nTom --\n\nThanks for the help. I had assumed (wrongly) that one could have \nconditional rules, and only if all the conditions fail, that it would go \nto the \"cannot update view\" end, and didn't realize that there \n*had* to be a single do instead.\n\nIn any event, though, the rule above crashes my backend, as do \nsimpler versions I wrote that try your CREATE RULE DO INSTEAD ( \nINSERT; UPDATE; ) idea.\n\nWhat information can I provide to the list to troubleshoot this?\n\nThanks!\n\n\n--\nJoel Burton, Director of Information Systems -*- [email protected]\nSupport Center of Washington (www.scw.org)\n", "msg_date": "Fri, 1 Dec 2000 16:03:46 -0500", "msg_from": "\"Joel Burton\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rules with Conditions: Bug, or Misunderstanding " }, { "msg_contents": "\"Joel Burton\" <[email protected]> writes:\n> In any event, though, the rule above crashes my backend, as do \n> simpler versions I wrote that try your CREATE RULE DO INSTEAD ( \n> INSERT; UPDATE; ) idea.\n\nUgh :-(\n\n> What information can I provide to the list to troubleshoot this?\n\nA gdb backtrace from the corefile, and/or a simple example to replicate\nthe problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Dec 2000 17:07:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rules with Conditions: Bug, or Misunderstanding " }, { "msg_contents": "On Friday 01 December 2000 00:33, Tom Lane wrote:\n> The rewriting is done, all right, but what's left afterward still has\n> references to the view, because each rule is conditional. 
Essentially,\n> the rewriter output looks like\n>\n> \t-- rule 1\n> \tif (rule1 condition holds)\n> \t\t-- rule 2 applied to rule1 success case\n> \t\tif (rule2 condition holds)\n> \t\t\tapply rule 2's query\n> \t\telse\n> \t\t\tapply rule 1's query\n> \telse\n> \t\t-- rule 2 applied to rule1 failure case\n> \t\tif (rule2 condition holds)\n> \t\t\tapply rule 2's query\n> \t\telse\n> \t\t\tapply original query\n>\n> If the system were capable of determining that either rule1 or rule2\n> condition will always hold, perhaps it could deduce that the original\n> query on the view will never be applied. However, I doubt that we\n> really want to let loose an automated theorem prover on the results\n> of every rewrite ...\n\nI think it would be better to move the test further down, to just before we \nactually try to do the update/insert. Maybe into the heap access routines as \nsuggested by Andreas.\n\n\n-- \nMark Hollomon\n", "msg_date": "Fri, 1 Dec 2000 21:47:51 -0500", "msg_from": "Mark Hollomon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Rules with Conditions: Bug, or Misunderstanding" }, { "msg_contents": "Mark Hollomon <[email protected]> writes:\n> I think it would be better to move the test further down, to just before we \n> actually try to do the update/insert. Maybe into the heap access routines as \n> suggested by Andreas.\n\nI'm worried about whether it'll be practical to generate a good error\nmessage from that low a level.\n\nLooking at it from the DBA's viewpoint rather than implementation\ndetails, I haven't seen a good reason *why* we should support\nconditional rules for views, as opposed to an unconditional rule with\nmultiple actions. Seems to me that writing independent rules that you\nhope will cover all cases is a great way to build an unreliable system.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Dec 2000 00:18:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Rules with Conditions: Bug, or Misunderstanding " }, { "msg_contents": "Tom Lane wrote:\n> \"Joel Burton\" <[email protected]> writes:\n> > create rule dev_ins as on update to dev_col_comments where\n> > old.description isnull do instead insert into pg_description ( objoid,\n> > description) values (old.att_oid, new.description);\n>\n> > create rule dev_upd as on update to dev_col_comments where\n> > old.description notnull do instead update pg_description set\n> > description=new.description where objoid=old.att_oid;\n>\n> > This doesn't work: I get a \"cannot update view w/o rule\" error\n> > message, both for fields where description was null, and for fields\n> > where it wasn't null.\n>\n> Hm. Perhaps the \"cannot update view\" test is too strict --- it's not\n> bright enough to realize that the two rules together cover all cases,\n> so it complains that you *might* be trying to update the view. As the\n> code stands, you must provide an unconditional DO INSTEAD rule to\n> implement insertion or update of a view.\n\n Disagree.\n\n A conditional rule splits the command into two, one with the\n rules action and the condition added, one which is the\n original statement plus the negated condition. So there are\n cases left where an INSERT can happen to the view relation\n and it's the job of this test to prevent it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n\n", "msg_date": "Sun, 3 Dec 2000 18:17:48 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rules with Conditions: Bug, or Misunderstanding" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> Tom Lane wrote:\n>> Hm. Perhaps the \"cannot update view\" test is too strict --- it's not\n>> bright enough to realize that the two rules together cover all cases,\n>> so it complains that you *might* be trying to update the view. As the\n>> code stands, you must provide an unconditional DO INSTEAD rule to\n>> implement insertion or update of a view.\n\n> Disagree.\n\n> A conditional rule splits the command into two, one with the\n> rules action and the condition added, one which is the\n> original statement plus the negated condition. So there are\n> cases left where an INSERT can happen to the view relation\n> and it's the job of this test to prevent it.\n\nWell, in that case the present code is broken, because it's going to\nspit up if any part of the rewritten query shows the view as result\nrelation (cf. QueryRewrite() ... note that this logic no longer looks\nmuch like it did the last time you touched it ;-)). You'd have to\nconvert the existing rewrite-time test into a runtime test in order to\nsee whether the query actually tries to insert any tuples into the view.\n\nWhile that is maybe reasonable for insertions, it's totally silly\nfor update and delete queries. Since the view itself can never contain\nany tuples to be updated or deleted, a runtime test that errors out\nwhen one attempts to update or delete such a tuple could never fire.\nI don't think that means that we shouldn't complain about an update\nor delete on a view.\n\nI think the test is best left as-is...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Dec 2000 19:04:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rules with Conditions: Bug, or Misunderstanding " }, { "msg_contents": "Tom Lane wrote:\n> Jan Wieck <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Hm. Perhaps the \"cannot update view\" test is too strict --- it's not\n> >> bright enough to realize that the two rules together cover all cases,\n> >> so it complains that you *might* be trying to update the view. As the\n> >> code stands, you must provide an unconditional DO INSTEAD rule to\n> >> implement insertion or update of a view.\n>\n> > Disagree.\n>\n> > A conditional rule splits the command into two, one with the\n> > rules action and the condition added, one which is the\n> > original statement plus the negated condition. So there are\n> > cases left where an INSERT can happen to the view relation\n> > and it's the job of this test to prevent it.\n>\n> Well, in that case the present code is broken, because it's going to\n> spit up if any part of the rewritten query shows the view as result\n> relation (cf. QueryRewrite() ... note that this logic no longer looks\n> much like it did the last time you touched it ;-)). You'd have to\n> convert the existing rewrite-time test into a runtime test in order to\n> see whether the query actually tries to insert any tuples into the view.\n\n Yepp.\n\n> While that is maybe reasonable for insertions, it's totally silly\n> for update and delete queries. 
Since the view itself can never contain\n> any tuples to be updated or deleted, a runtime test that errors out\n> when one attempts to update or delete such a tuple could never fire.\n> I don't think that means that we shouldn't complain about an update\n> or delete on a view.\n>\n> I think the test is best left as-is...\n\n Since conditional rules aren't any better compared to an\n unconditional multi-action instead rule where the single\n actions have all the different conditions, let's leave it as\n is and insist on one unconditional instead rule.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Mon, 4 Dec 2000 14:04:51 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Rules with Conditions: Bug, or Misunderstanding" } ]
[ { "msg_contents": "> Would it be OK now to eliminate the separate xlog_bufmgr.c and\n> xlog_localbuf.c files, folding that code back into bufmgr.c and\n> localbuf.c? It's a real pain to have to make parallel updates\n> in two copies of that code...\n\nYes, it's OK now. I'll remove #ifdef XLOG in other files soon.\n\nVadim\n", "msg_date": "Wed, 29 Nov 2000 17:35:23 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: xlog_bufmgr" }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> Yes, it's OK now. I'll remove #ifdef XLOG in other files soon.\n\nOK. Shall I do it, or do you want to?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Nov 2000 21:12:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: xlog_bufmgr " } ]
[ { "msg_contents": "is done. Initdb is required, sorry.\n\nBTW, why SETVAL is called in pg_dump output instead of\nif (called) NEXTVAL? SETVAL is disallowed for sequences\nwith cache_value > 1 - ie we can't dump such sequences now.\n\nVadim\n", "msg_date": "Wed, 29 Nov 2000 17:39:35 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Logging for sequences" } ]
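For context, the two restore styles Vadim contrasts, sketched for a hypothetical sequence: the setval() call that pg_dump roughly emits today is rejected by do_setval when the sequence was created with a cache setting above one, while a nextval-based restore works regardless of cache:

    -- Roughly what pg_dump emits today (fails if cache_value > 1):
    CREATE SEQUENCE myseq START 42;
    SELECT setval('myseq', 42);

    -- The alternative: recreate at the stored value and advance once,
    -- but only if the old sequence had already been called:
    CREATE SEQUENCE myseq START 42;
    SELECT nextval('myseq');

Either way, the next nextval() on the restored sequence returns 43.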
[ { "msg_contents": "Would it be OK now to eliminate the separate xlog_bufmgr.c and\nxlog_localbuf.c files, folding that code back into bufmgr.c and\nlocalbuf.c? It's a real pain to have to make parallel updates\nin two copies of that code...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Nov 2000 20:43:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "xlog_bufmgr" } ]
[ { "msg_contents": "> > Yes, it's OK now. I'll remove #ifdef XLOG in other files soon.\n> \n> OK. Shall I do it, or do you want to?\n\nIf you have nothing to change in bufmgr now then I'll\ndo it myself today/tomorrow.\n\nVadim\n", "msg_date": "Wed, 29 Nov 2000 18:00:00 -0800", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: RE: xlog_bufmgr " }, { "msg_contents": ">>>> Yes, it's OK now. I'll remove #ifdef XLOG in other files soon.\n>> \n>> OK. Shall I do it, or do you want to?\n\n> If you have nothing to change in bufmgr now then I'll\n> do it myself today/tomorrow.\n\nOK, I didn't have any other reason to touch those files now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Nov 2000 21:42:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: xlog_bufmgr " } ]
[ { "msg_contents": "At 17:39 29/11/00 -0800, Mikheev, Vadim wrote:\n>is done. Initdb is required, sorry.\n>\n>BTW, why SETVAL is called in pg_dump output instead of\n>if (called) NEXTVAL? SETVAL is disallowed for sequences\n>with cache_value > 1 - ie we can't dump such sequences now.\n\nCan someone explain this to me? It's just a little over my head...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 30 Nov 2000 13:23:10 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Logging for sequences" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> At 17:39 29/11/00 -0800, Mikheev, Vadim wrote:\n>> BTW, why SETVAL is called in pg_dump output instead of\n>> if (called) NEXTVAL? SETVAL is disallowed for sequences\n>> with cache_value > 1 - ie we can't dump such sequences now.\n\n> Can someone explain this to me? It's just a little over my head...\n\nHe's talking about the error check at the head of do_setval:\n\n if (seq->cache_value != 1)\n elog(ERROR, \"%s.setval: can't set value of sequence %s, cache != 1\",\n seqname, seqname);\n\nBecause of this, pg_dump's script will fail to set the sequence value\ncorrectly if the sequence was created with a cache setting larger than 1.\n\nVadim, Philip changed that part of pg_dump on my advice. The idea was\nto try to do the right thing for sequences when loading schema only or\ndata only. Analogously to loading data into a pre-existing table, we\nfelt that a data dump ought to be able to restore the current state of\nan already-existing sequence object. Hence it should use setval().\nBut I overlooked the cache issue.\n\nPhilip, the reasoning behind that error check is that if cache_value >\n1, then the behavior of the setval() may not be what the user expects.\nIn particular, other backends may have pre-cached sequence values, which\ntheir nextval() calls will continue to dole out even after the setval()\ncaller thinks he's changed the sequence's value.\n\nThis error check is probably good in the general case, but I think it's\nirrelevant for typical uses of pg_dump: there won't *be* any other\nbackends with cached values of the sequence object. Also, the behavior\nthat the error check is trying to prevent isn't exactly catastrophic,\nit's just potentially confusing to the user. So I don't want to let\nthe check stand in the way of making pg_dump do something reasonable\nwith sequences.\n\nMy inclination is to leave pg_dump as it stands, and change do_setval's\nerror check. We could rip out the check entirely, or we could modify\nthe code so that a setval() is allowed for a sequence with cache > 1\nonly if it's the new three-parameter form of setval(). That would allow\npg_dump to do its thing without changing the behavior for existing\napplications. 
Also, we can certainly make setval() flush any cached\nnextval assignments that the current backend is holding, even though we\nhave no easy way to clean out cached values in other backends.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Dec 2000 16:12:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logging for sequences " } ]
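To make the three-parameter form concrete: it carries the is_called flag explicitly, so a dump script can restore both the stored value and whether that value has already been handed out (sequence name hypothetical):

    -- last_value = 42, and 42 has NOT yet been returned by nextval():
    SELECT setval('myseq', 42, false);

    -- last_value = 42, already dispensed; the next nextval() gives 43:
    SELECT setval('myseq', 42, true);

Under Tom's proposal, only this three-parameter form would be allowed to bypass the cache > 1 error check.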
[ { "msg_contents": "Dear Sir,\nthanks for the reply.\nI tried select now()\nbut it gives the following error\nsyntax error near unexpected token `select.\n\nTo be specific about my problem, I want to compare one max date with the\ncurrent date in my Java servlet.\nSince nested queries are not possible, how do I achieve my goal?\n\nMy present query doesn't work and is like this\n\nSelect months_between(('select max(h_date ) from query where\nemail="[email protected]"),(select sysdate from dual)) from query\n\nWhat is the SQL query that can achieve the same effect?\nWith Best Regards\nSanjayArora\n\n\n\n", "msg_date": "Thu, 30 Nov 2000 10:51:21 +0000", "msg_from": "Manish Vig <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "try this\n\nSELECT age(max(h_date), now()) FROM table WHERE email='hawks@vsnl';\n\nMichael Fork - CCNA - MCP - A+\nNetwork Support - Toledo Internet Access - Toledo Ohio\n\nOn Thu, 30 Nov 2000, Manish Vig wrote:\n\n> Dear Sir,\n> thanks for the reply.\n> I tried select now()\n> but it gives the following error\n> syntax error near unexpected token `select.\n> \n> To be specific about my problem, I want to compare one max date with the\n> current date in my Java servlet.\n> Since nested queries are not possible, how do I achieve my goal?\n> \n> My present query doesn't work and is like this\n> \n> Select months_between(('select max(h_date ) from query where\n> email="[email protected]"),(select sysdate from dual)) from query\n> \n> What is the SQL query that can achieve the same effect?\n> With Best Regards\n> SanjayArora\n> \n> \n> \n", "msg_date": "Thu, 30 Nov 2000 08:38:32 -0500 (EST)", "msg_from": "Michael Fork <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " } ]
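If what is wanted is specifically a month count, as with Oracle's months_between(), the interval that age() returns can be decomposed with date_part(); a sketch against the question's table (table and column names as given there):

    SELECT date_part('year', age(now(), max(h_date))) * 12
         + date_part('month', age(now(), max(h_date))) AS months_between
    FROM query
    WHERE email = '[email protected]';

This counts whole months; the day component of the interval is ignored.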
[ { "msg_contents": "When Postgres is fast, it is really fast. I love it. My biggest problem\nis when/how it chooses the best path; it seems to me that relatively few\nrecords with a high duplication destroy performance. I can't stress\nenough that this is a serious problem in the real world.\n\nTake these two queries:\n\ncdinfo=# explain select trackid, song, title from zsong, ztitles where\nztitles.performer2 like 'Van Halen' and ztitles.muzenbr= zsong.muzenbr\nand contains(song, 'panama', 10)>0;\nNOTICE: QUERY PLAN:\n \nMerge Join (cost=6012.55..182151.93 rows=10902 width=36)\n  ->  Sort (cost=6012.55..6012.55 rows=3130 width=16)\n    ->  Index Scan using ztitles_performer2_ndx on ztitles \n(cost=0.00..5830.80 rows=3130 width=16)\n  ->  Index Scan using zsong_muzenbr_ndx on zsong (cost=0.00..166961.86\nrows=731071 width=20)\n\ncdinfo=# explain select trackid, song, title from zsong, ztitles where\nztitles.title like 'Van Halen' and ztitles.muzenbr = zsong.muzenbr and\ncontains(song, 'panama', 10)>0;\nNOTICE: QUERY PLAN:\n \nNested Loop (cost=0.00..93.45 rows=4 width=36)\n  ->  Index Scan using ztitles_title_ndx on ztitles (cost=0.00..7.08\nrows=1 width=16)\n  ->  Index Scan using zsong_muzenbr_ndx on zsong (cost=0.00..78.43\nrows=7 width=20) \n\nThey are fundamentally the same query, each with an index, each doing\nabout the same thing. Except that the performer2 field has a high number\nof duplicate records a la "Various Artists".\n\nNow we have had some small debates about how to fix this, and perhaps I\nam oversimplifying it, but the current statistics are broken; they do\nnot work reliably and produce unreliable results. I think this is a must\nfor 7.1. A simple hack, such as discarding the upper and lower 5-10%,\nshould be able to fix this behavior, without too many side effects (if\nany). While I agree it is not the "right" way to do something, it would\nbe a "better" way of doing something that is currently wrong.\n\nWith the exception of this problem, I love postgres, but this problem\nreally goes a long way to make it look REAL bad.\n\nBTW anyone know a way around this?\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Thu, 30 Nov 2000 07:26:27 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "Odd select behavior -- statistics, redux (7.0.x and devel)" } ]
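Until the statistics themselves are fixed, one blunt session-level workaround is to steer the planner away from the join method it mis-costs. This is a sketch, not a recommendation, since the SET affects every subsequent plan in the session:

    SET enable_mergejoin = off;
    SET enable_hashjoin = off;
    -- The planner now falls back to the nested-loop plan that the
    -- selective 'title' query already gets:
    SELECT trackid, song, title
    FROM zsong, ztitles
    WHERE ztitles.performer2 LIKE 'Van Halen'
      AND ztitles.muzenbr = zsong.muzenbr
      AND contains(song, 'panama', 10) > 0;
    SET enable_mergejoin = on;
    SET enable_hashjoin = on;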
[ { "msg_contents": "thanks for the infor commented out define complex macro poof compiles :)\ninitdb works :)\ncreateuser, createdb fail :( no entry in pg_hba.conf, have looked at it\nlooks like the standard default one on my linux box has entries for\nlocal and for host 127.0.0.1\ni would search the archives but when i tried to do that got page not\nfound on one of the groups, and not very meaningful, recent entries on\nkey word sco\ni am assuming a lot of these problems are sco releated, because havent\nhad any problems running apache, php, postgres on linux, if i cant get\nthis running on sco my client gets my laptop running linux, untill i can\ncome up with apache + php + sql database on sco :(\n\nwill try more searchs but its deault config, and it is being parsed,\nhacked it once mad typo gives me syntax error, removed hacks no syntax\nerror but access denied\n\nsorry to be such a pest, for being on the mailing list for only a few\nhours now, but the response and help ive got has been great, god i love\nopen source code\n\nthanks in advance Arno\n-- \nMy opinions are my own and not that of my employer even if I am self\nemployed\n", "msg_date": "Thu, 30 Nov 2000 10:07:06 -0600", "msg_from": "\"Arno A. Karner\" <[email protected]>", "msg_from_op": true, "msg_subject": "more fun with sco" }, { "msg_contents": "* Arno A. Karner <[email protected]> [001130 10:09]:\n> thanks for the infor commented out define complex macro poof compiles :)\n> initdb works :)\n> createuser, createdb fail :( no entry in pg_hba.conf, have looked at it\n> looks like the standard default one on my linux box has entries for\n> local and for host 127.0.0.1\n> i would search the archives but when i tried to do that got page not\n> found on one of the groups, and not very meaningful, recent entries on\n> key word sco\n> i am assuming a lot of these problems are sco releated, because havent\n> had any problems running apache, php, postgres on linux, if i cant get\n> this running on sco my client gets my laptop running linux, untill i can\n> come up with apache + php + sql database on sco :(\n> \n> will try more searchs but its deault config, and it is being parsed,\n> hacked it once mad typo gives me syntax error, removed hacks no syntax\n> error but access denied\n> \n> sorry to be such a pest, for being on the mailing list for only a few\n> hours now, but the response and help ive got has been great, god i love\n> open source code\n> \n> thanks in advance Arno\n\nI assume 7.0.3 of PG...\n\nApply the following patch in src/backend/libpq:\n\n*** pqcomm.c.old\tThu May 25 20:26:19 2000\n--- pqcomm.c\tSun Nov 12 12:03:25 2000\n***************\n*** 354,359 ****\n--- 354,361 ----\n \t\tperror(\"postmaster: StreamConnection: accept\");\n \t\treturn STATUS_ERROR;\n \t}\n+ \tif (port->raddr.sa.sa_family == 0)\n+ \t\tport->raddr.sa.sa_family = AF_UNIX;\n \n \t/* fill in the server (local) address */\n \taddrlen = sizeof(port->laddr);\n> -- \n> My opinions are my own and not that of my employer even if I am self\n> employed\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Thu, 30 Nov 2000 10:13:13 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more fun with sco" } ]
[ { "msg_contents": "eject\n\n\n\n--MIME Multi-part separator--\n\n", "msg_date": "Fri, 1 Dec 2000 09:10:00 +0900 (KST)", "msg_from": "������������ <[email protected]>", "msg_from_op": true, "msg_subject": "eject " } ]
[ { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Tom, can you refresh my memory on the preferred way to define\n> \"commutative operators\" for operators with mixed input types? For\n> example, I want to define a new operator to add an interval to a time.\n> Do I need to fully implement the commutative function which adds a time\n> to an interval, or is there another way?\n\nNo other way. The commutative-operator stuff doesn't exist to make\nlife easy for the parser; it exists to make life easy for the executor,\nspecifically for the indexscan machinery, which thinks that indexscan\nqualifiers are always \"indexedvar OP constant\" and never \"constant OP\nindexedvar\". If you try to flip the order to make life easy in the\nparser, the planner will likely just flip it back again.\n\n> I used to have a cheat interpretation of commutation during operator\n> matching in the parser (which allowed a mixed-type operator to refer to\n> itself as its commutator, and the parser would then flip the arguments\n> around to match up), but I recall that you took this out to reinforce\n> the purity of the interpretation of commutation in the table flags.\n\nIIRC, I took it out because the oprsanity regress test was spitting up\non it. But that's just the tip of the iceberg. You could put it back\nif you want to fix the executor, the planner's knowledge that the\nexecutor wants \"var OP constant\", and oprsanity.sql. And maybe some\nother places :-(\n\nHaving said all that, I'm not sure that the planner is really all that\nsmart about reversing \"const OP var\" into \"var OP const\" for an index\nscan when the var and const are of different types. And this is all\n*totally* irrelevant for operators that are not index-related, ie,\nreferenced in pg_amop. If you're concerned about a \"time + interval =>\ntime\" operator, I suggest just not marking it commutative for now.\nThere's no value in making a commutator entry for a non-indexable\noperator.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Dec 2000 02:16:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Operators and commutation " }, { "msg_contents": "Tom, can you refresh my memory on the preferred way to define\n\"commutative operators\" for operators with mixed input types? For\nexample, I want to define a new operator to add an interval to a time.\nDo I need to fully implement the commutative function which adds a time\nto an interval, or is there another way?\n\nI used to have a cheat interpretation of commutation during operator\nmatching in the parser (which allowed a mixed-type operator to refer to\nitself as its commutator, and the parser would then flip the arguments\naround to match up), but I recall that you took this out to reinforce\nthe purity of the interpretation of commutation in the table flags. So\nwhat is the best way to do this now?\n\n - Thomas\n", "msg_date": "Fri, 01 Dec 2000 07:19:02 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Operators and commutation" } ]
[ { "msg_contents": "\n> > > No, WAL does help, cause you can then pull in your last dump and recover\n> > > up to the moment that power cable was pulled out of the wall ...\n> > \n> > False, on so many counts I can't list them all.\n> \n> would love to hear them ... I'm always opening to having my\n> misunderstandings corrected ...\n\nOnly what has been transferred off site can be considered safe.\nBut: all the WAL improvements serve to reduce the probability that\nyou 1. need to restore and 2. need to restore from offsite backups.\n\nIf you need to restore from offsite backup you loose transactions\nunless you transfer the WAL synchronously with every commit. \n\nAndreas\n", "msg_date": "Fri, 1 Dec 2000 10:01:15 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: beta testing version" }, { "msg_contents": "On Fri, Dec 01, 2000 at 10:01:15AM +0100, Zeugswetter Andreas SB wrote:\n> \n> > > > No, WAL does help, cause you can then pull in your last dump and recover\n> > > > up to the moment that power cable was pulled out of the wall ...\n> > > \n> > > False, on so many counts I can't list them all.\n> > \n> > would love to hear them ... I'm always opening to having my\n> > misunderstandings corrected ...\n> \n> Only what has been transferred off site can be considered safe.\n> But: all the WAL improvements serve to reduce the probability that\n> you 1. need to restore and 2. need to restore from offsite backups.\n> \n> If you need to restore from offsite backup you loose transactions\n> unless you transfer the WAL synchronously with every commit. \n\nCurrently the only way to avoid losing those transactions is by \nreplicating transactions at the application layer. That is, the\napplication talks to two different database instances, and enters\ntransactions into both. That's pretty hard to retrofit into an\nexisting application, so you'd really rather have replication in\nthe database. Of course, that's something PostgreSQL, Inc. is also \nworking on.\n\nNathan Myers\[email protected]\n", "msg_date": "Fri, 1 Dec 2000 11:09:27 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 11:09 AM 12/1/00 -0800, Nathan Myers wrote:\n>On Fri, Dec 01, 2000 at 10:01:15AM +0100, Zeugswetter Andreas SB wrote:\n\n>> If you need to restore from offsite backup you loose transactions\n>> unless you transfer the WAL synchronously with every commit. \n\n>Currently the only way to avoid losing those transactions is by \n>replicating transactions at the application layer. That is, the\n>application talks to two different database instances, and enters\n>transactions into both. That's pretty hard to retrofit into an\n>existing application, so you'd really rather have replication in\n>the database. Of course, that's something PostgreSQL, Inc. is also \n>working on.\n\nRecovery alone isn't quite that difficult. You don't need to instantiate\nyour database instance until you need to apply the archived transactions,\ni.e. after catastrophic failure destroys your db server.\n\nYou need to do two things:\n\n1. Transmit a consistent (known-state) snapshot of the database offsite.\n2. Synchronously tranfer the WAL as part of every commit (question, do\n wait to log a \"commit\" locally until after the remote site acks that\n it got the WAL?)\n\nThen you take a new machine, build a database out of the snapshot, and\napply the archived redo logs and off you go. 
If you get tired of saving\noodles of redo archives, you make a new snapshot and accumulate the\nWAL from that point forward.\n\nOf course, that's not a fast failover solution. The scenario you describe\nleads to being able to quickly switch over to a backup server when the\nprimary server fails. Much better for 24/7/365-style computing.\n\nExactly what is PostgreSQL, Inc doing in this area? I've not seen \ndiscussions about it here, and the two of the three most active developers\n(Jan and Tom) work for Great Bridge, not PostgreSQL, Inc...\n\nI should think Vadim should play a large role in any effort to add WAL-based\nreplication to Postgres.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 01 Dec 2000 11:48:23 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Fri, Dec 01, 2000 at 11:48:23AM -0800, Don Baccus wrote:\n> At 11:09 AM 12/1/00 -0800, Nathan Myers wrote:\n> >On Fri, Dec 01, 2000 at 10:01:15AM +0100, Zeugswetter Andreas SB wrote:\n> \n> >> If you need to restore from offsite backup you loose transactions\n> >> unless you transfer the WAL synchronously with every commit. \n> \n> >Currently the only way to avoid losing those transactions is by \n> >replicating transactions at the application layer. That is, the\n> >application talks to two different database instances, and enters\n> >transactions into both. That's pretty hard to retrofit into an\n> >existing application, so you'd really rather have replication in\n> >the database. Of course, that's something PostgreSQL, Inc. is also \n> >working on.\n> \n> Recovery alone isn't quite that difficult. You don't need to instantiate\n> your database instance until you need to apply the archived transactions,\n> i.e. after catastrophic failure destroys your db server.\n\nTrue, it's sufficient for the application just to log the text of \nits updating transactions off-site. Then, to recover, instantiate \na database from a backup and have the application re-run its \ntransactions. \n\n> You need to do two things:\n\n(Remember, we're talking about what you could do *now*, with 7.1.\nPresumably with 7.2 other options will open.)\n \n> 1. Transmit a consistent (known-state) snapshot of the database offsite.\n>\n> 2. Synchronously tranfer the WAL as part of every commit (question, do\n> wait to log a \"commit\" locally until after the remote site acks that\n> it got the WAL?)\n> \n> Then you take a new machine, build a database out of the snapshot, and\n> apply the archived redo logs and off you go. 
If you get tired of saving\n> oodles of redo archives, you make a new snapshot and accumulate the\n> WAL from that point forward.\n \nI don't know of any way to synchronously transfer the WAL, currently.\n\nAnyway, I would expect doing it to interfere seriously with performance.\nThe \"wait to log a 'commit' locally until after the remote site acks that\nit got the WAL\" is (akin to) the familiar two-phase commit.\n\nNathan Myers\[email protected]\n", "msg_date": "Fri, 1 Dec 2000 12:56:06 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 12:56 PM 12/1/00 -0800, Nathan Myers wrote:\n\n>(Remember, we're talking about what you could do *now*, with 7.1.\n>Presumably with 7.2 other options will open.)\n\nMaybe *you* are :) Seriously, I'm thinking out loud about future\npossibilities. Putting a lot of work into building up a temporary\nsolution on top of 7.1 doesn't make a lot of sense, anyone wanting\nto work on such things ought to think about 7.2, which presumably will\nbeta sometime mid-2001 or so???\n\nAnd I don't think there are 7.1 hacks that are simple ... could be\nwrong, though.\n\n>I don't know of any way to synchronously transfer the WAL, currently.\n\nNope.\n\n>Anyway, I would expect doing it to interfere seriously with performance.\n\nYep. Anyone here have experience with replication and Oracle or others?\nI've heard from one source that setting it up reliabily in Oracle and\ngetting the switch from the dead to the backup server working properly was\nsomething of a DBA nightmare, but that's true of just about anything in\nOracle. Once it was up, it worked reliably, though (also typical\nof Oracle).\n\n>The \"wait to log a 'commit' locally until after the remote site acks that\n>it got the WAL\" is (akin to) the familiar two-phase commit.\n\nRight.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 01 Dec 2000 13:28:10 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Don Baccus writes:\n\n> Exactly what is PostgreSQL, Inc doing in this area?\n\nGood question... See http://www.erserver.com/.\n\n> I've not seen discussions about it here, and the two of the three most\n> active developers (Jan and Tom) work for Great Bridge, not PostgreSQL,\n> Inc...\n\nVadim Mikheev and Thomas Lockhart work for PostgreSQL, Inc., at least in\nsome form or another. Which *might* be construed as a reason for their\nperceived inactivity.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sat, 2 Dec 2000 17:42:38 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 05:42 PM 12/2/00 +0100, Peter Eisentraut wrote:\n>Don Baccus writes:\n>\n>> Exactly what is PostgreSQL, Inc doing in this area?\n>\n>Good question... See http://www.erserver.com/.\n\n\"Advanced Replication and Distributed Information capabilities are also under development to meet specific\n business and competitive requirements for both PostgreSQL, Inc. and clients. Several of these enhanced\n PostgreSQL, Inc. 
developments may remain proprietary for up to 24 months, with availability limited to\n clients and partners, in order to assist us in recovering development costs and continue to provide funding\n for our other Open Source contributions. \"\n\nBoy, I can just imagine the uproar this statement will cause on Slashdot when\nthe world finds out about it.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 02 Dec 2000 11:31:37 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sat, Dec 02, 2000 at 11:31:37AM -0800, Don Baccus wrote:\n> At 05:42 PM 12/2/00 +0100, Peter Eisentraut wrote:\n> >Don Baccus writes:\n> >\n> >> Exactly what is PostgreSQL, Inc doing in this area?\n> >\n> >Good question... See http://www.erserver.com/.\n> \n<snip>\n> \n> Boy, I can just imagine the uproar this statement will cause on Slashdot when\n> the world finds out about it.\n> \n\nThat one doesn't worry me us much as this quote from the press release at\n\nhttp://www.pgsql.com/press/PR_5.html\n\n\"We expect to have the source code tested and ready to contribute to\nthe open source community before the middle of October. Until that time\nwe are considering requests from a number of development companies and\nventure capital groups to join us in this process.\"\n\nWhere's the damn core code? I've seen a number of examples already of\npeople asking about remote access/replication function, with an eye\ntoward implementing it, and being told \"PostgreSQL, Inc. is working\non that\". It's almost Microsoftesque: preannounce future functionality\nsuppressing the competition.\n\nI realize this is probably just the typical deadline slip that we see\non the public releases of pgsql itself, not a silent retraction of the\npromise to release the code (especially since some of the same core\npeople are involved), but there is a difference: if I absolutely need\nsomething that's only in CVS right now, I can bite the bullet and use\na snapshot server. With erserver, I'm stuck sitting on my hands, with a\npromise of future functionality. Well, not really sitting on my hands:\nworking on other tasks, with the assumption that erserver will be there\nsoon. I'd rather not roll my own in an incompatable way, and have to\nport or redo the custom parts.\n\nSo, now I'm going into a couple critical, funding decision making\nmeetings in the next few weeks. I was planning on being able to promise\ncertain systems with concrete knowledge of what I will and won't be\nable to provide, and how much custom coding will be needed. Now, If the\nschedsule slips much more, I won't. It's even possible that the erserver's\nimplementation won't fit my needs at all, and I'll be back rolling my own.\n\nI realize this sounds a bit ungrateful: they're giving away the code,\nafter all, and potentially saving my a lot of work.\n\nIt's just the contrast between the really open work on the core server,\nand the lack of a peep when the promised deadlines have rolled past that\ngets under my skin.\n\nI'd be really happy with someone reiterating the commitment to an\nopen release, and letting us all know how badly the schedule has\nslipped. Remember, we're all here to help! 
Get everyone stomping bugs\nin code you're going to release soon anyway, and concentrate on the\nquasi-propriatary extensions.\n\nRoss\n", "msg_date": "Sat, 2 Dec 2000 15:51:15 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "\nOn Sat, 2 Dec 2000, Don Baccus wrote:\n\n...\n> Will Great Bridge step to the plate and fund a truly open source alternative,\n> leaving us with a potential code fork? If IB gets its political problems\n> under control and developers rally around it, two years is going to be a\n> long time to just sit back and wait for PG, Inc to release eRServer.\n\n I doubt that. There is an IB (Interbase) replication option today, but\nyou must purchase it. That isn't so bad actually. PostgreSQL looks to be\ngoing that way too: base functionality is open source, periphial\ncompanies make money selling extensions.\n\n Besides simple master-slave replication is old news anyhow, and not\nterribly useful. Products like FrontBase (www.frontbase.com) have full\nshared-nothing cluster support too (FrontBase is commerical). Clustering\nis a much better solution for redundancy purposes that replication.\n\n\nTom\n\n", "msg_date": "Sat, 2 Dec 2000 13:52:27 -0800 (PST)", "msg_from": "Tom Samplonius <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 03:51 PM 12/2/00 -0600, Ross J. Reedstrom wrote:\n\n>\"We expect to have the source code tested and ready to contribute to\n>the open source community before the middle of October. Until that time\n>we are considering requests from a number of development companies and\n>venture capital groups to join us in this process.\"\n>\n>Where's the damn core code? I've seen a number of examples already of\n>people asking about remote access/replication function, with an eye\n>toward implementing it, and being told \"PostgreSQL, Inc. is working\n>on that\". It's almost Microsoftesque: preannounce future functionality\n>suppressing the competition.\n\nWell, this is just all 'round a bad precedent and an unwelcome path\nfor PostgreSQL, Inc to embark upon.\n\nThey've also embarked on one fully proprietary product (built on PG),\nwhich means they're not an Open Source company, just a sometimes Open\nSource company.\n\nIt's a bit ironic to learn about this on the same day I learned that\nSolaris 8 is being made available in source form. Sun's slowly \"getting\nit\" and moving glacially towards Open Source, while PostgreSQL, Inc.\nseems to be drifting in the opposite direction.\n\n>if I absolutely need\n>something that's only in CVS right now, I can bite the bullet and use\n>a snapshot server. \n\nThis work might be released as Open Source, but it isn't an open development\nscenario. The core work's not available for public scrutiny, and the details\nof what they're actually up don't appear to be public either.\n\nOK, they're probably funding Vadim's work on WAL, so the idictment's probably\nnot 100% accurate - but I don't know that. \n\n>I'd be really happy with someone reiterating the commitment to an\n>open release, and letting us all know how badly the schedule has\n>slipped. Remember, we're all here to help! 
Get everyone stomping bugs\n>in code you're going to release soon anyway, and concentrate on the\n>quasi-propriatary extensions.\n\nWhich makes me wonder, is Vadim's time going to be eaten up by working\non these quasi-proprietary extensions that the rest of us won't get\nfor two years unless we become customers of Postgres, Inc?\n\nWill Great Bridge step to the plate and fund a truly open source alternative,\nleaving us with a potential code fork? If IB gets its political problems\nunder control and developers rally around it, two years is going to be a\nlong time to just sit back and wait for PG, Inc to release eRServer.\n\nThese developments are a major annoyance.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 02 Dec 2000 14:11:17 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 01:52 PM 12/2/00 -0800, Tom Samplonius wrote:\n\n> I doubt that. There is an IB (Interbase) replication option today, but\n>you must purchase it. That isn't so bad actually. PostgreSQL looks to be\n>going that way too: base functionality is open source, periphial\n>companies make money selling extensions.\n\nPostgreSQL, Inc perhaps has that as a game plan. Thus far Great Bridge claims\nto be 100% devoted to the Open Source model.\n\n> Besides simple master-slave replication is old news anyhow, and not\n>terribly useful. Products like FrontBase (www.frontbase.com) have full\n>shared-nothing cluster support too (FrontBase is commerical). Clustering\n>is a much better solution for redundancy purposes that replication.\n\nI'm not so much concerned about exactly what PG, Inc is planning to offer\nas a proprietary piece - I'm purist enough that I worry about what this\nsignals for their future direction.\n\nIf PG, Inc starts doing proprietary chunks, and Great Bridge remains 100%\ndedicated to Open Source, I know who I'll want to succeed and prosper.\n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 02 Dec 2000 14:41:34 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sat, Dec 02, 2000 at 03:47:19PM -0800, Adam Haberlach wrote:\n> > \n> > Where's the damn core code? I've seen a number of examples already of\n> > people asking about remote access/replication function, with an eye\n> > toward implementing it, and being told \"PostgreSQL, Inc. is working\n> > on that\". It's almost Microsoftesque: preannounce future functionality\n> > suppressing the competition.\n\nWell, I'll admit that this was getting a little over the top, especially\nquoted out of context. ;-)\n\n> \n> \tFor What It's Worth: In the three years (has it really been that long?)\n> that I've been off and on Postgres mailing lists, I've probably seen at\n> least 100 requests for replication, with about 40 of them mentioning\n> implementing it themself.\n> \n> \tI'm pretty sure that being told \"PostgreSQL Inc. is working on that\" is\n> not the only thing stopping it from happening. Most people just aren't up\n> to making it happen.\n\nIndeed. And it's only been less than a year that that response\nhas been given. 
However, it is only in that same timespan that the\nfunctionality and performance of the core server gotten to the point\nwere replication/remote access is one of immediately fruitful itches to\nscratch. We'll see what happens in the future.\n\nRoss\n", "msg_date": "Sat, 2 Dec 2000 16:43:56 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sat, Dec 02, 2000 at 03:51:15PM -0600, Ross J. Reedstrom wrote:\n> On Sat, Dec 02, 2000 at 11:31:37AM -0800, Don Baccus wrote:\n> > At 05:42 PM 12/2/00 +0100, Peter Eisentraut wrote:\n> > >Don Baccus writes:\n> > >\n> > >> Exactly what is PostgreSQL, Inc doing in this area?\n> > >\n> > >Good question... See http://www.erserver.com/.\n> > \n> <snip>\n> > \n> > Boy, I can just imagine the uproar this statement will cause on Slashdot when\n> > the world finds out about it.\n> > \n> \n> That one doesn't worry me us much as this quote from the press release at\n> \n> http://www.pgsql.com/press/PR_5.html\n> \n> \"We expect to have the source code tested and ready to contribute to\n> the open source community before the middle of October. Until that time\n> we are considering requests from a number of development companies and\n> venture capital groups to join us in this process.\"\n> \n> Where's the damn core code? I've seen a number of examples already of\n> people asking about remote access/replication function, with an eye\n> toward implementing it, and being told \"PostgreSQL, Inc. is working\n> on that\". It's almost Microsoftesque: preannounce future functionality\n> suppressing the competition.\n\n\tFor What It's Worth: In the three years (has it really been that long?)\nthat I've been off and on Postgres mailing lists, I've probably seen at\nleast 100 requests for replication, with about 40 of them mentioning\nimplementing it themself.\n\n\tI'm pretty sure that being told \"PostgreSQL Inc. is working on that\" is\nnot the only thing stopping it from happening. Most people just aren't up\nto making it happen.\n\n-- \nAdam Haberlach |\"California's the big burrito, Texas is the big\[email protected] | taco ... and following that theme, Florida is\nhttp://www.newsnipple.com| the big tamale ... and the only tamale that \n'88 EX500 | counts any more.\" -- Dan Rather \n", "msg_date": "Sat, 2 Dec 2000 15:47:19 -0800", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> PostgreSQL, Inc perhaps has that as a game plan.\n> I'm not so much concerned about exactly what PG, Inc is planning to offer\n> as a proprietary piece - I'm purist enough that I worry about what this\n> signals for their future direction.\n\nHmm. What has kept replication from happening in the past? It is a big\njob and difficult to do correctly. It is entirely my fault that you\nhaven't seen the demo code released; I've been packaging it to make it a\nbit easier to work with.\n\n> If PG, Inc starts doing proprietary chunks, and Great Bridge remains 100%\n> dedicated to Open Source, I know who I'll want to succeed and prosper.\n\nLet me be clear: PostgreSQL Inc. is owned and controlled by people who\nhave lived the Open Source philosophy, which is not typical of most\ncompanies in business today. We are eager to show how this can be done\non a full time basis, not only as an avocation. 
And we are eager to do\nthis as part of the community we have helped to build.\n\nAs soon as you find a business model which does not require income, let\nme know. The .com'ers are trying it at the moment, and there seems to be\na few flaws... ;)\n\n - Thomas\n", "msg_date": "Sun, 03 Dec 2000 02:58:18 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 02:58 AM 12/3/00 +0000, Thomas Lockhart wrote:\n>> PostgreSQL, Inc perhaps has that as a game plan.\n>> I'm not so much concerned about exactly what PG, Inc is planning to offer\n>> as a proprietary piece - I'm purist enough that I worry about what this\n>> signals for their future direction.\n>\n>Hmm. What has kept replication from happening in the past? It is a big\n>job and difficult to do correctly.\n\nPresumably what has kept it from happening in the past is that other \nthings were of much higher priority. Replicating a database on an\nengine as unreliable as PG was in earlier incarnations would simply\nreplicate your problems, for instance.\n\nThis statement of yours kinda belittles the work done over the past\nfew years by volunteers. It also ignores the fact that folks in other\ncompanies do get paid to work on open source software full-time without having\nto resort to creating closed source, proprietary products.\n\n> It is entirely my fault that you\n>haven't seen the demo code released; I've been packaging it to make it a\n>bit easier to work with.\n\nOK, good, this part gets open sourced. Still not an open development model.\nKnowing details about what's going on while code's being developed, not to mention\nbeing able to critique decisions, is one of the major benefits of the open\ndevelopment model.\n\n>Let me be clear: PostgreSQL Inc. is owned and controlled by people who\n>have lived the Open Source philosophy, which is not typical of most\n>companies in business today. We are eager to show how this can be done\n>on a full time basis, not only as an avocation.\n\nBuilding closed source proprietary products helps you live the open source\nphilosophy on a full-time basis?\n\n...\n\n>As soon as you find a business model which does not require income, let\n>me know.\n\nRed herring, and you know it. The question isn't whether or not your business\ngenerates income, but how it generates income.\n\nYour comment is the classic one tossed out by closed-source, proprietary\nsoftware advocates who dismiss open source software out-of-hand. \n\nCouldn't you think of something better, at least? Like ... something \noriginal?\n\n> The .com'ers are trying it at the moment, and there seems to be\n>a few flaws... ;)\n\nThat's a horrible analogy, and I suspect you know it, but at least it is\noriginal.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 02 Dec 2000 19:32:14 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> This statement of yours kinda belittles the work done over the past\n> few years by volunteers.\n\nimho it does not, and if somehow you can read that into it then you have\na much different understanding of language than I. 
I *am* one of those\nvolunteers, and know that the hundreds of hours I have contributed is\nonly a small part of the whole.\n\nMy discussion on this is over; apologies to others for helping to waste\nbandwidth :(\n\nI'll be happy to continue it next over some beers, which is a much more\nappropriate setting.\n\n - Thomas\n", "msg_date": "Sun, 03 Dec 2000 04:42:05 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > PostgreSQL, Inc perhaps has that as a game plan.\n> > I'm not so much concerned about exactly what PG, Inc is planning to offer\n> > as a proprietary piece - I'm purist enough that I worry about what this\n> > signals for their future direction.\n> Hmm. What has kept replication from happening in the past? It is a big\n> job and difficult to do correctly.\n\nWell, this has nothing whatsoever to do with open or closed source. Linux\nand FreeBSD are much larger, much harder to do correctly, as they are supersets\nof thousands of open source projects. Complexity is not relative to licensing.\n\n> > If PG, Inc starts doing proprietary chunks, and Great Bridge remains 100%\n> > dedicated to Open Source, I know who I'll want to succeed and prosper.\n> Let me be clear: PostgreSQL Inc. is owned and controlled by people who\n> have lived the Open Source philosophy, which is not typical of most\n> companies in business today.\n\nThat's one of the reasons why it's worked... open source meant open\ncontribution, open collaboration, open bug fixing. The price of admission\nwas doing your own installs, service, support, and giving something back....\n\nPG, I assume, is pretty much the same as most open source projects, massive\namounts of contribution shepherded by one or two individuals.\n\n> We are eager to show how this can be done\n> on a full time basis, not only as an avocation. And we are eager to do\n> this as part of the community we have helped to build.\n> As soon as you find a business model which does not require income, let\n> me know. The .com'ers are trying it at the moment, and there seems to be\n> a few flaws... ;)\n\nWell, whether or not a product is open, or closed, has very little\nto do with commercial success. Heck, the entire IBM PC spec was open, and\nthat certainly didn't hurt Dell, Compaq, etc.... the genie coming out\nof the bottle _only_ hurt IBM. In this case, however, the genie's been\nout for quite a while....\n\nBUT:\nPeople don't buy a product because it's open, they buy it because it offers\nsignificant value above and beyond what they can do *without* paying for\na product. Linus didn't start a new kernel out of some idealistic mantra\nof freeing the world, he was broke and wanted a *nix-y OS. Years later,\nthe product has grown massively. Those who are profiting off of it are\nunrelated to the code, to most of the developers.... why is this?\n\nAs it is, any company trying to make a closed version of an open source\nproduct has some _massive_ work to do. Manuals. Documentation. Sales.\nBranding. Phone support lines. Legal departments/Lawsuit prevention. Figuring\nout how to prevent open source from stealing the thunder by duplicating\nfeatures. And building a _product_.\n\nMost Open Source projects are not products, they are merely code, and some\nhorrid documentation, and maybe some support. 
The companies making money\nare not making better code, they are making better _products_....\n\nAnd I really havn't seen much in the way of full featured products, complete\nwith printed docs, 24 hour support, tutorials, wizards, templates, a company\nto sue if the code causes damage, GUI install, setup, removal, etc. etc. etc.\n\nWant to make money from open source? Well, you have to find, or build,\na _product_. Right now, there are no OS db products that can compare to oh,\nan Oracle product, a MSSQL product. There may be superior code, but that\ndoesn't make a difference in business. Business has very little to do\nwith building the perfect mousetrap, if nobody can easily use it.\n\n-Bop\n--\nBrought to you from boop!, the dual boot Linux/Win95 Compaq Presario 1625\nlaptop, currently running RedHat 6.1. Your bopping may vary.\n", "msg_date": "Sat, 02 Dec 2000 21:56:34 -0700", "msg_from": "Ron Chmara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sat, Dec 02, 2000 at 07:32:14PM -0800, Don Baccus wrote:\n> At 02:58 AM 12/3/00 +0000, Thomas Lockhart wrote:\n> >> PostgreSQL, Inc perhaps has that as a game plan.\n> >> I'm not so much concerned about exactly what PG, Inc is planning to offer\n> >> as a proprietary piece - I'm purist enough that I worry about what this\n\n.\n.\n.\n\n> >As soon as you find a business model which does not require income, let\n> >me know.\n> \n> Red herring, and you know it. The question isn't whether or not your business\n> generates income, but how it generates income.\n\n\tSo far, Open Source doesn't. The VA Linux IPO made ME some income,\nbut I'm not sure that was part of their plan...\n\n> Your comment is the classic one tossed out by closed-source, proprietary\n> software advocates who dismiss open source software out-of-hand. \n> \n> Couldn't you think of something better, at least? Like ... something \n> original?\n> \n> > The .com'ers are trying it at the moment, and there seems to be\n> >a few flaws... ;)\n> \n> That's a horrible analogy, and I suspect you know it, but at least it is\n> original.\n\n\tIt wasn't an analogy.\n\n\tIn any case, can we create pgsql-politics so we don't have to go over\nthis issue every three months? Can we create pgsql-benchmarks while we\nare at it, to take care of the other thread that keeps popping up?\n\n-- \nAdam Haberlach |\"California's the big burrito, Texas is the big\[email protected] | taco ... and following that theme, Florida is\nhttp://www.newsnipple.com| the big tamale ... and the only tamale that \n'88 EX500 | counts any more.\" -- Dan Rather \n", "msg_date": "Sat, 2 Dec 2000 21:29:04 -0800", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": ">And I really havn't seen much in the way of full featured products, complete\n>with printed docs, 24 hour support, tutorials, wizards, templates, a company\n>to sue if the code causes damage, GUI install, setup, removal, etc. etc. etc.\n\nMac OS X.\n\n;-)\n\n-pmb\n\n--\[email protected]\n\n\"4 out of 5 people with the wrong hardware want to run Mac OS X because...\"\nhttp://www.newertech.com/oscompatibility/osxinfo.html\n\n\n", "msg_date": "Sat, 2 Dec 2000 21:40:36 -0800", "msg_from": "Peter Bierman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 09:29 PM 12/2/00 -0800, Adam Haberlach wrote:\n>> Red herring, and you know it. 
The question isn't whether or not your business\n>> generates income, but how it generates income.\n>\n>\tSo far, Open Source doesn't. The VA Linux IPO made ME some income,\n>but I'm not sure that was part of their plan...\n\nVA Linux is a HARDWARE COMPANY. They sell servers. \"We've engineered 2U\nperformance into a 1U box\" is their current line.\n\nDell probably makes more money on their Linux server offerings (I have to\nadmit that donb.photo.net is running on one of their PowerEdge servers) than\nVA Linux does.\n\nIf I can show you a HARDWARE COMPANY that is diving on selling MS NT servers,\nwill you agree that this proves that the closed source and open source models\nboth must be wrong, because HARDWARE COMPANIES based on each paradigm are\nlosing money???\n>> > The .com'ers are trying it at the moment, and there seems to be\n>> >a few flaws... ;)\n>> \n>> That's a horrible analogy, and I suspect you know it, but at least it is\n>> original.\n\n>\tIt wasn't an analogy.\n\nSure it is. Read, damn it. First he makes the statement that a business\nbased on open source is, by definition, a zero-revenue company then he\nraises the spectre of .com companies (how many of them are open source?)\nas support for his argument. \n\nOK, it's not an analogy, it's a disassociation with reality. Feel better?\n\n>\tIn any case, can we create pgsql-politics so we don't have to go over\n>this issue every three months? \n\nMaybe you don't care about the open source aspect of this, but as a user\nwith about 1500 Open Source advocates using my code, I do. If IB comes \nforth in a fully Open Source state my user base will insist I switch.\n\nAnd I will.\n\nAnd I'll stop telling the world that MySQL sucks, too. Or at least that\nthey suck worse than the PG world :)\n\nThere is risk here. It isn't so much in the fact that PostgreSQL, Inc\nis doing a couple of modest closed-source things with the code. After\nall, the PG community has long acknowleged that the BSD license would\nallow others to co-op the code and commercialize it with no obligations.\n\nIt is rather sad to see PG, Inc. take the first step in this direction.\n\nHow long until the entire code base gets co-opted?\n\n(Yeah, that's extremist, but seeing PG, Inc. lay down the formal foundation\nfor such co-opting by taking the first step might well make the potential\nreality become real. It certainly puts some of the long-term developers\nin no position to argue against such a co-opted snitch of the code).\n\nI have to say I'm feeling pretty silly about raising such an effort to\nincrease PG awareness in mindshare vs. MySQL. I mean, if PG, Inc's \nefforts somehow delineate the hopes and goals of the PG community, I'm\nfairly disgusted. \n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 02 Dec 2000 22:06:01 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 04:42 AM 12/3/00 +0000, Thomas Lockhart wrote:\n>> This statement of yours kinda belittles the work done over the past\n>> few years by volunteers.\n>\n>imho it does not,\n\nSure it does. You in essence are saying that \"advanced replication is so\nhard that it could only come about if someone were willing to finance a\nPROPRIETARY solution. 
The PG developer group couldn't manage it if\nit were done Open Source\".\n\nIn other words, it is much harder than any of the work done by the\nsame group of people before they started working on proprietary \nversions.\n\nAnd that the only way to get them doing their best work is to put them\non proprietary, or \"semi-proprietary\" projects, though 24 months from\nnow, who's going to care? You've opened the door to IB prominence, not\nonly shooting PG's open source purity down in flames, but probably PG, Inc's\nas well - IF IB can figure out their political problems. \n\nIB, as it stands, is a damned good product in many ways ahead of PG. You're\ngiving them life by this approach, which is a kind of bizarre businees strategy.\n\n> I *am* one of those volunteers\n\nYes, I well remember you screwing up PG 7.0 just before beta, without bothering\nto test your code, and leaving on vacation. \n\nYou were irresponsible then, and you're being irresponsible now.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 02 Dec 2000 22:14:32 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 09:56 PM 12/2/00 -0700, Ron Chmara wrote:\n...\n\n>And I really havn't seen much in the way of full featured products, complete\n>with printed docs, 24 hour support, tutorials, wizards, templates, a company\n>to sue if the code causes damage, GUI install, setup, removal, etc. etc. etc.\n>\n>Want to make money from open source? Well, you have to find, or build,\n>a _product_. Right now, there are no OS db products that can compare to oh,\n>an Oracle product, a MSSQL product. There may be superior code, but that\n>doesn't make a difference in business. Business has very little to do\n>with building the perfect mousetrap, if nobody can easily use it.\n\nWhich of course is the business model - certainly not a \"zero revenue\" model\nas Thomas arrogantly suggests - which OSS service companies are following.\n\nThey provide the cocoon around the code.\n\nI buy RH releases from Fry's. Yes, I could download, but the price is such\nthat I'd rather just go buy the damned release CDs. I don't begrudge it,\nthey're providing me a real SERVICE, saving me time, which saves me dollars\nin opportunity costs (given my $200/hr customer billing rate). They make\nmoney buy publishing releases, I still get all the sources. We all win.\n\nIt is not a bad model. \n\nQuestion - if this model sucks, then certainly PG, Inc's net revenue last\nyear was greater than any true open source software company's? I mean, let's\nsee that slam against the \"zero revenue business model\" be proven by showing\nus some real numbers. \n\nJust what was PG, Inc's net revenue last year, and just how does their mixed\nrevenue model stack up against the OSS world?\n\n(NOT the .com world, which is in a different business, no matter what Thomas\nwants to claim).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 02 Dec 2000 22:21:18 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> There is risk here. 
It isn't so much in the fact that PostgreSQL, Inc\n> is doing a couple of modest closed-source things with the code. After\n> all, the PG community has long acknowleged that the BSD license would\n> allow others to co-op the code and commercialize it with no obligations.\n> \n> It is rather sad to see PG, Inc. take the first step in this direction.\n> \n> How long until the entire code base gets co-opted?\n\nI totaly missed your point here. How closing source of ERserver is related\nto closing code of PostgreSQL DB server? Let me clear things:\n\n1. ERserver isn't based on WAL. It will work with any version >= 6.5\n\n2. WAL was partially sponsored by my employer, Sectorbase.com,\nnot by PG, Inc.\n\nVadim\n\n\n", "msg_date": "Sat, 2 Dec 2000 23:00:29 -0800", "msg_from": "\"Vadim Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Don Baccus <[email protected]> writes:\n\n> At 04:42 AM 12/3/00 +0000, Thomas Lockhart wrote:\n> >> This statement of yours kinda belittles the work done over the past\n> >> few years by volunteers.\n> >\n> >imho it does not,\n> \n> Sure it does. You in essence are saying that \"advanced replication is so\n> hard that it could only come about if someone were willing to finance a\n> PROPRIETARY solution. The PG developer group couldn't manage it if\n> it were done Open Source\".\n<snip>\n> \n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n\nMr. Baccus,\n\nIt is funny how you rant and rave about the importance of opensource\nand how Postgresql Inc. making an non-opensource product is bad. Yet I\ngo to your website which is full of photographs and you make it a big\ndeal about people should not steal your photographs and how someone\nmust buy a commercial license to use them. That doesn't sound very\n'open-source' to me! Why don't you practice what you preach and allow\nredistribution of those photographs?\n\n-- \nPrasanth Kumar\[email protected]\n", "msg_date": "02 Dec 2000 23:49:14 -0800", "msg_from": "[email protected] (Prasanth A. Kumar)", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Don Baccus writes:\n\n> How long until the entire code base gets co-opted?\n\nYeah so what? Nobody's forcing you to use, buy, or pay attention to any\nsuch efforts. The market will determine whether the release model of\nPostgreSQL, Inc. appeals to customers. Open source software is a\nprivilege, and nobody has the right to call someone \"irresponsible\"\nbecause they want to get paid for their work and don't choose to give away\ntheir code.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Sun, 3 Dec 2000 13:06:22 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Ron Chmara wrote:\n\n> As it is, any company trying to make a closed version of an open source\n> product has some _massive_ work to do. Manuals. Documentation. Sales.\n> Branding. Phone support lines. Legal departments/Lawsuit prevention. Figuring\n> out how to prevent open source from stealing the thunder by duplicating\n> features. And building a _product_.\n>\n> Most Open Source projects are not products, they are merely code, and some\n> horrid documentation, and maybe some support. 
The companies making money\n> are not making better code, they are making better _products_....\n>\n> And I really havn't seen much in the way of full featured products, complete\n> with printed docs, 24 hour support, tutorials, wizards, templates, a company\n> to sue if the code causes damage, GUI install, setup, removal, etc. etc. etc.\n\nThis kind of stuff is more along the lines of what Great Bridge is doing. In about\na week, we'll be releasing a GB-branded release of 7.0.3 - including printed\nmanuals (much of which is new), a GUI installer (which is open source), support\npackages including fully-staffed 24/7. Details to follow soon on pgsql-announce.\n\nI don't want to speak for Pgsql Inc., but it seems to me that they are pursuing a\nslightly different business model than us - more focused on providing custom\ndevelopment around the base PostgreSQL software. And that's a great way to get\nmore people using PostgreSQL. Some of what they create for their customers may be\nopen source, some not. It's certainly their decision - and it's a perfectly\njustifiable business model, followed by open source companies such as Covalent\n(Apache), Zend (PHP), and TurboLinux. I don't think it's productive or appropriate\nto beat up on Pgsql Inc for developing bolt-on products in a different way -\nparticularly with Vadim's clarification that the bolt-ons don't require anything\nspecial in the open source backend.\n\nOur own business model is, as I indicated, different. We got a substantial\ninvestment from our parent company, whose chairman sat on the Red Hat board for\nthree years, and a mandate to create a *big* company that could provide the\ninfrastructure (human and technical) to enable PostgreSQL to go up against the\nproprietary players like Oracle and Microsoft. A fully-staffed 24/7 data center\nisn't cheap, and our services won't be either. But it's a different type of\nbusiness - we're providing the benefits of the open source development model to a\ngroup of customers that might not otherwise get involved, precisely because they\ndemand to see a company of Great Bridge's heft behind a product before they buy.\n\nI think PostgreSQL and other open source projects are big enough for lots of\ndifferent companies, with lots of different types of business models. Indeed, from\nwhat I've seen of Pgsql Inc (and I hope I haven't mischaracterized them), our\nbusiness models are highly complementary. At Great Bridge, we hope and expect that\nother companies that \"get it\" will get more involved with PostgreSQL - that can\nonly add to the strength of the project.\n\nRegards,\nNed\n--\n----------------------------------------------------\nNed Lilly e: [email protected]\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n", "msg_date": "Sun, 03 Dec 2000 08:01:43 -0500", "msg_from": "Ned Lilly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> > Branding. Phone support lines. Legal departments/Lawsuit prevention.\nFiguring\n> > out how to prevent open source from stealing the thunder by duplicating\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > features. And building a _product_.\n\nOops. You didn't really mean that, did you? 
Could it be that there are some\npeople out there thinking \"let them free software fools do the hard initial\nwork, once things are working nicely, we take over, add a few \"secret\"\ningredients, and voila - the commercial product has been created?\n\nAfter reading the statement above I believe that surely most of the honest\ndevelopers involved in postgres would wish they had chosen GPL as licensing\nscheme.\n\nI agree that most of the work is always done by a few. I also agree that it\nwould be nice if they could get some financial reward for it. But no dirty\ntricks please. Do not betray the base. Otherwise, the broad developer base\nwill be gone before you even can say \"freesoftware\".\n\nI, for my part, have learned another lesson today. I was just about to give\nin with the licensing scheme in our project to allow the GPL incompatible\nOpenSSL to be used. After reading the above now I know it is worth the extra\neffort to \"roll our own\" or wait for another GPL'd solution rather than\nsacrificing the unique protection the GPL gives us.\n\nHorst\ncoordinator gnumed project\n\n", "msg_date": "Mon, 4 Dec 2000 01:02:44 +1100", "msg_from": "\"Horst Herb\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> > How long until the entire code base gets co-opted?\n>\n> Yeah so what? Nobody's forcing you to use, buy, or pay attention to any\n> such efforts. The market will determine whether the release model of\n> PostgreSQL, Inc. appeals to customers. Open source software is a\n> privilege, and nobody has the right to call someone \"irresponsible\"\n> because they want to get paid for their work and don't choose to give away\n> their code.\n\nJust bear in mind that although a few developers always deliver outstanding\nperformance in any project, those open source projects have usually seen a\nhuge broad developer base. Hundreds of people putting their effort into the\nproject. These people never ask for a cent, never even dream of some\ncommercial benefit. They do it for the sake of creating something good,\nbeing part of something great.\n\nEspecially in the case of Postgres the \"product\" has a long heritage, and\nthe most active people today are not neccessarily the ones who have put in\nmost \"total\" effort (AFAIK, I might be wrong here). Anyway, Postgres would\nnot be where it is today without the hundreds of small cooperators &\ntesters. Lock them out from the source code - even if it is only a side\nbranch, and Postgres will die (well, at least it would die for our project)\n\nOpen source is not a mere marketing model. It is a philosophy. It is about\nessential freedom, about human progress, about freedom of speech and\nthought. It is about sharing and caring. Those who don't understand this,\nshould please stick to their ropes and develop closed source from the\nbeginning and not try to fool the free software community.\n\nHorst\n\n\n\n", "msg_date": "Mon, 4 Dec 2000 01:11:46 +1100", "msg_from": "\"Horst Herb\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Thomas Lockhart wrote:\n\n> As soon as you find a business model which does not require income, let\n> me know. The .com'ers are trying it at the moment, and there seems to be\n> a few flaws... ;)\n\nWhile I have not contributed anything to Postgres yet, I have\ncontributed to other environments. 
The prospect that I could create a\npiece of code, spend weeks/years of my own time on something, and some\nentity can come along, take what I've written and create a product which\nis better for it, and then not share back is offensive. Under the GPL it is\nillegal. (Postgres should try to move to the GPL.)\n\nI am working on a full-text search engine for Postgres. A really fast\none, something better than anything else out there. It combines the\npower and scalability of a web search engine with the data-mining\ncapabilities of SQL.\n\nIf I write this extension to Postgres, and release it, is it right that\na business can come along, add a few things here and there and introduce\na new closed source product based on what I have written? That is certainly\nnot what I intend. My intention was to honor the people before me for\nproviding the rich environment which is Postgres. I have made real money\nusing Postgres in a work environment. The time I would give back more\nthan covers MSSQL/Oracle licenses.\n\nOpen source is a social agreement, not a business model. If you break\nthe social agreement for a business model, the business model will fail\nbecause the society which fundamentally created the product you wish to\nsell will crumble from mistrust (or shun you). In short, it is wrong to\nsell the work of others without proper compensation and the full\nagreement of everyone that has contributed. If you don't get that, get\nout of the open source market now.\n\nThat said, there is a long standing business model which is 100%\ncompatible with Open Source and it is that of the lowly 'VAR.' You do not\nthink for one minute that an Oracle VAR would dare to add features to\nOracle and make their own SQL, do you?\n\nAs a PostgreSQL \"VAR\" you are in a better position than any other VAR.\nYou get to partner in the code development process. (You couldn't ask\nOracle to add a feature and expect to keep it to yourself, could you?)\n\nI know this is a borderline rant, and I am sorry, but I think it is very\nimportant that the integrity of open source be preserved at 100% because\nit is a very slippery slope, and we are all surrounded by the temptation to\ncheat the spirit of open source \"just a little\" for short term gain. \n\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 03 Dec 2000 10:41:50 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 11:00 PM 12/2/00 -0800, Vadim Mikheev wrote:\n>> There is risk here. It isn't so much in the fact that PostgreSQL, Inc\n>> is doing a couple of modest closed-source things with the code. After\n>> all, the PG community has long acknowledged that the BSD license would\n>> allow others to co-opt the code and commercialize it with no obligations.\n>> \n>> It is rather sad to see PG, Inc. take the first step in this direction.\n>> \n>> How long until the entire code base gets co-opted?\n>\n>I totally missed your point here. How is closing the source of ERserver related\n>to closing the code of the PostgreSQL DB server? Let me clear things up:\n\n(not based on WAL)\n\nThat wasn't clear from the blurb.\n\nStill, this notion that PG, Inc will start producing closed-source products\npoisons the well. It strengthens FUD arguments of the \"open source can't\nprovide enterprise solutions\" variety. 
\"Look, even PostgreSQL, Inc realizes\nthat you must follow a close sourced model in order to provide tools for\nthe corporate world.\"\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 03 Dec 2000 07:44:06 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 01:06 PM 12/3/00 +0100, Peter Eisentraut wrote:\n\n> Open source software is a\n>privilege,\n\nI admit that I don't subscribe to Stallman's \"source to software is a\nright\" argument. That's far off my reality map.\n\n> and nobody has the right to call someone \"irresponsible\"\n>because they want to get paid for their work and don't choose to give away\n>their code.\n\nHowever, I do have the right to make such statements, just as you have the\nright to disagree. It's called the first amendment in my country.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 03 Dec 2000 07:48:06 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> >I totaly missed your point here. How closing source of ERserver is related\n> >to closing code of PostgreSQL DB server? Let me clear things:\n> \n> (not based on WAL)\n> \n> That's wasn't clear from the blurb.\n> \n> Still, this notion that PG, Inc will start producing closed-source products\n> poisons the well. It strengthens FUD arguments of the \"open source can't\n> provide enterprise solutions\" variety. \"Look, even PostgreSQL, Inc realizes\n> that you must follow a close sourced model\n> in order to provide tools for the corporate world.\"\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nDid you miss Thomas' answer? Wasn't it clear that the order is to provide\nincome?\n\nVadim\n\n\n", "msg_date": "Sun, 3 Dec 2000 12:09:05 -0800", "msg_from": "\"Vadim Mikheev\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "I think this trend is MUCH bigger than what Postgres, Inc. is doing... its\nhappening all over\nthe comminity. Heck take a look around... Jabber, Postgres, Red Hat, SuSe,\nStorm etc. etc.\nthese companies are making good money off a business plan that was basically\n\"hey, lets take some\nof that open source and make a real product out of it...\". As long as they\ndribble releases into\nthe community, they're not in violation... Its not a bad business model if\nyou think about it, if you\ncan take a product that is good (great as in PG) and add value, sell it and\nmake money, why not?\nHell, you didn't have to spend the gazillion R&D dollars on the initial\ndesign and implementation,\nyour basically reaping the rewards off of the work of other people.\n\nAre you ready for hundreds upon hundreds of little projects turning into\n\"startup\" companies?\nIt was bound to happen. Why? because money is involved, plain and simple.\n\nMaybe its a natural progression of this stuff, who knows, I just know that\nI've been around\nthe block a couple times, been in the industry too long to know that the\nminority voice never\ngets the prize... we usually set the trend and pay for it in the end...\nfatalistic? maybe. But not\nfar from the truth...\n\nSorry to be a downer... 
The Red Sox didn't get Mussina....\n\n----- Original Message -----\nFrom: \"Don Baccus\" <[email protected]>\nTo: \"Ross J. Reedstrom\" <[email protected]>\nCc: \"Peter Eisentraut\" <[email protected]>; \"PostgreSQL Development\"\n<[email protected]>\nSent: Saturday, December 02, 2000 5:11 PM\nSubject: Re: [HACKERS] beta testing version\n\n\n> At 03:51 PM 12/2/00 -0600, Ross J. Reedstrom wrote:\n>\n> >\"We expect to have the source code tested and ready to contribute to\n> >the open source community before the middle of October. Until that time\n> >we are considering requests from a number of development companies and\n> >venture capital groups to join us in this process.\"\n> >\n> >Where's the damn core code? I've seen a number of examples already of\n> >people asking about remote access/replication function, with an eye\n> >toward implementing it, and being told \"PostgreSQL, Inc. is working\n> >on that\". It's almost Microsoftesque: preannounce future functionality,\n> >suppressing the competition.\n>\n> Well, this is just all 'round a bad precedent and an unwelcome path\n> for PostgreSQL, Inc to embark upon.\n>\n> They've also embarked on one fully proprietary product (built on PG),\n> which means they're not an Open Source company, just a sometimes Open\n> Source company.\n>\n> It's a bit ironic to learn about this on the same day I learned that\n> Solaris 8 is being made available in source form. Sun's slowly \"getting\n> it\" and moving glacially towards Open Source, while PostgreSQL, Inc.\n> seems to be drifting in the opposite direction.\n>\n> >if I absolutely need\n> >something that's only in CVS right now, I can bite the bullet and use\n> >a snapshot server.\n>\n> This work might be released as Open Source, but it isn't an open development\n> scenario. The core work's not available for public scrutiny, and the details\n> of what they're actually up to don't appear to be public either.\n>\n> OK, they're probably funding Vadim's work on WAL, so the indictment's probably\n> not 100% accurate - but I don't know that.\n>\n> >I'd be really happy with someone reiterating the commitment to an\n> >open release, and letting us all know how badly the schedule has\n> >slipped. Remember, we're all here to help! Get everyone stomping bugs\n> >in code you're going to release soon anyway, and concentrate on the\n> >quasi-proprietary extensions.\n>\n> Which makes me wonder, is Vadim's time going to be eaten up by working\n> on these quasi-proprietary extensions that the rest of us won't get\n> for two years unless we become customers of Postgres, Inc?\n>\n> Will Great Bridge step to the plate and fund a truly open source alternative,\n> leaving us with a potential code fork? If IB gets its political problems\n> under control and developers rally around it, two years is going to be a\n> long time to just sit back and wait for PG, Inc to release eRServer.\n>\n> These developments are a major annoyance.\n>\n>\n>\n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n\n\n", "msg_date": "Sun, 3 Dec 2000 15:29:47 -0500", "msg_from": "\"Gary MacDougall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sat, 2 Dec 2000, Adam Haberlach wrote:\n\n> \tIn any case, can we create pgsql-politics so we don't have to go over\n> this issue every three months? 
Can we create pgsql-benchmarks while we\n> are at it, to take care of the other thread that keeps popping up?\n\nno skin off my back:\n\n\tpgsql-advocacy\n\tpgsql-chat\n\tpgsql-benchmarks\n\n-advocacy/-chat are pretty much the same concept ... \n\n\n", "msg_date": "Sun, 3 Dec 2000 17:06:17 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sat, 2 Dec 2000, Don Baccus wrote:\n\n> > I *am* one of those volunteers\n> \n> Yes, I well remember you screwing up PG 7.0 just before beta, without bothering\n> to test your code, and leaving on vacation. \n> \n> You were irresponsible then, and you're being irresponsible now.\n\nOkay, so let me get this one straight ... it was irresponsible for him to\nput code in that was broken the last time, but it wouldn't be\nirresponsible for us to release code that we don't feel is ready this\ntime? *raised eyebrow*\n\nJust want to get this straight, as it kinda sounds hypocritical to me, but\nwant to make sure that I understand before I fully arrive at that\nconclusion ... :)\n\n", "msg_date": "Sun, 3 Dec 2000 17:10:02 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Don Baccus wrote:\n> \n> At 04:42 AM 12/3/00 +0000, Thomas Lockhart wrote:\n> >> This statement of yours kinda belittles the work done over the past\n> >> few years by volunteers.\n> >\n> >imho it does not,\n> \n> Sure it does. You in essence are saying that \"advanced replication is so\n> hard that it could only come about if someone were willing to finance a\n> PROPRIETARY solution. The PG developer group couldn't manage it if\n> it were done Open Source\".\n> \n> In other words, it is much harder than any of the work done by the\n> same group of people before they started working on proprietary\n> versions.\n> \n> And that the only way to get them doing their best work is to put them\n> on proprietary, or \"semi-proprietary\" projects, though 24 months from\n> now, who's going to care? You've opened the door to IB prominence, not\n> only shooting PG's open source purity down in flames, but probably PG, Inc's\n> as well - IF IB can figure out their political problems.\n>\n> IB, as it stands, is a damned good product in many ways ahead of PG. You're\n> giving them life by this approach, which is a kind of bizarre business strategy.\n> \n\nYou (and others ;) may also be interested in SAPDB (SAP's version of\nAdabas), \nwhich is soon to be released under the GPL. It is already downloadable for\nfree use \nfrom www.sapdb.org\n\n-------------\nHannu\n", "msg_date": "Sun, 03 Dec 2000 23:19:37 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Mon, 4 Dec 2000, Horst Herb wrote:\n\n> > > Branding. Phone support lines. Legal departments/Lawsuit prevention. Figuring\n> > > out how to prevent open source from stealing the thunder by duplicating\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > > features. And building a _product_.\n> \n> Oops. You didn't really mean that, did you? 
Could it be that there are some\n> people out there thinking \"let them free software fools do the hard initial\n> work; once things are working nicely, we take over, add a few 'secret'\n> ingredients, and voila - the commercial product has been created\"?\n> \n> After reading the statement above I believe that surely most of the\n> honest developers involved in Postgres would wish they had chosen the GPL\n> as their licensing scheme.\n> \n> I agree that most of the work is always done by a few. I also agree\n> that it would be nice if they could get some financial reward for it.\n> But no dirty tricks, please. Do not betray the base. Otherwise, the\n> broad developer base will be gone before you can even say\n> \"free software\".\n> \n> I, for my part, have learned another lesson today. I was just about to\n> give in on the licensing scheme in our project to allow the\n> GPL-incompatible OpenSSL to be used. After reading the above, I now know it\n> is worth the extra effort to \"roll our own\" or wait for another GPL'd\n> solution rather than sacrificing the unique protection the GPL gives\n> us.\n\nto this day, this still cracks me up ... if a BSD-licensed OSS project\nsomehow gets its code base \"closed\", that closing can only affect the code\nbase from its closing on forward ... on that day, there is *nothing*\nstopping the OSS community from taking the code base from the second\nbefore it was closed and running with it ...\n\nyou get no more, and no less, protection under either license. \n\nthe \"protection\" that GPL provides is that it prevents someone from taking\nthe code, making proprietary modifications to it and branding it as their\nown for release ... 'cause under the GPL, they would have to release the source\ncode for the modifications ...\n\nPgSQL, Inc hasn't done anything so far but develop third party\n*applications* over top of PgSQL, with plans to release them at various\nstages as the clients we are developing them for permit ... as well as\nproviding consulting to clients looking at moving towards PgSQL and\nrequiring help with migrations ...\n\nWe aren't going to release something that is half-assed and buggy ... the\nwhole erServer stuff right now is *totally* external to the PgSQL server,\nand, as such, is a third-party application, not a proprietary extension\nlike Don wants to make it out to be ...\n\n", "msg_date": "Sun, 3 Dec 2000 17:22:25 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "\"Gary MacDougall\" <[email protected]> writes:\n\n> I think this trend is MUCH bigger than what Postgres, Inc. is\n> doing... it's happening all over the community. Heck, take a look\n> around... Jabber, Postgres, Red Hat, SuSE, Storm etc. etc. these\n> companies are making good money off a business plan that was basically\n> \"hey, let's take some of that open source and make a real product out\n> of it...\". \n\nI doubt many of these \"make good money\". 
We're almost breaking even,\nwhich is probably the best among these.\n\nNote also that some companies contribute engineering resources into\ncore free software components, like gcc, gdb, the linux kernel, glibc,\ngnome, gtk+, rpm, apache, XFree, KDE - AFAIK, Red Hat and SuSE are by\nfar the two doing this the most.\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "03 Dec 2000 16:24:11 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=d8d?=)", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Adam Haberlach wrote:\n> In any case, can we create pgsql-politics so we don't have to go over\n> this issue every three months? Can we create pgsql-benchmarks while we\n> are at it, to take care of the other thread that keeps popping up?\n\n pgsql-yawn, where any of them can happen as often and long as\n they want.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Sun, 3 Dec 2000 16:40:01 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "mlw wrote:\n> \n> Thomas Lockhart wrote:\n> \n> > As soon as you find a business model which does not require income, let\n> > me know. The .com'ers are trying it at the moment, and there seems to be\n> > a few flaws... ;)\n> \n> While I have not contributed anything to Postgres yet, I have\n> contributed to other environments. The prospect that I could create a\n> piece of code, spend weeks/years of my own time on something, and some\n> entity can come along, take what I've written and create a product which\n> is better for it, and then not share back is offensive. Under the GPL it is\n> illegal. (Postgres should try to move to the GPL.)\n\nI think that forbidding anyone else from profiting from your work is\nalso \nsomewhat obscene ;)\n\nThe whole idea of open source is that in the open, ideas mature faster and bugs\nare found and fixed faster.\n\n> I am working on a full-text search engine for Postgres. A really fast\n> one, something better than anything else out there.\n\nIsn't everybody? ;)\n\n> It combines the power and scalability of a web search engine with \n> the data-mining capabilities of SQL.\n\nAre you doing it in a fully open-source fashion or just planning to\nrelease \nit as OS \"when it somewhat works\"?\n\n> If I write this extension to Postgres, and release it, is it right that\n> a business can come along, add a few things here and there and introduce\n> a new closed source product based on what I have written? That is certainly \n> not what I intend. \n\nIf your intention is to later cash in on proprietary uses of your code, \nyou should of course use the GPL.\n\n> My intention was to honor the people before me for\n> providing the rich environment which is Postgres. I have made real money\n> using Postgres in a work environment. The time I would give back more\n> than covers MSSQL/Oracle licenses.\n> \n> Open source is a social agreement, not a business model.\n\nNot one but many (and btw. 
incompatible) social agreements.\n\n> If you break the social agreement for a business model, \n\nYou are free to put your additions under the GPL; it is just a tradition in\nthe PG \ncommunity not to contaminate the core with anything less free than BSD\n(and yes, \nforcing your idea of freedom on other people qualifies as \"less free\" ;)\n\n> the business model will fail\n> because the society which fundamentally created the product you wish to\n> sell will crumble from mistrust (or shun you). In short, it is wrong to\n> sell the work of others without proper compensation and the full\n> agreement of everyone that has contributed. If you don't get that, get\n> out of the open source market now.\n\nSo now a social contract is a market? I _am_ confused.\n\n> That said, there is a long standing business model which is 100%\n> compatible with Open Source and it is that of the lowly 'VAR.' You do not\n> think for one minute that an Oracle VAR would dare to add features to\n> Oracle and make their own SQL, do you?\n\nBut if Oracle were released under a BSD license, it might benefit both the \nVAR and the customer to do so under some circumstances.\n\n> As a PostgreSQL \"VAR\" you are in a better position than any other VAR.\n> You get to partner in the code development process. (You couldn't ask\n> Oracle to add a feature and expect to keep it to yourself, could you?)\n\nYou could ask another VAR to do that if you yourself are incapable/don't \nhave time, etc.\n\nAnd of course I can keep it to myself even if done by Oracle. \nWhat I can't do is forbid others from having it, too.\n\n> I know this is a borderline rant, and I am sorry, but I think it is very\n> important that the integrity of open source be preserved at 100% because\n> it is a very slippery slope, and we are all surrounded by the temptation to\n> cheat the spirit of open source \"just a little\" for short term gain.\n\nDo you mean that anyone who has contributed to an open source project\nshould \nbe forbidden from doing any closed-source development?\n\n\n-----------\nHannu\n", "msg_date": "Sun, 03 Dec 2000 23:40:14 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Sat, 2 Dec 2000, Don Baccus wrote:\n> \n> > > I *am* one of those volunteers\n> >\n> > Yes, I well remember you screwing up PG 7.0 just before beta, without bothering\n> > to test your code, and leaving on vacation.\n> >\n> > You were irresponsible then, and you're being irresponsible now.\n> \n> Okay, so let me get this one straight ... it was irresponsible for him to\n> put code in that was broken the last time, but it wouldn't be\n> irresponsible for us to release code that we don't feel is ready this\n> time? *raised eyebrow*\n> \n> Just want to get this straight, as it kinda sounds hypocritical to me, but\n> want to make sure that I understand before I fully arrive at that\n> conclusion ... 
:)\n\nIIRC, this thread woke up on someone complaining about PostgreSQL Inc\npromising \nto release some code for replication in mid-October and asking for\nconfirmation \nthat this is just a schedule slip and that the project is still going on\nand \ngoing to be released as open source.\n\nWhat seems to be the answer is: \"NO, we will keep the replication code\nproprietary\".\n\nI have not seen this answer myself, but I've got this impression from\nthe contents \nof the whole discussion.\n\nDo you know if this is the case?\n\n-----------\nHannu\n", "msg_date": "Sun, 03 Dec 2000 23:48:55 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Hannu Krosing wrote:\n> > I know this is a borderline rant, and I am sorry, but I think it is very\n> > important that the integrity of open source be preserved at 100% because\n> > it is a very slippery slope, and we are all surrounded by the temptation to\n> > cheat the spirit of open source \"just a little\" for short term gain.\n> \n> Do you mean that anyone who has contributed to an open source project\n> should be forbidden from doing any closed-source development?\n\nNo, not at all. At least for me, if I write code which is dependent on\nthe open source work of others, then hell yes, that work should also be\nopen source. That, to me, is the difference between right and wrong.\n\nIf you write a program which stands on its own, takes no work from\nuncompensated parties, then you have the unambiguous right to do whatever\nyou want.\n\nI honestly feel that it is wrong to take what others have shared and use\nit for the basis of something you will not share, and I can't understand\nhow anyone could think differently.\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 03 Dec 2000 17:17:36 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "mlw wrote:\n> \n> Hannu Krosing wrote:\n> > > I know this is a borderline rant, and I am sorry, but I think it is very\n> > > important that the integrity of open source be preserved at 100% because\n> > > it is a very slippery slope, and we are all surrounded by the temptation to\n> > > cheat the spirit of open source \"just a little\" for short term gain.\n> >\n> > Do you mean that anyone who has contributed to an open source project\n> > should be forbidden from doing any closed-source development?\n> \n> No, not at all. At least for me, if I write code which is dependent on\n> the open source work of others, then hell yes, that work should also be\n> open source. That, to me, is the difference between right and wrong.\n\nThat may be so, but the world as a whole is not that far yet. If\nopen-source \nis going to prevail (which I believe it will), it is not because it\nis \n\"right\", but because it is a more efficient way of producing quality\nsoftware.\n\n> I honestly feel that it is wrong to take what others have shared and use\n> it for the basis of something you will not share, and I can't understand\n> how anyone could think differently.\n\nThere can be many, many reasons you would need to also write\nclosed-source code.\nThe BSD license gives you that freedom (the GPL does not).\n\nBy distributing your code under a BSD license you acknowledge that the\nworld is \nnot perfect. 
This is not the way of a true revolutionary ;) \n\nDon't let that scare you away from contributing to PostgreSQL, though. You could \nalways contribute and keep your code under a different license -\nGPL, LGPL, MPL, ...\nIt would probably not be integrated into the core, but would very likely\nbe kept \nin contrib.\n\n----------\nHannu\n", "msg_date": "Mon, 04 Dec 2000 00:30:41 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "No offense, Trond, if you were in on the Red Hat IPO from the start,\nyou'd have to say those people made \"good money\". Bad market\nor good market, those \"friends of Red Hat\" made some serious coin.\n\nLet me clarify: I'm not against this process (and making money), I just\nthink there is an issue with OSL that will start to catch up with itself\npretty soon.\n\ng.\n\n----- Original Message -----\nFrom: \"Trond Eivind Glomsrød\" <[email protected]>\nTo: \"PostgreSQL Development\" <[email protected]>\nSent: Sunday, December 03, 2000 4:24 PM\nSubject: Re: [HACKERS] beta testing version\n\n\n\"Gary MacDougall\" <[email protected]> writes:\n\n> I think this trend is MUCH bigger than what Postgres, Inc. is\n> doing... it's happening all over the community. Heck, take a look\n> around... Jabber, Postgres, Red Hat, SuSE, Storm etc. etc. these\n> companies are making good money off a business plan that was basically\n> \"hey, let's take some of that open source and make a real product out\n> of it...\".\n\nI doubt many of these \"make good money\". We're almost breaking even,\nwhich is probably the best among these.\n\nNote also that some companies contribute engineering resources into\ncore free software components, like gcc, gdb, the linux kernel, glibc,\ngnome, gtk+, rpm, apache, XFree, KDE - AFAIK, Red Hat and SuSE are by\nfar the two doing this the most.\n\n\n--\nTrond Eivind Glomsrød\nRed Hat, Inc.\n\n\n", "msg_date": "Sun, 3 Dec 2000 18:10:41 -0500", "msg_from": "\"Gary MacDougall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "\"Gary MacDougall\" <[email protected]> writes:\n\n> No offense, Trond, if you were in on the Red Hat IPO from the start,\n> you'd have to say those people made \"good money\".\n\nI'm talking about the business as such, not the IPO where the price\nwent stratospheric (we were priced like we were earning 1 or 2 billion\ndollars a year, which was kind of weird). \n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "03 Dec 2000 18:13:16 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=d8d?=)", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> No, not at all. At least for me, if I write code which is dependent on\n> the open source work of others, then hell yes, that work should also be\n> open source. That, to me, is the difference between right and wrong.\n>\n\nActually, you're not legally bound to anything if you write \"new\" additional\ncode, even if it's dependent on something. You could consider it\n\"proprietary\"\nand charge for it. 
There are tons of these things going on right now.\n\nHaving a dependency on an open source product/code/functionality does not\nmake one bound to make their code \"open source\".\n\n> If you write a program which stands on its own, takes no work from\n> uncompensated parties, then you have the unambiguous right to do whatever\n> you want.\n\nThat's a given.\n\n> I honestly feel that it is wrong to take what others have shared and use\n> it for the basis of something you will not share, and I can't understand\n> how anyone could think differently.\n\nThe issue isn't \"fairness\", the issue really is trust. And from what I'm\nseeing, like anything else in life, if you rely solely on trust when money is\ninvolved, the system will fail--eventually.\n\nsad... isn't it?\n\n\n> --\n> http://www.mohawksoft.com\n\n\n", "msg_date": "Sun, 3 Dec 2000 18:15:46 -0500", "msg_from": "\"Gary MacDougall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Gary MacDougall wrote:\n> \n> > No, not at all. At least for me, if I write code which is dependent on\n> > the open source work of others, then hell yes, that work should also be\n> > open source. That, to me, is the difference between right and wrong.\n> >\n> \n> Actually, you're not legally bound to anything if you write \"new\" additional\n> code, even if it's dependent on something. You could consider it\n> \"proprietary\"\n> and charge for it. There are tons of these things going on right now.\n> \n> Having a dependency on an open source product/code/functionality does not\n> make one bound to make their code \"open source\".\n> \n> > If you write a program which stands on its own, takes no work from\n> > uncompensated parties, then you have the unambiguous right to do whatever\n> > you want.\n> \n> That's a given.\n> \n> > I honestly feel that it is wrong to take what others have shared and use\n> > it for the basis of something you will not share, and I can't understand\n> > how anyone could think differently.\n> \n> The issue isn't \"fairness\", the issue really is trust. And from what I'm\n> seeing, like anything else in life, if you rely solely on trust when money is\n> involved, the system will fail--eventually.\n> \n> sad... isn't it?\n\nThat's why, as bad as it is, GPL is the best answer.\n\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 03 Dec 2000 18:18:49 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, Dec 03, 2000 at 05:17:36PM -0500, mlw wrote:\n> ... if I write code which is dependent on\n> the open source work of others, then hell yes, that work should also be\n> open source. That, to me, is the difference between right and wrong.\n\nThis is short and I will say no more:\n\nThe entire social contract around PostgreSQL is written down in the \nlicense. Those who have contributed to the project (are presumed to) \nhave read it and agreed to it before submitting their changes. Some\npeople have contributed intending someday to fold the resulting code \nbase into their proprietary product, and carefully checked to ensure \nthe license would allow it. Nobody has any legal or moral right to \nimpose extra use restrictions, on their own code or (especially!) on \nanybody else's.\n\nIf you would like to place additional restrictions on your own \ncontributions, you can:\n\n1. Work on other projects. (Adabas will soon be GPL, but you can \n start now. Others are coming, too.) 
There's always plenty of \n work to be done on Free Software.\n\n2. Fork the source base, add your code, and release the whole thing \n under GPL. You can even fold in changes from the original project, \n later. (Don't expect everybody to get along, afterward.) A less\n drastic alternative is to release GPL'd patches.\n\n3. Grin and bear it. Greed is a sin, but so is envy.\n\nFlame wars about licensing mainly distract people from writing code. \nHow would *you* like the time spent? \n\nNathan Myers\[email protected]\n\n", "msg_date": "Sun, 3 Dec 2000 15:26:35 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> mlw wrote: [heavily edited]\n>> No, not at all. At least for me, if I write code which is dependent on\n>> the open source work of others, then hell yes, that work should also be\n>> open source. That, to me, is the difference between right and wrong.\n>> I honestly feel that it is wrong to take what others have shared and use\n>> it for the basis of something you will not share, and I can't understand\n>> how anyone could think differently.\n\nYou're missing the point almost completely. We've been around on this\nGPL-vs-BSD discussion many many many times before, and the discussion\nalways ends up at the same place: we aren't changing the license.\n\nThe two key reasons (IMHO) are:\n\n1. The original code base is BSD. We do not have the right to\nunilaterally relabel that code as GPL. Maybe we could try to say that\nall additions/changes after a certain date are GPL, but that'd become a\nhopeless mess very shortly; how would you keep track of what was which?\nNot to mention the fact that a mixed-license project would not satisfy\nGPL partisans anyway.\n\n2. Since Postgres is a database, and the vast majority of uses for\ndatabases are business-related, we have to have a license that\nbusinesses will feel comfortable with. One aspect of that comfort is\nthat they be able to do things like building proprietary applications\natop the database. If we take a purist GPL approach, we'll just drive\naway a lot of potential users and contributors. (I for one wouldn't be\nhere today, most likely, if Postgres had been GPL --- my then company\nwould not have gotten involved with it.)\n\nI have nothing against GPL; it's appropriate for some things. But\nit's not appropriate for *this* project, because of history and subject\nmatter. 
We've done just fine with the BSD license and I do not see a\nreason to think that GPL would be an improvement.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Dec 2000 18:55:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version " }, { "msg_contents": "At 5:17 PM -0500 12/3/00, mlw wrote:\n>I honestly feel that it is wrong to take what others have shared and use\n>it for the basis of something you will not share, and I can't understand\n>how anyone could think differently.\n\nYeah, it really sucks when companies that are in business to make money by creating solutions and support for end users take the hard work of volunteers, commit resources to extending and enhancing that work, and make that work more accessible to end users (for a fee).\n\nMaybe it's unfair that the people at the bottom of that chain don't reap a percentage of the revenue generated at the top, but those people were free to read the license of the product they were contributing to.\n\nIronically, the GPL protects the future income of a programmer much better than the BSD license, because under the GPL the original author can sell the code to a commercial enterprise who otherwise would not have been able to use it. Even more ironically, the GPL doesn't prevent 3rd parties from feeding at the trough as long as they DON'T extend and enhance the product. (Though Red Hat and friends donate work back to maintain community support.)\n\nTo me, Open Source is about admitting that the Computer Science field is in its infancy, and the complex systems we're building today are the fundamental building blocks of tomorrow's systems. It is about exchanging control for adoption, a trade-off that has millions of case studies.\n\nThink Different,\n-pmb\n\n--\n\"Every time you provide an option, you're asking the user to make a decision.\n That means they will have to think about something and decide about it.\n It's not necessarily a bad thing, but, in general, you should always try to\n minimize the number of decisions that people have to make.\"\n http://joel.editthispage.com/stories/storyReader$51\n\n\n", "msg_date": "Sun, 3 Dec 2000 16:26:38 -0800", "msg_from": "Peter Bierman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, 3 Dec 2000, Hannu Krosing wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > On Sat, 2 Dec 2000, Don Baccus wrote:\n> > \n> > > > I *am* one of those volunteers\n> > >\n> > > Yes, I well remember you screwing up PG 7.0 just before beta, without bothering\n> > > to test your code, and leaving on vacation.\n> > >\n> > > You were irresponsible then, and you're being irresponsible now.\n> > \n> > Okay, so let me get this one straight ... it was irresponsible for him to\n> > put code in that was broken the last time, but it wouldn't be\n> > irresponsible for us to release code that we don't feel is ready this\n> > time? *raised eyebrow*\n> > \n> > Just want to get this straight, as it kinda sounds hypocritical to me, but\n> > want to make sure that I understand before I fully arrive at that\n> > conclusion ... 
:)\n> \n> IIRC, this thread woke up on someone complaining about PostgreSQL Inc\n> promising \n> to release some code for replication in mid-October and asking for\n> confirmation \n> that this is just a schedule slip and that the project is still going on\n> and \n> going to be released as open source.\n> \n> What seems to be the answer is: \"NO, we will keep the replication code\n> proprietary\".\n> \n> I have not seen this answer myself, but I've got this impression from\n> the contents \n> of the whole discussion.\n> \n> Do you know if this is the case?\n\nIf this is the impression that someone gave, I am shocked ... Thomas\nhimself has already posted stating that it was a schedule slip on his\npart. Vadim did up the software days before the Oracle OpenWorld\nconference, but it was a very rudimentary implementation. At the show,\nThomas dove in to build a basic interface to it, and, as time permits, has\nbeen working on packaging to get it into contrib before v7.1 is released\n...\n\nI've been trying to follow this thread, and seem to have missed where\nsomeone arrived at the conclusion that we were proprietarizing (word?) this\n... we do apologize that it didn't get out mid-October, but it is/was\npurely a schedule slip ...\n\n", "msg_date": "Sun, 3 Dec 2000 20:49:09 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, 3 Dec 2000, mlw wrote:\n\n> Hannu Krosing wrote:\n> > > I know this is a borderline rant, and I am sorry, but I think it is very\n> > > important that the integrity of open source be preserved at 100% because\n> > > it is a very slippery slope, and we are all surrounded by the temptation to\n> > > cheat the spirit of open source \"just a little\" for short term gain.\n> > \n> > Do you mean that anyone who has contributed to an open source project\n> > should be forbidden from doing any closed-source development?\n> \n> No, not at all. At least for me, if I write code which is dependent on\n> the open source work of others, then hell yes, that work should also be\n> open source. That, to me, is the difference between right and wrong.\n> \n> If you write a program which stands on its own, takes no work from\n> uncompensated parties, then you have the unambiguous right to do whatever\n> you want.\n> \n> I honestly feel that it is wrong to take what others have shared and use\n> it for the basis of something you will not share, and I can't understand\n> how anyone could think differently.\n\n\n", "msg_date": "Sun, 3 Dec 2000 20:50:24 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, 3 Dec 2000, Gary MacDougall wrote:\n\n> > If you write a program which stands on its own, takes no work from\n> > uncompensated parties, then you have the unambiguous right to do whatever\n> > you want.\n> \n> That's a given.\n\nokay, then now I'm confused ... neither SePICK nor erServer are derived\nfrom uncompensated parties ... they work over top of PgSQL, but are not\nintegrated into it, nor have they required any changes to PgSQL in order to\nmake it work ...\n\n... 
so, where is this whole outcry coming from?\n\n\n", "msg_date": "Sun, 3 Dec 2000 20:53:08 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, 3 Dec 2000, Don Baccus wrote:\n\n> At 11:00 PM 12/2/00 -0800, Vadim Mikheev wrote:\n> >> There is risk here. It isn't so much in the fact that PostgreSQL, Inc\n> >> is doing a couple of modest closed-source things with the code. After\n> >> all, the PG community has long acknowledged that the BSD license would\n> >> allow others to co-opt the code and commercialize it with no obligations.\n> >> \n> >> It is rather sad to see PG, Inc. take the first step in this direction.\n> >> \n> >> How long until the entire code base gets co-opted?\n> >\n> >I totally missed your point here. How is closing the source of ERserver related\n> >to closing the code of the PostgreSQL DB server? Let me clear things up:\n> \n> (not based on WAL)\n> \n> That wasn't clear from the blurb.\n> \n> Still, this notion that PG, Inc will start producing closed-source products\n> poisons the well. It strengthens FUD arguments of the \"open source can't\n> provide enterprise solutions\" variety. \"Look, even PostgreSQL, Inc realizes\n> that you must follow a closed-source model in order to provide tools for\n> the corporate world.\"\n\nDon ... have you never worked for a client that has paid you to develop a\nproduct for them? Have you taken the work you did for them, that they\npaid for, and shoved it out into the community to use for free? Why would\nwe do anything any differently? \n\nYour clients ask you to develop something for them as an extension to\nPgSQL (it's extensible, ya know?) that can be loaded as a simple module\n(a la IPMeter) that gives them a competitive advantage over their\ncompetitors, but that doesn't require any changes to the physical backend\nto implement ... would you refuse their money? Or would you do like\nPgSQL, Inc is doing, where we do a risk-analysis of the changes and work\nwith the client to make use of the \"competitive advantage\" it gives them\nfor a period of time prior to releasing it open source?\n\nGeoff explains it much better than I do, from a business perspective, but\nany extension/application that PgSQL, Inc develops for our clients has a\nlife-span on it ... after which, keeping it in sync with what is being\ndeveloped would cost more than the competitive advantage it gives the\nclients ... sometimes that is 0, sometimes 6 months ... extreme cases,\n24 months ... \n\nNobody is going to pay you to develop X if you are going to turn around\nand give it for free to their competitor ... it makes no business sense. 
In\na lot of cases, making these changes benefits the project as some of the\nstuff that is required for them gets integrated into the backend ...\n\n\n", "msg_date": "Sun, 3 Dec 2000 21:03:54 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, Dec 03, 2000 at 08:49:09PM -0400, The Hermit Hacker wrote:\n> On Sun, 3 Dec 2000, Hannu Krosing wrote:\n> \n> > \n> > IIRC, this thread woke up on someone complaining about PostgreSQL Inc\n> > promising \n> > to release some code for replication in mid-October and asking for\n> > confirmation \n> > that this is just a schedule slip and that the project is still going on\n> > and \n> > going to be released as open source.\n> > \n\nThat would be me asking the question, as a reply to Don's concern regarding\nthe 'proprietary extension on a 24 mo. release delay'\n\n> > What seems to be the answer is: \"NO, we will keep the replication code\n> > proprietary\".\n> > \n> > I have not seen this answer myself, but I've got this impression from\n> > the contents \n> > of the whole discussion.\n> > \n> > Do you know if this is the case?\n> \n> If this is the impression that someone gave, I am shocked ... Thomas\n> himself has already posted stating that it was a schedule slip on his\n> part. \n\nActually, Thomas said:\n\nThomas> Hmm. What has kept replication from happening in the past? It\nThomas> is a big job and difficult to do correctly. It is entirely my\nThomas> fault that you haven't seen the demo code released; I've been\nThomas> packaging it to make it a bit easier to work with.\n\nI noted the use of the words \"demo code\" rather than \"core code\". That\nbothered (and still bothers) me, but I didn't reply at the time,\nsince there was already enough heat in this thread. I'll take your\ninterpretation to mean it's just a matter of semantics.\n\n> [...] Vadim did up the software days before the Oracle OpenWorld\n> conference, but it was a very rudimentary implementation. At the show,\n> Thomas dove in to build a basic interface to it, and, as time permits, has\n> been working on packaging to get it into contrib before v7.1 is released\n> ...\n> \n> I've been trying to follow this thread, and seem to have missed where\n> someone arrived at the conclusion that we were proprietarizing (word?) this\n> ... we do apologize that it didn't get out mid-October, but it is/was\n> purely a schedule slip ...\n> \n\nMixture of the silent schedule slip on the core code, and the explicit\nstatement on the erserver.com page regarding the 'proprietary extensions'\nwith a delayed source release.\n\nThe biggest problem I see with having core developers making proprietary\nextensions is the potential for conflict of interest when and if\nsome of us donate equivalent code to the core. The core developers who\nhave also done proprietary versions will have to be very cautious\nwhen working on such code. They're in a bind, with two parts. First,\nthey have obligations to their employer and their employer's partners\nto not release the closed work early. Second, they may be tempted to ignore such\nindependent extensions, or even actively exclude them from the core,\nin favor of their own code. The core developers _do_ have a bit of a\ntrack record favoring each other's code over external code, as is natural:\nwe all trust work more from sources we know better, especially when that\nsource is ourselves. 
But this favoritism could work against the earliest\npossible open solution.\n\nI'm still anxious to see the core patches needed to support replication.\nSince you've leaked that they work going back to v6.5, I have a feeling\nthe approach may not be the one I was hoping for. \n\nRoss\n", "msg_date": "Sun, 3 Dec 2000 20:00:53 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> I'm still anxious to see the core patches needed to support replication.\n> Since you've leaked that they work going back to v6.5, I have a feeling\n> the approach may not be the one I was hoping for.\n\nThere are no core patches required to support replication. This has been\nsaid already, but perhaps lost in the noise.\n\n - Thomas\n", "msg_date": "Mon, 04 Dec 2000 02:16:08 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "I'm agreeing with the people behind things like SePICK and erServer.\nI'm only being sort of cheeky in saying that they wouldn't have had a\nproduct had\nit not been for the Open Source that they are leveraging off of.\nMaking money? I don't know what their plans are, but at some point I would\nfully expect *someone* to make money.\n\n\n\n----- Original Message -----\nFrom: \"The Hermit Hacker\" <[email protected]>\nTo: \"Gary MacDougall\" <[email protected]>\nCc: \"mlw\" <[email protected]>; \"Hannu Krosing\" <[email protected]>; \"Thomas\nLockhart\" <[email protected]>; \"Don Baccus\"\n<[email protected]>; \"PostgreSQL Development\"\n<[email protected]>\nSent: Sunday, December 03, 2000 7:53 PM\nSubject: Re: [HACKERS] beta testing version\n\n\n> On Sun, 3 Dec 2000, Gary MacDougall wrote:\n>\n> > > If you write a program which stands on its own, takes no work from\n> > > uncompensated parties, then you have the unambiguous right to do whatever\n> > > you want.\n> >\n> > That's a given.\n>\n> okay, then now I'm confused ... neither SePICK nor erServer are derived\n> from uncompensated parties ... they work over top of PgSQL, but are not\n> integrated into it, nor have they required any changes to PgSQL in order to\n> make it work ...\n>\n> ... so, where is this whole outcry coming from?\n>\n>\n", "msg_date": "Sun, 3 Dec 2000 22:01:45 -0500", "msg_from": "\"Gary MacDougall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Correct me if I'm wrong, but in the last 3 years what company that you\nknow of didn't consider an IPO part of the \"business and such\"? Most\ntech companies that have been formed in the last 4 - 5 years have one\nthing on the brain--IPO. It's the #1 thing (sadly) that they care about.\nI only wish these companies cared as much about *creating* and\ninnovation as they cared about going public...\n\ng.\n\n> No offense, Trond, if you were in on the Red Hat IPO from the start,\n> you'd have to say those people made \"good money\".\n\n>>I'm talking about the business as such, not the IPO where the price\n>>went stratospheric (we were priced like we were earning 1 or 2 billion\n>>dollars a year, which was kind of weird).\n\n\n>>--\n>>Trond Eivind Glomsrød\n>>Red Hat, Inc.\n\n\n\n", "msg_date": "Sun, 3 Dec 2000 22:06:25 -0500", "msg_from": "\"Gary MacDougall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, 3 Dec 2000, Ross J. 
Reedstrom wrote:\n\n> > If this is the impression that someone gave, I am shocked ... Thomas\n> > himself has already posted stating that it was a schedule slip on his\n> > part. \n> \n> Actually, Thomas said:\n> \n> Thomas> Hmm. What has kept replication from happening in the past? It\n> Thomas> is a big job and difficult to do correctly. It is entirely my\n> Thomas> fault that you haven't seen the demo code released; I've been\n> Thomas> packaging it to make it a bit easier to work with.\n> \n> I noted the use of the words \"demo code\" rather than \"core code\". That\n> bothered (and still bothers) me, but I didn't reply at the time,\n> since there was already enough heat in this thread. I'll take your\n> interpretation to mean it's just a matter of semantics.\n\nthere is nothing that we are developing at this date that is *core* code\n... the \"demo code\" that we are going to be putting into contrib is a\nsimplistic version, and the first cut, of what we are developing ... like\neverything in contrib, it will be hack-on-able, extendable, etc ...\n\n> I'm still anxious to see the core patches needed to support\n> replication. Since you've leaked that they work going back to v6.5, I\n> have a feeling the approach may not be the one I was hoping for.\n\nthis is where the 'confusion' appears to be arising ... there are no\n*patches* ... anything that will require patches to the core server will\nalmost have to be released as open source or we hit problems where\ndevelopment continues without us ... what we are doing with replication\nrequires *zero* patches to the server, it is purely a third-party\napplication ...\n\n", "msg_date": "Sun, 3 Dec 2000 23:15:00 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, 3 Dec 2000, Gary MacDougall wrote:\n\n> I'm agreeing with the people behind things like SePICK and erServer.\n> I'm only being sort of cheeky in saying that they wouldn't have had a\n> product had\n> it not been for the Open Source that they are leveraging off of.\n\nSo, basically, if I hadn't pulled together Thomas, Bruce and Vadim 5 years\nago, when Jolly and Andrew finished their graduate thesis, and continued\nto provide the resources required to bring PgSQL from v1.06 to now, we\nwouldn't be able to use that as a basis for third party applications\n... pretty much, ya, that sums it up ...\n\n> ----- Original Message -----\n> From: \"The Hermit Hacker\" <[email protected]>\n> To: \"Gary MacDougall\" <[email protected]>\n> Cc: \"mlw\" <[email protected]>; \"Hannu Krosing\" <[email protected]>; \"Thomas\n> Lockhart\" <[email protected]>; \"Don Baccus\"\n> <[email protected]>; \"PostgreSQL Development\"\n> <[email protected]>\n> Sent: Sunday, December 03, 2000 7:53 PM\n> Subject: Re: [HACKERS] beta testing version\n> \n> \n> > On Sun, 3 Dec 2000, Gary MacDougall wrote:\n> >\n> > > > If you write a program which stands on its own, takes no work from\n> > > > uncompensated parties, then you have the unambiguous right to do whatever\n> > > > you want.\n> > >\n> > > That's a given.\n> >\n> > okay, then now I'm confused ... neither SePICK nor erServer are derived\n> > from uncompensated parties ... they work over top of PgSQL, but are not\n> > integrated into it, nor have they required any changes to PgSQL in order to\n> > make it work ...\n> >\n> > ... so, where is this whole outcry coming from?\n> >\n> >\n> \n> \n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 3 Dec 2000 23:18:00 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, Dec 03, 2000 at 08:53:08PM -0400, The Hermit Hacker wrote:\n> On Sun, 3 Dec 2000, Gary MacDougall wrote:\n> \n> > > If you write a program which stands on its own, takes no work from\n> > > uncompensated parties, then you have the unambiguous right to do whatever\n> > > you want.\n> > \n> > That's a given.\n> \n> okay, then now I'm confused ... neither SePICK nor erServer are derived\n> from uncompensated parties ... they work over top of PgSQL, but are not\n> integrated into it, nor have they required any changes to PgSQL in order to\n> make it work ...\n> \n> ... so, where is this whole outcry coming from?\n\nThis paragraph from erserver.com:\n\n eRServer development is currently concentrating on core, universal\n functions that will enable individuals and IT professionals\n to implement PostgreSQL ORDBMS solutions for mission critical\n datawarehousing, datamining, and eCommerce requirements. These\n initial developments will be published under the PostgreSQL Open\n Source license, and made available through our sites, Certified\n Platinum Partners, and others in PostgreSQL community.\n\nled me (and many others) to believe that this was going to be a tightly\nintegrated service, requiring code in the PostgreSQL core, since that's the\nnormal use of 'core' around here.\n\nNow that I know it's a completely external implementation, I feel bad about\ngriping about deadlines. I _do_ wish I'd known this _design choice_ a bit\nearlier, as it impacts how I'll try to do some things with pgsql, but that's\nmy own fault for over interpreting press releases and pre-announcements.\n\nRoss\n", "msg_date": "Sun, 3 Dec 2000 21:42:37 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, 3 Dec 2000, Ross J. Reedstrom wrote:\n\n> This paragraph from erserver.com:\n> \n> eRServer development is currently concentrating on core, universal\n> functions that will enable individuals and IT professionals\n> to implement PostgreSQL ORDBMS solutions for mission critical\n> datawarehousing, datamining, and eCommerce requirements. 
These\n> initial developments will be published under the PostgreSQL Open\n> Source license, and made available through our sites, Certified\n> Platinum Partners, and others in PostgreSQL community.\n> \n> led me (and many others) to believe that this was going to be a tighly\n> integrated service, requiring code in the PostgreSQL core, since that's the\n> normal use of 'core' around here.\n> \n> Now that I know it's a completely external implementation, I feel bad about\n> griping about deadlines. I _do_ wish I'd known this _design choice_ a bit\n> earlier, as it impacts how I'll try to do some things with pgsql, but that's\n> my own fault for over interpreting press releases and pre-announcements.\n\nApologies from our side as well ... failings in the English language and\nchoice of words on our side ... the last thing that we want to do is have\nto maintain patches across multiple versions for stuff that is core to the\nserver ... Thomas/Vadim can easily correct me if I've missed something,\nbut to the best of my knowledge, from our many discussions, anything that\nis *core* to the PgSQL server itself will always be released similar to\nany other project (namely, tested and open) ... including hooks for any\nproprietary projects ... the sanctity of the *core* server is *always*\nforemost in our minds, no matter what other projects we are working on ...\n\n\n", "msg_date": "Sun, 3 Dec 2000 23:59:05 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "bingo.\n\nNot just third-party apps, but think of all the vertical products that\ninclude PG...\nI'm right now wondering if TIVO uses it?\n\nYou have to think that PG will show up in some pretty interesting\nmoney-making products...\n\nSo yes, had you not got the ball rolling.... well, you know what I'm saying.\n\ng.\n\n----- Original Message -----\nFrom: \"The Hermit Hacker\" <[email protected]>\nTo: \"Gary MacDougall\" <[email protected]>\nCc: \"mlw\" <[email protected]>; \"Hannu Krosing\" <[email protected]>; \"Thomas\nLockhart\" <[email protected]>; \"Don Baccus\"\n<[email protected]>; \"PostgreSQL Development\"\n<[email protected]>\nSent: Sunday, December 03, 2000 10:18 PM\nSubject: Re: [HACKERS] beta testing version\n\n\n> On Sun, 3 Dec 2000, Gary MacDougall wrote:\n>\n> > I'm agreeing with the people like SePICK and erServer.\n> > I'm only being sort of cheeky in saying that they wouldn't have had a\n> > product had\n> > it not been for the Open Source that they are leveraging off of.\n>\n> So, basically, if I hadn't pulled together Thomas, Bruce and Vadim 5 years\n> ago, when Jolly and Andrew finished their graduate thesis, and continued\n> to provide the resources required to bring PgSQL from v1.06 to now, we\n> wouldn't be able to use that as a basis for third party applications\n> ... 
pretty much, ya, that sums it up ...\n>\n> > ----- Original Message -----\n> > From: \"The Hermit Hacker\" <[email protected]>\n> > To: \"Gary MacDougall\" <[email protected]>\n> > Cc: \"mlw\" <[email protected]>; \"Hannu Krosing\" <[email protected]>; \"Thomas\n> > Lockhart\" <[email protected]>; \"Don Baccus\"\n> > <[email protected]>; \"PostgreSQL Development\"\n> > <[email protected]>\n> > Sent: Sunday, December 03, 2000 7:53 PM\n> > Subject: Re: [HACKERS] beta testing version\n> >\n> >\n> > > On Sun, 3 Dec 2000, Gary MacDougall wrote:\n> > >\n> > > > > If you write a program which stands on its own, takes no work from\n> > > > > uncompensated parties, then you have the unambiguous right to do\nwhat\n> > > > > ever you want.\n> > > >\n> > > > Thats a given.\n> > >\n> > > okay, then now I'm confused ... neither SePICK or erServer are derived\n> > > from uncompensated parties ... they work over top of PgSQL, but are\nnot\n> > > integrated into them, nor have required any changes to PgSQL in order\nto\n> > > make it work ...\n> > >\n> > > ... so, where is this whole outcry coming from?\n> > >\n> > >\n> >\n> >\n> >\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org\n>\n\n\n", "msg_date": "Sun, 3 Dec 2000 23:17:14 -0500", "msg_from": "\"Gary MacDougall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 09:42 PM 12/3/00 -0600, Ross J. Reedstrom wrote:\n\n>This paragraph from erserver.com:\n>\n> eRServer development is currently concentrating on core, universal\n> functions that will enable individuals and IT professionals\n> to implement PostgreSQL ORDBMS solutions for mission critical\n> datawarehousing, datamining, and eCommerce requirements. These\n> initial developments will be published under the PostgreSQL Open\n> Source license, and made available through our sites, Certified\n> Platinum Partners, and others in PostgreSQL community.\n>\n>led me (and many others) to believe that this was going to be a tighly\n>integrated service, requiring code in the PostgreSQL core, since that's the\n>normal use of 'core' around here.\n\nRight. This is a big source of misunderstanding. There's still the fact\nthat 50% of the PG steering committee that are involved in [partially] closed\nsource development based on PG, though. This figure disturbs me.\n\n50% is a lot. It's like ... half, right? Or did I miss something in the\nconversion?\n\nThis represents significant change from the past where 0%, AFAIK, were \ninvolved in closed source PG add-ons.\n\n>Now that I know it's a completely external implementation, I feel bad about\n>griping about deadlines. 
I _do_ wish I'd known this _design choice_ a bit\n>earlier, as it impacts how I'll try to do some things with pgsql, but that's\n>my own fault for over interpreting press releases and pre-announcements.\n\nIF 50% of the steering committee is to embark on such a task in a closed source\nor semi-closed source development model, it would seem common courtesy to inform the\ncommunity of the facts as early as they were decided upon.\n\nIn fact, it might seem to be common courtesy to float the notion in the community,\nto gauge reaction and to build support, before finalizing such a decision.\n\nAFAIC this arrived out of nowhere, a sort of stealth \"50% of the steering committee\nhas decided to embark on a semi-proprietary solution to the replication problem that\nyou won't see as open source for [up to] two years after its completion\".\n\nThat's a paradigm shift. Whether right or wrong, there's a responsibility to\ncommunicate the fact that 50% of the steering committee has decided to partially\nabandon the open source development model for one that is (in some cases) closed\nfor two years and (in other cases) forever.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 03 Dec 2000 22:03:11 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 11:59 PM 12/3/00 -0400, The Hermit Hacker wrote:\n> the sanctity of the *core* server is *always*\n>foremost in our minds, no matter what other projects we are working on ...\n\nWhat happens if financially things aren't entirely rosy with your company?\nThe problem in taking itty-bitty steps in this direction is that you're\ninvolving outside money interests that don't necessarily adhere to this\nview.\n\nHaving taken the first steps to a proprietary, closed source future, would\nyou pledge to bankrupt your company rather than accept a large capital \ninvestment with an ROI based on proprietary extensions to the core that \nmight not be likely to come out of the non-tainted side of the development\nhouse?\n\nWould your company sign a contract to that effect with independent parties,\ni.e. that it would never violate the sanctity of the *core*? Even if it means\nyou go broke? And that your investors go broke?\n\nOr would your investors prefer you not make such a formal commitment, in order\nto keep options open if things don't go well?\n\n(in the early 80's my company received a total of $8,000,000 in pre-IPO\ncapital investments, so I have some experience with the expectations of investors.\nIt tends to make me a bit paranoid. I'm not the only COO to have such experiences\nwhile living the life).\n\nWhat happens in two years if those investors in eRServer haven't gotten adequate\nreturn on their investment? Do you have a formal agreement that the source will\nbe released regardless? Can the community inspect the agreement so we can judge\nfor ourselves whether or not this assurance is adequately backed by contract\nlanguage?\n\nAre your agreements Open Source? 
:)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 03 Dec 2000 22:14:51 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> In fact, it might seem to be common courtesy...\n\nAn odd choice of words coming from you, Don.\n\nWe are offering our services and expertise to a community outside\n-hackers, as a business formed in a way that this new community expects\nto see. Nothing special or sinister here. Other than it seems to have\nraised the point that you expected each of us to be working for you,\ngratis, on projects you find compelling, using all of our available\ntime, far into the future just as each of us has over the last five\nyears.\n\nAfter your recent spewing, it irks me a little to admit that this will\nnot change, and that we are likely to continue to each work on OS\nPostgreSQL projects using all of our available time, just as we have in\nthe past.\n\nA recent example of non-sinister change in another area is the work done\nto release 7.0.3. This is a release which would not have happened in\nprevious cycles, since we are so close to beta on 7.1. But GB paid Tom\nLane to work on it as part of *their* business plan, and he shepherded it\nthrough the cycle. There was no outcry from you at this presumption, and\non this diversion of community resources for this effort. Not sure why,\nother than you chose to pick some other fight.\n\nAnd no matter which fight you chose, you're wasting the time of others\nas you fight your demons.\n\n - Thomas\n", "msg_date": "Mon, 04 Dec 2000 07:11:23 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Horst Herb wrote:\n> > > Branding. Phone support lines. Legal departments/Lawsuit prevention.\n> Figuring\n> > > out how to prevent open source from stealing the thunder by duplicating\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > > features. And building a _product_.\n> Oops. You didn't really mean that, did you? Could it be that there are some\n> people out there thinking \"let them free software fools do the hard initial\n> work, once things are working nicely, we take over, add a few \"secret\"\n> ingredients, and voila - the commercial product has been created?\n\nThat wasn't the _intended_ meaning, but I suppose that it's a related issue.\n\nI was referring to companies expending variable amounts of time and resources\non a new closed source technology, only to have their marketshare shriveled up\nby OSS coders rapidly duplicating their efforts, and releasing free code or a\nless expensive product.\n\nTo put it in proper context:\nIf the project under discussion was reverse engineered (or even clean room\n\"re-engineered\") and released as a separate, open source, product (or\neven just \"free\" code), the demand for the PG, Inc. software is placed at\nrisk.\n\nThe actual size, and scope, of the project is irrelevant, as determined\nOSS advocates have pretty much taken on any, and every, viable project. It's\nnot really about \"stealing\" code efforts, any more than RedHat \"stole\"\nLinux, or that Pg has been stealing features from other ORDBMS's... 
it's that\nOSS is a difficult market to capture, if you are selling closed source\ncode that can be created, or duplicated, by others.\n\nStronghold and Raven(?) were more successful products before the OSS\nencryption efforts took off. Now anybody can build an SSL server, without\npaying for licenses that used to cost thousands (I know there's the RSA\nissue in this history as well, but let's be realistic about who actually\nobeyed all those laws, okay?). Zend is trying to build an IDE for\nPHP, but the open-source market moves fast enough that within a few\nmonths of release, there will be clones, reverse engineered versions,\netc. SSH tried valiantly to close their code base.... which created\na market for OpenSSH. You see it time and again, there's a closed\nversion/extension/plug-in, feature, and an OSS clone gets built up\nfor it. GUI for sendmail? OSS now. New AIM protocols? gaim was on\nit in days. New, proprietary, M$ mail software that took years to build\nup, research, and develop? Give the OSS hordes a few months. New,\nclosed, SMB protocols? Give the samba team a few days, maybe a few\nweeks.\n\nTo wrap up this point, a closed derivative (or closed new) product is\nnow competing against OSS pools of developers, which is much harder to\nstop than a single closed source company. It's difficult to compete\non code quality, or code features.... you have to compete with a\n*product* that is better than anything globally co-ordinated code\nhackers can build themselves.\n\n-Bop\n\n--\nBrought to you from iBop the iMac, a MacOS, Win95, Win98, LinuxPPC machine,\nwhich is currently in MacOS land. Your bopping may vary.\n", "msg_date": "Mon, 04 Dec 2000 00:38:04 -0700", "msg_from": "Ron Chmara <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "At 07:11 AM 12/4/00 +0000, Thomas Lockhart wrote:\n\n>We are offering our services and expertise to a community outside\n>-hackers, as a business formed in a way that this new community expects\n>to see. Nothing special or sinister here. Other than it seems to have\n>raised the point that you expected each of us to be working for you,\n>gratis, on projects you find compelling, using all of our available\n>time, far into the future just as each of us has over the last five\n>years.\n\nNo, not at all. Working gratis is not the issue, as I made clear. There\nare - despite your rather condescending statement implying otherwise -\nbusiness models that lead to revenue without abandoning open source.\n\nI'm making a decent living following such a business model, thank\nyou very much. I'm living proof that it is possible.\n\n...\n\n>A recent example of non-sinister change in another area is the work done\n>to release 7.0.3. This is a release which would not have happened in\n>previous cycles, since we are so close to beta on 7.1. But GB paid Tom\n>Lane to work on it as part of *their* business plan, and he shepherded it\n>through the cycle. There was no outcry from you at this presumption, and\n>on this diversion of community resources for this effort. 
Not sure why,\n>other than you chose to pick some other fight.\n\nThere's a vast difference between releasing 7.0.3 in open source form TODAY\nand eRServer, which may not be released in open source form for up to two\nyears after it enters the market on a closed source, proprietary footing.\nTo suggest there is no difference, as you seem to be doing, is a hopelessly\nunconvincing argument.\n\nThe fact that you seem blind to the difference is one reason why PG, Inc \nworries me (since you are a principal in the company).\n\nThe reason you heard no outcry from me in the PG 7.0.3 case is because there\n*is* a difference between it and a semi-proprietary product like eRServer.\nIf GB had held Tom's work on PG 7.0.3 and released it only in (say) a packaged\nrelease for purchase, saying \"we'll release it to the CVS tree after we\nrecoup our investment\", there would've been an outcry from me, bet on it.\n\nProbably others, too...\n\n>And no matter which fight you chose, you're wasting the time of others\n>as you fight your demons.\n\nWell, I guess I'll have to stay off my medication, otherwise my demons\nmight disappear. I'm a regular miracle of medical science until I forget\nto take them.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 04 Dec 2000 00:15:16 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sun, 3 Dec 2000, Don Baccus wrote:\n\n> At 11:59 PM 12/3/00 -0400, The Hermit Hacker wrote:\n> > the sanctity of the *core* server is *always*\n> >foremost in our minds, no matter what other projects we are working on ...\n> \n> What happens if financially things aren't entirely rosy with your\n> company? The problem in taking itty-bitty steps in this direction is\n> that you're involving outside money interests that don't necessarily\n> adhere to this view.\n> \n> Having taken the first steps to a proprietary, closed source future,\n> would you pledge to bankrupt your company rather than accept a large\n> capital investment with an ROI based on proprietary extensions to the\n> core that might not be likely to come out of the non-tainted side of\n> the development house?\n\nYou mean sort of like Great Bridge investing in core developers? Quite\nfrankly, I have yet to see anything but good come out of Tom as a result\nof that, as now he has more time on his hands ... then again, maybe Outer\nJoins was a bad idea? *raised eyebrow*\n\nPgSQL is *open source* ... that means that if you don't like it, take the\ncode, fork off your own version if you don't like what's happening to the\ncurrent tree and build your own community *shrug* \n\n\n", "msg_date": "Mon, 4 Dec 2000 11:25:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Can we PLEASE kill this thread? There are only a handful of people who\nare making contributions here and nothing really new is being said. I\nagree that the issue should be discussed, but this does not seem like the\nright forum.\n\nThanks.\n- brandon\n\n\nb. 
palmer, [email protected]\npgp: www.crimelabs.net/bpalmer.pgp5\n\n", "msg_date": "Mon, 4 Dec 2000 10:45:10 -0500 (EST)", "msg_from": "bpalmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Mon, 4 Dec 2000, Don Baccus wrote:\n\n> >A recent example of non-sinister change in another area is the work done\n> >to release 7.0.3. This is a release which would not have happened in\n> >previous cycles, since we are so close to beta on 7.1. But GB paid Tom\n> >Lane to work on it as part of *their* business plan, and he sheparded it\n> >through the cycle. There was no outcry from you at this presumption, and\n> >on this diversion of community resources for this effort. Not sure why,\n> >other than you chose to pick some other fight.\n> \n> There's a vast difference between releasing 7.0.3 in open source form\n> TODAY and eRServer, which may not be released in open source form for\n> up to two years after it enters the market on a closed source,\n> proprietary footing. To suggest there is no difference, as you seem to\n> be doing, is a hopelessly unconvincing argument.\n\nExcept, eRServer, the basic model, will be released Open Source, and, if\nall goes as planned, in time for inclusion in contrib of v7.1 ... \n\n\n", "msg_date": "Mon, 4 Dec 2000 12:31:34 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "> This paragraph from erserver.com:\n> eRServer development is currently concentrating on core, universal\n> functions that will enable individuals and IT professionals\n> to implement PostgreSQL ORDBMS solutions for mission critical\n> datawarehousing, datamining, and eCommerce requirements. These\n> initial developments will be published under the PostgreSQL Open\n> Source license, and made available through our sites, Certified\n> Platinum Partners, and others in PostgreSQL community.\n> led me (and many others) to believe that this was going to be a tighly\n> integrated service, requiring code in the PostgreSQL core, since that's the\n> normal use of 'core' around here.\n\n\"Around here\" isn't \"around there\" ;)\n\nAs you can see, \"core\" == \"fundamental\" in the general sense, in a\nstatement not written specifically for the hacker community but for the\nworld at large. In many cases, taking one syllable rather than four is a\ngood thing, but sorry it led to confusion.\n\nMy schedule is completely out of whack, partly from taking the afternoon\noff to cool down from the personal attacks being lobbed my direction.\n\nWill pick things up as time permits, but we should have some code for\ncontrib/ in time for beta2, if it is acceptable to the community to put\nit in there.\n\n - Thomas\n", "msg_date": "Tue, 05 Dec 2000 05:29:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Tue, Dec 05, 2000 at 05:29:36AM +0000, Thomas Lockhart wrote:\n> \n> As you can see, \"core\" == \"fundamental\" in the general sense, in a\n> statement not written specifically for the hacker community but for the\n> world at large. In many cases, taking one syllable rather than four is a\n> good thing, but sorry it led to confusion.\n\nYep, a closer re-read led me to enlightenment.\n\n> \n> My schedule is completely out of whack, partly from taking the afternoon\n> off to cool down from the personal attacks being lobbed my direction.\n\nI'm sorry about that. 
I hope the part of this thread that I helped start\ndidn't contribute too much to your distress. Had I realized at the time \nthat there was _no_ pgsql core work involved, I would have been less\ndistressed myself by the time slip. With beta on the way, I was concerned\nthat it wouldn't get in until the 7.2 tree opened.\n\n> \n> Will pick things up as time permits, but we should have some code for\n> contrib/ in time for beta2, if it is acceptable to the community to put\n> it in there.\n> \n\nI'm of the 'contrib is for stuff that doesn't even necessarily currently\nbuild' school, although I appreciate the work that's been done to reverse\nthe bit rot. Drop it in at any time, as far as I'm concerned.\n\nRoss\n", "msg_date": "Tue, 5 Dec 2000 11:39:13 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sunday 03 December 2000 04:00, Vadim Mikheev wrote:\n> > There is risk here. It isn't so much in the fact that PostgreSQL, Inc\n> > is doing a couple of modest closed-source things with the code. After\n> > all, the PG community has long acknowledged that the BSD license would\n> > allow others to co-opt the code and commercialize it with no obligations.\n> >\n> > It is rather sad to see PG, Inc. take the first step in this direction.\n> >\n> > How long until the entire code base gets co-opted?\n>\n> I totally missed your point here. How closing source of ERserver is related\n> to closing code of PostgreSQL DB server? Let me clear things:\n>\n> 1. ERserver isn't based on WAL. It will work with any version >= 6.5\n>\n> 2. WAL was partially sponsored by my employer, Sectorbase.com,\n> not by PG, Inc.\n\nHas somebody thought about putting PG in the GPL licence instead of the BSD? \nPG inc would still be able to make their money giving support (just like IBM, \nHP and Compaq are doing their share with Linux), without being able to close \nthe code.\n\nOnly a thought... \n\nSaludos... :-)\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Tue, 5 Dec 2000 16:23:40 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sunday 03 December 2000 12:41, mlw wrote:\n> Thomas Lockhart wrote:\n> > As soon as you find a business model which does not require income, let\n> > me know. The .com'ers are trying it at the moment, and there seems to be\n> > a few flaws... ;)\n>\n> While I have not contributed anything to Postgres yet, I have\n> contributed to other environments. The prospect that I could create a\n> piece of code, spend weeks/years of my own time on something and some\n> entity can come along, take what I've written and create a product which\n> is better for it, and then not share back is offensive. Under GPL it is\n> illegal. (Postgres should try to move to GPL)\n\nWith you on the last statement.\n\n> I am working on a full-text search engine for Postgres. A really fast\n> one, something better than anything else out there. 
It combines the\n> power and scalability of a web search engine, with the data-mining\n> capabilities of SQL.\n\nIf you want to make something GPL I would be more than interested to help \nyou. We could use something like that over here, and I have no problem at all \nwith releasing it as GPL code.\n\n> If I write this extension to Postgres, and release it, is it right that\n> a business can come along, add a few things here and there and introduce\n> a new closed source product on what I have written? That is certainly\n> not what I intend. My intention was to honor the people before me for\n> providing the rich environment which is Postgres. I have made real money\n> using Postgres in a work environment. The time I would give back more\n> than covers MSSQL/Oracle licenses.\n\nI'm not sure, but you could introduce a piece of GPL code in the BSD code, \nbut the result would have to be GPL.\n\nHoping to hear from you, \n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Tue, 5 Dec 2000 16:34:28 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Sunday 03 December 2000 21:49, The Hermit Hacker wrote:\n>\n> I've been trying to follow this thread, and seem to have missed where\n> someone arrived at the conclusion that we were proprietarizing(word?) this\n\nI have missed that part as well.\n\n> ... we do apologize that it didn't get out mid-October, but it is/was\n> purely a schedule slip ...\n\nI would never say something about schedules of OSS. Let it be in BSD or GPL \nlicense.\n\nSaludos... :-)\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Tue, 5 Dec 2000 16:58:24 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Tuesday 05 December 2000 16:23, Martin A. Marques wrote:\n>\n> Has somebody thought about putting PG in the GPL licence instead of the\n> BSD? PG inc would still be able to make their money giving support (just like\n> IBM, HP and Compaq are doing their share with Linux), without being able to\n> close the code.\n\nI shouldn't be answering myself, but I just got to the end of the thread \n(exams got on me the last 2 days), so I want to apologize for sending this \nmail (even if it reflects what my feelings are) without reading the other \nmails in the thread.\n\nSorry\n\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Tue, 5 Dec 2000 17:21:38 -0300", "msg_from": "\"Martin A. 
Marques\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Tue, 5 Dec 2000, Martin A. Marques wrote:\n\n> On Sunday 03 December 2000 04:00, Vadim Mikheev wrote:\n> > > There is risk here. It isn't so much in the fact that PostgreSQL, Inc\n> > > is doing a couple of modest closed-source things with the code. After\n> > > all, the PG community has long acknowleged that the BSD license would\n> > > allow others to co-op the code and commercialize it with no obligations.\n> > >\n> > > It is rather sad to see PG, Inc. take the first step in this direction.\n> > >\n> > > How long until the entire code base gets co-opted?\n> >\n> > I totaly missed your point here. How closing source of ERserver is related\n> > to closing code of PostgreSQL DB server? Let me clear things:\n> >\n> > 1. ERserver isn't based on WAL. It will work with any version >= 6.5\n> >\n> > 2. WAL was partially sponsored by my employer, Sectorbase.com,\n> > not by PG, Inc.\n> \n> Has somebody thought about putting PG in the GPL licence instead of the BSD? \n\nits been brought up and rejected continuously ... in some of our opinions,\nGPL is more harmful then helpful ... as has been said before many times,\nand I'm sure will continue to be said \"changing the license to GPL is a\nnon-discussable issue\" ...\n\n\n", "msg_date": "Tue, 5 Dec 2000 17:03:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "The Hermit Hacker wrote:\n> its been brought up and rejected continuously ... in some of our opinions,\n> GPL is more harmful then helpful ... as has been said before many times,\n> and I'm sure will continue to be said \"changing the license to GPL is a\n> non-discussable issue\" ...\n\nI've declined commenting on this thread until now -- but this statement\nbears amplification. \n\nGPL is NOT the be-all end-all Free Software (in the FSF/GNU sense!)\nlicense. There is room for more than one license -- just as there is\nroom for more than one OS, more than one Unix, more than one Free RDBMS,\nmore than one Free webserver, more than one scripting language, more\nthan one compiler system, more than one Linux distribution, more than\none BSD, and more than one CPU architecture.\n\nWhy make a square peg development group fit a round peg license? :-) \nUse a round peg for round holes, and a square peg for square holes.\n\nChoice of license for PostgreSQL is not negotiable. I don't say that as\nan edict from Lamar Owen (after all, I am in no position to edict\nanything :-)) -- I say that as a studied observation of the last times\nthis subject has come up.\n\nI personally prefer GPL. But my personal preference and what is good\nfor the project are two different things. BSD is good for this project\nwith this group of developers -- and it should not change.\n\nAnd, like any other open development effort, there will be missteps --\nwhich missteps should, IMHO, be put behind us. No software is perfect;\nno development team is, either.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 05 Dec 2000 16:45:44 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Tuesday 05 December 2000 18:03, The Hermit Hacker wrote:\n> >\n> > Has somebody thought about putting PG in the GPL licence instead of the\n> > BSD?\n>\n> its been brought up and rejected continuously ... 
in some of our opinions,\n> GPL is more harmful than helpful ... as has been said before many times,\n> and I'm sure will continue to be said \"changing the license to GPL is a\n> non-discussable issue\" ...\n\nI've declined commenting on this thread until now -- but this statement\nbears amplification. \n\nGPL is NOT the be-all end-all Free Software (in the FSF/GNU sense!)\nlicense. There is room for more than one license -- just as there is\nroom for more than one OS, more than one Unix, more than one Free RDBMS,\nmore than one Free webserver, more than one scripting language, more\nthan one compiler system, more than one Linux distribution, more than\none BSD, and more than one CPU architecture.\n\nWhy make a square peg development group fit a round peg license? :-) \nUse a round peg for round holes, and a square peg for square holes.\n\nChoice of license for PostgreSQL is not negotiable. I don't say that as\nan edict from Lamar Owen (after all, I am in no position to edict\nanything :-)) -- I say that as a studied observation of the last times\nthis subject has come up.\n\nI personally prefer GPL. But my personal preference and what is good\nfor the project are two different things. BSD is good for this project\nwith this group of developers -- and it should not change.\n\nAnd, like any other open development effort, there will be missteps --\nwhich missteps should, IMHO, be put behind us. No software is perfect;\nno development team is, either.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 05 Dec 2000 16:45:44 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "On Tuesday 05 December 2000 18:03, The Hermit Hacker wrote:\n> >\n> > Has somebody thought about putting PG in the GPL licence instead of the\n> > BSD?\n>\n> it's been brought up and rejected continuously ... in some of our opinions,\n> GPL is more harmful than helpful ... as has been said before many times,\n> and I'm sure will continue to be said \"changing the license to GPL is a\n> non-discussable issue\" ...\n\nIt's pretty clear to me, and I respect the decision (I really do).\n\n-- \n\"And I'm happy, because you make me feel good, about me.\" - Melvin Udall\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \[email protected]\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Tue, 5 Dec 2000 19:06:57 -0300", "msg_from": "\"Martin A. Marques\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Regardless of what license is best, could the license even be changed now? I\nmean, some of the initial Berkeley code is still in there in some sense and\nI would think that the original license (BSD I assume) of the initial source\ncode release would have to be somehow honored.. I'm just wondering if the PG\nteam could change the license even if they wanted to.. I should go read the\nlicense again, I know the answer to the above is in there but it's been a\nlong time since I've looked it over and I'm in the middle of packing, so I\nhaven't got the time right now.. Thanks to anyone for satisfying my\ncuriosity in answering this question.\n\nI think that it's very, very good if the license is indeed untouchable, it\nkeeps PostgreSQL from becoming totally closed-source and/or totally\ncommercial.. Obviously things can be added to PG and sold commercially, but\nthere will always be the base PostgreSQL out there for everyone...... I\nhope.\n\nJust my $0.02 worth..\n\n-Mitch\n\n\n----- Original Message -----\nFrom: \"Lamar Owen\" <[email protected]>\nTo: \"PostgreSQL Development\" <[email protected]>\nSent: Tuesday, December 05, 2000 1:45 PM\nSubject: Re: [HACKERS] beta testing version\n\n\n> The Hermit Hacker wrote:\n> > it's been brought up and rejected continuously ... in some of our\nopinions,\n> > GPL is more harmful than helpful ... as has been said before many times,\n> > and I'm sure will continue to be said \"changing the license to GPL is a\n> > non-discussable issue\" ...\n>\n> I've declined commenting on this thread until now -- but this statement\n> bears amplification.\n>\n> GPL is NOT the be-all end-all Free Software (in the FSF/GNU sense!)\n> license. There is room for more than one license -- just as there is\n> room for more than one OS, more than one Unix, more than one Free RDBMS,\n> more than one Free webserver, more than one scripting language, more\n> than one compiler system, more than one Linux distribution, more than\n> one BSD, and more than one CPU architecture.\n>\n> Why make a square peg development group fit a round peg license? :-)\n> Use a round peg for round holes, and a square peg for square holes.\n>\n> Choice of license for PostgreSQL is not negotiable. I don't say that as\n> an edict from Lamar Owen (after all, I am in no position to edict\n> anything :-)) -- I say that as a studied observation of the last times\n> this subject has come up.\n>\n> I personally prefer GPL. But my personal preference and what is good\n> for the project are two different things. 
BSD is good for this project\n> with this group of developers -- and it should not change.\n>\n> And, like any other open development effort, there will be missteps --\n> which missteps should, IMHO, be put behind us. No software is perfect;\n> no development team is, either.\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n>\n\n", "msg_date": "Tue, 5 Dec 2000 15:13:33 -0800", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Mitch Vincent wrote:\n> \n> Regardless of what license is best, could the license even be changed now? I\n> mean, some of the initial Berkeley code is still in there in some sense and\n> I would think that the original license (BSD I assume) of the initial source\n> code release would have to be somehow honored.. I'm just wondering if the PG\n> team could change the license even if they wanted to.. I should go read the\n> license again, I know the answer to the above is in there but it's been a\n\n_Every_single_ copyright holder of code in the core server would have to\nagree to any change.\n\nNot a likely event.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 05 Dec 2000 18:27:37 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n\n> Mitch Vincent wrote:\n> > \n> > Regardless of what license is best, could the license even be changed now? I\n> > mean, some of the initial Berkeley code is still in there in some sense and\n> > I would think that the original license (BSD I assume) of the initial source\n> > code release would have to be somehow honored.. I'm just wondering if the PG\n> > team could change the license even if they wanted to.. I should go read the\n> > license again, I know the answer to the above is in there but it's been a\n> \n> _Every_single_ copyright holder of code in the core server would have to\n> agree to any change.\n\nNo - GPL projects can include BSD-copyrighted code, no problem\nthere. That being said, creating bad blood is not a good thing, so an\napproach like this would hurt PostgreSQL a lot.\n \n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "05 Dec 2000 18:43:44 -0500", "msg_from": "[email protected] (Trond Eivind =?iso-8859-1?q?Glomsr=d8d?=)", "msg_from_op": false, "msg_subject": "Re: beta testing version" }, { "msg_contents": "\"Martin A. Marques\" wrote:\n> \n> Has somebody thought about putting PG in the GPL licence instead of the BSD?\n\nIt is somewhat difficult to put other people's code under some different\nlicense.\n\nAnd AFAIK (IANAL) the old license would still apply, too, for all the code\nthat \nhas been released under it.\n\n> PG inc would still be able to make their money giving support (just like IBM,\n> HP and Compaq are doing their share with Linux), without being able to close\n> the code.\n\nPG inc would also be able to make money selling dairy products (as they\nseem \nto employ some smart people, and smart people, if in need, are usually\nable to \nmake the money they need). 
\n\nBut I suspect that the farther away from developing postgres(-related)\nproducts \nthey have to look for making a living, the worse the results are for\nPostgreSQL.\n\n> Only a thought...\n\nYou can always license _your_ contributions under whatever license you\nchoose -\nGPL, LGPL,MPL, SCL or even a shrink-wrap, open-the-wrap-before-reading\nlicense \nthat demands users to forfeit their firstborn child for even looking at\nthe product.\n\n From what I have read on this list (I guess) it may be unsafe for you \nto release something in public domain (in US at least), as you are then\nunable to claim yourself not liable for your users' blunders (akin to \nleaving a loaded gun on a parkbench)\n\n----------\nHannu\n", "msg_date": "Wed, 06 Dec 2000 02:25:22 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version (not really anymore ;)" }, { "msg_contents": "Trond Eivind Glomsr�d wrote:\n> Lamar Owen <[email protected]> writes:\n> > Mitch Vincent wrote:\n> > > code release would have to be somehow honored.. I'm just wondering if the PG\n> > > team could change the license even if they wanted to.. I should go read the\n\n> > _Every_single_ copyright holder of code in the core server would have to\n> > agree to any change.\n \n> No - GPL projects can include BSD-copyrighted code, no problem\n> there. That being said, creating bad blood is not a good thing, so an\n> approach like this would hurt PostgreSQL a lot.\n\nWell, in actuality, the original code from PostgreSQL would still be\nBSD-licensed and would be immune to infection from the GPL 'virus'. See\nrms' comments on the Vista software package -- that package is public\ndomain, and the original code will always be public domain.\n\nTo get the 'original' code relicensed would require the consent of every\ndeveloper.\n\nOf course, the BSD license allows redistribution under virtually any\nlicense -- but said redistribution doesn't affect the original in any\nway.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\nPS: is there a difference in the '�' you used in your e-mail this time\nand the '�' that used to be present? I'm ignorant of that letter's\nusage.\n", "msg_date": "Tue, 05 Dec 2000 19:47:03 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: beta testing version" } ]
[ { "msg_contents": "\n> If the system were capable of determining that either rule1 or rule2\n> condition will always hold, perhaps it could deduce that the original\n> query on the view will never be applied. However, I doubt that we\n> really want to let loose an automated theorem prover on the results\n> of every rewrite ...\n\nYes, a theorem prover is way too complex, and can not cover \nthe case where the application guards against the \"apply original query\" case.\n\nWould it be possible to push the elog down to the heap access,\nand only throw the elog if a heap access is actually about to be performed\non a view ?\n\nAndreas\n", "msg_date": "Fri, 1 Dec 2000 10:13:40 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Re: [SQL] Rules with Conditions: Bug, or Misunderst\n\tanding" } ]
[ { "msg_contents": "Hello,\n\nPlease , excuse me for my bad english.\n\nOne question on bitmaps index. In them Commercial data bases (oracle DB2),\nLet bitmap type index is supported.This index is used for fields of type sex or Boolean generally, would be it(he)\nsupported in postgres??? If not is foreseen it??? \n\nBest regards\n\nPEJAC Pascal\n", "msg_date": "Fri, 1 Dec 2000 17:17:36 +0100 (CET)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap index" } ]
[ { "msg_contents": "> Ok, this has peaked my interest in learning exactly what WAL \n> is and what it does... I don't see any in-depth explanation\n> of WAL on the postgresql.org site, can someone point me to\n> some documentation? (if any exists, that is).\n\nWAL (Write Ahead Log) is standard technique described in,\nI think, any book about transaction processing.\n\nVadim\n", "msg_date": "Fri, 1 Dec 2000 11:50:26 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL information" } ]
[ { "msg_contents": "I've just noticed that COPY BINARY is pretty thoroughly broken by TOAST,\nbecause what it does is to dump out verbatim the bytes making up each\ntuple of the relation. In the case of a moved-off value, you'll get\nthe toast reference, which is not going to be too helpful for reloading\nthe table data. In the case of a compressed-in-line datum, you'll at\nleast have all the data there, but the COPY BINARY reader will crash\nand burn when it sees it.\n\nFixing this while retaining backwards compatibility with the existing\nCOPY BINARY file format is possible, but it seems rather a headache:\nwe'd need to detoast all the toasted columns, then heap_formtuple a\nnew tuple containing the expanded data, and finally write that out.\n(Can't do it on a field-by-field basis because the file format requires\nthe total tuple size to precede the tuple data.) Kind of ugly.\n\nThe existing COPY BINARY file format is entirely brain-dead anyway; for\nexample, it wants the number of tuples to be stored at the front, which\nmeans we have to scan the whole relation an extra time to get that info.\nIts handling of nulls is bizarre, too. I'm thinking this might be a\ngood time to abandon backwards compatibility and switch to a format\nthat's a little easier to read and write. Does anyone have an opinion\npro or con about that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Dec 2000 17:35:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "COPY BINARY is broken..." }, { "msg_contents": "* Tom Lane <[email protected]> [001201 14:42] wrote:\n> I've just noticed that COPY BINARY is pretty thoroughly broken by TOAST,\n> because what it does is to dump out verbatim the bytes making up each\n> tuple of the relation. In the case of a moved-off value, you'll get\n> the toast reference, which is not going to be too helpful for reloading\n> the table data. In the case of a compressed-in-line datum, you'll at\n> least have all the data there, but the COPY BINARY reader will crash\n> and burn when it sees it.\n> \n> Fixing this while retaining backwards compatibility with the existing\n> COPY BINARY file format is possible, but it seems rather a headache:\n> we'd need to detoast all the toasted columns, then heap_formtuple a\n> new tuple containing the expanded data, and finally write that out.\n> (Can't do it on a field-by-field basis because the file format requires\n> the total tuple size to precede the tuple data.) Kind of ugly.\n> \n> The existing COPY BINARY file format is entirely brain-dead anyway; for\n> example, it wants the number of tuples to be stored at the front, which\n> means we have to scan the whole relation an extra time to get that info.\n> Its handling of nulls is bizarre, too. I'm thinking this might be a\n> good time to abandon backwards compatibility and switch to a format\n> that's a little easier to read and write. Does anyone have an opinion\n> pro or con about that?\n\nBINARY COPY scared the bejeezus out of me, anyone using the interface\nis asking for trouble and supporting it seems like a nightmare, I\nwould rip it out.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Fri, 1 Dec 2000 14:54:26 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY BINARY is broken..." 
}, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n> I would rip it out.\n\nI thought about that too, but was afraid to suggest it ;-)\n\nHow many people are actually using COPY BINARY?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Dec 2000 17:56:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY BINARY is broken... " }, { "msg_contents": "* Tom Lane <[email protected]> [001201 14:57] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > I would rip it out.\n> \n> I thought about that too, but was afraid to suggest it ;-)\n\nI think you'd agree that you have more fun and important things to\ndo than to deal with this yucky interface. :)\n\n> How many people are actually using COPY BINARY?\n\nI'm not using it. :)\n\nHow about adding COPY XML?\n\n\n\n\n\n\n\n\n\n\n\n(kidding of course about the XML, but it would make postgresql more\nbuzzword compliant :) )\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Fri, 1 Dec 2000 15:05:11 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY BINARY is broken..." }, { "msg_contents": "At 03:05 PM 12/1/00 -0800, Alfred Perlstein wrote:\n\n>How about adding COPY XML?\n>(kidding of course about the XML, but it would make postgresql more\n>buzzword compliant :) )\n\nHey, we could add a parser and call the module MyXML ...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 01 Dec 2000 15:47:58 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY BINARY is broken..." }, { "msg_contents": "On Fri, Dec 01, 2000 at 05:56:57PM -0500, Tom Lane wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > I would rip it out.\n> \n> I thought about that too, but was afraid to suggest it ;-)\n> \n> How many people are actually using COPY BINARY?\n> \nI have used it, I don't think I'm actually using right now. But, it was \nvery handy. (Once I finally figured out the format through trial and error,\nouch!) It is very nice to be able to just dump tuples in, instead of having\nto format them to text, then the database has to put them back to binary\nagain. So, an alternative with a clean interface would be very much\nappreciated.\n\n", "msg_date": "Sat, 2 Dec 2000 16:09:17 -0800", "msg_from": "Samuel Sieb <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY BINARY is broken..." }, { "msg_contents": "> > Its handling of nulls is bizarre, too. I'm thinking this might be a\n> > good time to abandon backwards compatibility and switch to a format\n> > that's a little easier to read and write. Does anyone have an opinion\n> > pro or con about that?\n> \n> BINARY COPY scared the bejeezus out of me, anyone using the interface\n> is asking for trouble and supporting it seems like a nightmare, I\n> would rip it out.\n\nTom, just keep in mind that the format is documented in copy.sgml.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 10 Dec 2000 15:52:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY BINARY is broken..." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>>>> Its handling of nulls is bizarre, too. I'm thinking this might be a\n>>>> good time to abandon backwards compatibility and switch to a format\n>>>> that's a little easier to read and write. Does anyone have an opinion\n>>>> pro or con about that?\n>> \n>> BINARY COPY scared the bejeezus out of me, anyone using the interface\n>> is asking for trouble and supporting it seems like a nightmare, I\n>> would rip it out.\n\n> Tom, just keep in mind that the format is documented in copy.sgml.\n\nNot documented *correctly*, I notice. There are at least two errors,\nplus the rather major omission that <tuple data> is not explained.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 10 Dec 2000 17:34:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY BINARY is broken... " }, { "msg_contents": "\nTom, this is fixed, right?\n\n> I've just noticed that COPY BINARY is pretty thoroughly broken by TOAST,\n> because what it does is to dump out verbatim the bytes making up each\n> tuple of the relation. In the case of a moved-off value, you'll get\n> the toast reference, which is not going to be too helpful for reloading\n> the table data. In the case of a compressed-in-line datum, you'll at\n> least have all the data there, but the COPY BINARY reader will crash\n> and burn when it sees it.\n> \n> Fixing this while retaining backwards compatibility with the existing\n> COPY BINARY file format is possible, but it seems rather a headache:\n> we'd need to detoast all the toasted columns, then heap_formtuple a\n> new tuple containing the expanded data, and finally write that out.\n> (Can't do it on a field-by-field basis because the file format requires\n> the total tuple size to precede the tuple data.) Kind of ugly.\n> \n> The existing COPY BINARY file format is entirely brain-dead anyway; for\n> example, it wants the number of tuples to be stored at the front, which\n> means we have to scan the whole relation an extra time to get that info.\n> Its handling of nulls is bizarre, too. I'm thinking this might be a\n> good time to abandon backwards compatibility and switch to a format\n> that's a little easier to read and write. Does anyone have an opinion\n> pro or con about that?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 Jan 2001 22:46:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY BINARY is broken..." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, this is fixed, right?\n\nYes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 22 Jan 2001 23:20:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY BINARY is broken... " } ]
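For context on the interface this thread picks apart, this is roughly how it is driven; the table name and path are made up, and the statement form follows the 7.x COPY [ BINARY ] syntax:

    -- Dump tuples verbatim in the internal binary format -- the layout
    -- whose leading tuple count and odd null handling are criticized
    -- above:
    COPY BINARY mytable TO '/tmp/mytable.dat';

    -- Reload. With TOAST, a moved-off value gets dumped as a toast
    -- reference rather than the data itself, which is exactly the
    -- breakage Tom describes:
    COPY BINARY mytable FROM '/tmp/mytable.dat';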
[ { "msg_contents": "> The existing COPY BINARY file format is entirely brain-dead \n> anyway; for example, it wants the number of tuples to be stored\n> at the front, which means we have to scan the whole relation an\n> extra time to get that info. Its handling of nulls is bizarre, too.\n> I'm thinking this might be a good time to abandon backwards\n> compatibility and switch to a format that's a little easier to read\n> and write. Does anyone have an opinion pro or con about that?\n\nSwitch to new format.\n\nVadim\n", "msg_date": "Fri, 1 Dec 2000 14:47:48 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: COPY BINARY is broken..." }, { "msg_contents": "Hi,\n\n\tI would very much like some way of writing binary data to a database.\nCopy binary recently broke on me after upgrading to 7.0. I have large\nsimulation codes and writing lots of floats to the database by\nconverting them to text first is 1) a real pain, 2) slow and 3) can lead\nto unexpected loss in precision. \n\nI think binary writes would actually be solved better and safer through\nsome type of CORBA interface, but previous discussions seemed to\nindicate that that is even more of a pain than fixing the current binary\ninterface.\n\nSo I agree that the current version is a problem, but I do think\nsomething needs to be put in place. Not everybody only writes a few\nnumbers from a web page into the database -- some have masses of data to\ndump into a database. For all I care it doesn't even have to look like\nSQL, but can be purely accessible through libpq.\n\nAdriaan\n", "msg_date": "Sun, 03 Dec 2000 18:44:29 +0200", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY BINARY is broken..." }, { "msg_contents": "Adriaan Joubert <[email protected]> writes:\n> Copy binary recently broke on me after upgrading to 7.0.\n\nI think you're talking about binary copy via the frontend, which has a\ndifferent set of problems. To fix that, we need to make some protocol\nchanges, which would (preferably) also apply to non-binary frontend\ncopy, which would create a compatibility problem. (The reason the\nprotocol is broken is there's no reasonable way to find or signal the\nend of the COPY data stream after an error.)\n\nI think that's worth doing, but there's no time to design and implement\nit for 7.1. Maybe for 7.2.\n\n> I think binary writes would actually be solved better and safer through\n> some type of CORBA interface,\n\nCORBA would provide a more machine-independent interface, but migrating\nto CORBA would be a huge task, and I'm not sure the payoff is worth\nit...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Dec 2000 12:10:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY BINARY is broken... " } ]
[ { "msg_contents": "> Alfred Perlstein <[email protected]> writes:\n> > I would rip it out.\n> \n> I thought about that too, but was afraid to suggest it ;-)\n> \n> How many people are actually using COPY BINARY?\n\nIt could be useful if only single scan would be required.\nBut I have no strong opinion about keeping it.\n\nVadim\n", "msg_date": "Fri, 1 Dec 2000 14:50:07 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: COPY BINARY is broken... " } ]
[ { "msg_contents": "I am curious as to where the newest ODBC driver source is -- I retrieved\n/src/interfaces/odbc from CVS, but it appeared to only be version\n6.40.0009 and was lacking the Visual C++ workspace/project files that\nwere in the 6.50.0000 release zip file on the FTP server. \n\nThanks\n\nMichael Fork - CCNA - MCP - A+\nNetwork Support - Toledo Internet Access - Toledo Ohio\n\n", "msg_date": "Fri, 1 Dec 2000 19:24:05 -0500 (EST)", "msg_from": "Michael Fork <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC Driver" } ]
[ { "msg_contents": "\nI am working on an implementation of 'ALTER FUNCTION' and have run into a \nproblem.\n\nplpgsql. plperl and pltcl all cache the result of a compile of prosrc.\nWhich leads to things like:\n\nmhh=# create function f() returns integer as 'begin return 42; end;' language \n'plpgsql';\nCREATE\nmhh=# select f();\n f \n----\n 42\n(1 row)\n\nmhh=# alter function f() as 'begin return 44; end;';\nALTER\nmhh=# select f();\n f \n----\n 42\n(1 row)\n\nmhh=# select proname, prosrc from pg_proc where proname = 'f';\n proname | prosrc \n---------+-----------------------\n f | begin return 44; end;\n\nOf course, leaving psql and re-entering fixes the problem. But the same \nproblem is manifested between concurrent sessions as well.\n\nI would like to propose that a new attribute be added to pg_proc 'proserial'. \n'CREATE FUNCTION' will set proserial to 0. 'ALTER FUNCTION' will increment it\neach time. It would be up to the individual PL handlers to check to make sure \nthat their cache is not out of date.\n\nIs there a better way to solve this problem?\n\n\n-- \nMark Hollomon\n", "msg_date": "Fri, 1 Dec 2000 21:38:01 -0500", "msg_from": "Mark Hollomon <[email protected]>", "msg_from_op": true, "msg_subject": "ALTER FUNCTION problem" }, { "msg_contents": "Mark Hollomon <[email protected]> writes:\n> plpgsql. plperl and pltcl all cache the result of a compile of prosrc.\n\nplpgsql does, but I didn't think the other two do.\n\n> I would like to propose that a new attribute be added to pg_proc\n> 'proserial'. 'CREATE FUNCTION' will set proserial to 0. 'ALTER\n> FUNCTION' will increment it each time. It would be up to the\n> individual PL handlers to check to make sure that their cache is not\n> out of date.\n\nThis is completely inadequate for plpgsql, if not for the others,\nbecause plpgsql also caches query plans --- which depend on more than\nthe text of the function. I don't think it's worth our time to put\nin a partial solution; we need to think about a generic cache\ninvalidation mechanism.\n\nJan Wieck has posted some comments about this, and I think there was\nalso some discussion in connection with Karel Zak's proposed cross-\nbackend query plan cache. Check the archives...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 02 Dec 2000 00:11:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER FUNCTION problem " } ]
[ { "msg_contents": "(apologies for posting directly to pgsql-hackers, but I'm asking for a\nhacker to explicitly check on the accuracy of another posting!)\n\nI've written (& submitted to pgsql-docs) a tutorial on using RI features\nand on alter the system catalog to change RI properties for existing\nrelationships.\n\nI needs polishing, etc., but, mostly it needs someone more familiar than I\nto look at the last section, on Hacking RI. All of the changes I recommend\nI've tried in my databases (pg7.0.2 and pg7.1-devel), and haven't noticed\nany problems, but if anyone has any words of warning/advice/additional\ntips, I'd appreciate it.)\n\nIt should be in today's pgsql-docs listings.\n\nThanks!\n\nJoel Burton\[email protected]\n\n", "msg_date": "Sat, 2 Dec 2000 18:07:58 -0500 (EST)", "msg_from": "Joel Burton <[email protected]>", "msg_from_op": true, "msg_subject": "RI tutorial hack reading needed" } ]
[ { "msg_contents": "I am trying to set the update and delete rules that are returned from the\nODBC driver and the spec has the following to say:\n\nSQL_NO_ACTION: If a delete of a row in the referenced table would cause a\n\"dangling reference\" in the referencing table (that is, rows in the\nreferencing table would have no counterparts in the referenced table),\nthen the update is rejected. (This action is the same as the SQL_RESTRICT\naction in ODBC 2.x.)\n\nWhat I need to know is if RI_FKey_noaction_del and RI_FKey_restrict_del\nprocedures are functionally the same. The ODBC (which I would hope\nconforms to SQL 9x) spec has 4 types of RI (CASCADE, NO_ACTION, SET_NULL,\nSET_DEFAULT), and Postgres appears to have 5 (RI_FKey_cascade_del,\nRI_FKey_noaction_del, RI_FKey_restrict_del, RI_FKey_setdefault_del,\nRI_FKey_setnull_del), which leads me to belive that restrict and noaction\nare the same thing, and the one that is used depends on what the user puts\nin the REFERENCES line.\n\nAm I correct?\n\nMichael Fork - CCNA - MCP - A+\nNetwork Support - Toledo Internet Access - Toledo Ohio\n\nOn Fri, 1 Dec 2000, Stephan Szabo wrote:\n\n> \n> It's representing a single null I believe. I'm not\n> sure if in general it's an octal or decimal number\n> but 3 digits for the value of the character.\n> \n> Stephan Szabo\n> [email protected]\n> \n> On Fri, 1 Dec 2000, Michael Fork wrote:\n> \n> > What are these characters:\n> > \n> > \\000\n> > \n> > are they 3 nulls? a null followed by 2 zeros?\n> > \n> > The reason I have been asking is that I am adding foreign key support to\n> > the ODBC driver :)\n> \n> \n\n\n", "msg_date": "Sat, 2 Dec 2000 18:27:58 -0500 (EST)", "msg_from": "Michael Fork <[email protected]>", "msg_from_op": true, "msg_subject": "RI Types" }, { "msg_contents": "At 06:27 PM 12/2/00 -0500, Michael Fork wrote:\n>I am trying to set the update and delete rules that are returned from the\n>ODBC driver and the spec has the following to say:\n>\n>SQL_NO_ACTION: If a delete of a row in the referenced table would cause a\n>\"dangling reference\" in the referencing table (that is, rows in the\n>referencing table would have no counterparts in the referenced table),\n>then the update is rejected. (This action is the same as the SQL_RESTRICT\n>action in ODBC 2.x.)\n>\n>What I need to know is if RI_FKey_noaction_del and RI_FKey_restrict_del\n>procedures are functionally the same. The ODBC (which I would hope\n>conforms to SQL 9x) spec has 4 types of RI (CASCADE, NO_ACTION, SET_NULL,\n>SET_DEFAULT), and Postgres appears to have 5 (RI_FKey_cascade_del,\n>RI_FKey_noaction_del, RI_FKey_restrict_del, RI_FKey_setdefault_del,\n>RI_FKey_setnull_del), which leads me to belive that restrict and noaction\n>are the same thing, and the one that is used depends on what the user puts\n>in the REFERENCES line.\n>\n>Am I correct?\n\n\"RESTRICT\" is a SQL3 thing, an extension to SQL92. It appears that the\nintent is that restrict should happen BEFORE the delete goes chunking\nits way through the tables, while noaction tries to delete then rolls\nback and gives an error if necessary.\n\nThe final table entries are exactly the same for the RESTRICT and NOACTION\ncases, so the semantics in the sense of the transformation that occurs on\nthe database are equivalent. \n\nCurrently, PG treats NOACTION and RESTRICT as being the same, they're\nseparated in the code with a comment to that effect, i.e. 
the code for\nNOACTION is duplicated for RESTRICT (in part to make it clear that\nin the future we might want to implement RESTRICT more efficiently if\nanyone figures out how).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sat, 02 Dec 2000 15:43:49 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RI Types" } ]
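A short sketch of how the two spellings reach those trigger procedures; the table names here are made up:

    CREATE TABLE master (id integer PRIMARY KEY);
    CREATE TABLE detail (
        master_id integer REFERENCES master (id)
            ON DELETE NO ACTION   -- wires up RI_FKey_noaction_del
    );
    -- writing ON DELETE RESTRICT instead wires up RI_FKey_restrict_del;
    -- per the discussion above, the two currently behave identically in PG.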
[ { "msg_contents": "\nAn obscure series of events seems to cause a core dump and OID\ncorruption:\n\n-- tolower function for varchar\ncreate function varchar_lower(varchar) returns varchar\n as '/usr/local/lib/pgcontains.so', 'pglower'\n language 'c'; \n\ncreate index ztables_title_ndx on ztitles ( varchar_lower (title) ) ;\n\nvacuum analyze ;\n\n{ leave }\n\nat some point come back\n\ndrop function varchar_lower (varchar) ;\n\ncreate function varchar_lower(varchar) returns varchar\n as '/usr/local/lib/pgcontains.so', 'pglower'\n language 'c'; \n\n\nand strange things start to happen.\n\n\nI realize that (and only belatedly) once I drop the function the index\nis corrupt, but it seems there are invalid oids when I try to dump the\ndatabase, and dumping some tables caused a core dump.\n\nI didn't save the data, I was in live service panic mode.\n\nI have a shared library of functions I use in Postgres and I do a drop /\ncreate for an install script. I realize this is a little indiscriminate,\nand at least unwise, but I think postgres should be able to handle this.\n\n\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Sat, 02 Dec 2000 23:03:58 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "core dump? OID/database corruption?" }, { "msg_contents": "Given the name of a table, I need to find all foreign keys in that table\nand the table/column that they refer to, along with the action to be\nperformed on update/delete. The following query works, but only when\nthere is 1 foreign key in the table, when there is more than 2 it grows\nexponentially -- which means I am missing a join. However, given my\nlimitied knowledge about the layouts of the postgres system tables, and\nthe pg_trigger not being documented on the web site, I have been unable to\nget the correct query. Is this possible, and if so, what join(s) am I\nmissing?\n\nSELECT pt.tgargs,\npt.tgnargs,\npt.tgdeferrable,\npt.tginitdeferred,\npg_proc.proname,\npg_proc_1.proname\nFROM pg_class pc,\npg_proc pg_proc,\npg_proc pg_proc_1,\npg_trigger pg_trigger,\npg_trigger pg_trigger_1,\npg_proc pp,\npg_trigger pt\nWHERE pt.tgrelid = pc.oid\nAND pp.oid = pt.tgfoid\nAND pg_trigger.tgconstrrelid = pc.oid\nAND pg_proc.oid = pg_trigger.tgfoid\nAND pg_trigger_1.tgfoid = pg_proc_1.oid\nAND pg_trigger_1.tgconstrrelid = pc.oid\nAND ((pc.relname='tblmidterm')\nAND (pp.proname LIKE '%ins')\nAND (pg_proc.proname LIKE '%upd')\nAND (pg_proc_1.proname LIKE '%del'))\n\nMichael Fork - CCNA - MCP - A+\nNetwork Support - Toledo Internet Access - Toledo Ohio\n\n", "msg_date": "Sat, 2 Dec 2000 23:22:54 -0500 (EST)", "msg_from": "Michael Fork <[email protected]>", "msg_from_op": false, "msg_subject": "SQL to retrieve FK's, Update/Delete action, etc." }, { "msg_contents": "mlw <[email protected]> writes:\n> [ drop function on which a functional index is based ]\n> and strange things start to happen.\n\nAll I get is messages like\n\tERROR: fmgr_info: function 402432: cache lookup failed\nwhich is about what I'd expect. If you've seen a coredump in\nthis situation, let's hear a more specific bug report.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 03 Dec 2000 00:49:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dump? OID/database corruption? " } ]
[ { "msg_contents": "Hi\n\nI'm compiling (not, I'm trying to compile) last version of Postgresql on\nSequent Dynix/ptx ver 4.4.7 system. Under compilation process with gcc (ver\n2.7.2 ported on dynix/pt) is reporting several errors.\n\nIf someone is ready to help me with this process please send me answer.\n\nRadek\n\n\n\n", "msg_date": "Sun, 3 Dec 2000 13:50:31 +0100", "msg_from": "\"Radek Fleks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql on dynix/ptx system" }, { "msg_contents": "Radek Fleks writes:\n\n> I'm compiling (not, I'm trying to compile) last version of Postgresql on\n> Sequent Dynix/ptx ver 4.4.7 system. Under compilation process with gcc (ver\n> 2.7.2 ported on dynix/pt) is reporting several errors.\n\nIt's not so interesting at this point to port PostgreSQL 7.0.*, given that\nPostgreSQL 7.1 should go beta sometime, er, this year. If you want to\nport 7.1 then you should be looking into the following files and/or\ndirectories for platform specific stuff:\n\nconfigure.in\nsrc/template\nsrc/makefiles\nsrc/include/port\nsrc/Makefile.shlib\nsrc/backend/port/dynloader\nsrc/include/storage/s_lock.h\nsrc/backend/storage/buffer/s_lock.c\n\nOnce you have gotten past the fact that configure will complain about your\nsystem not being supported (which you should fix in configure.in and\nre-run autoconf), showing actual compiler output will help.\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n", "msg_date": "Mon, 4 Dec 2000 18:24:29 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql on dynix/ptx system" } ]
[ { "msg_contents": "I've applied Neale Ferguson's patches for S/390 support, and some fairly\nextensive patches to repair and improve support for the OVERLAPS\noperator. I've increased coverage of this in the regression tests,\nincluding horology, so those platforms which have variants on these test\nresults will need to be evaluated and those results updated.\n\ninitdb required.\n\n - Thomas\n", "msg_date": "Sun, 03 Dec 2000 15:06:15 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Patches applied" } ]
[ { "msg_contents": "Peter Eisentraut wrote:\n> \n> mlw writes:\n> \n> > There are hundreds (thousands?) of people that have contributed to the\n> > development of Postgres, either directly with code, or beta testing,\n> > with the assumption that they are benefiting a community. Many would\n> > probably not have done so if they had suspected that what they do is\n> > used in a product that excludes them.\n> \n> With the BSD license it has always been clear that this would be possible,\n> and for as long as I've been around the core/active developers have\n> frequently reiterated that this is a desirable aspect and in fact\n> encouraged. If you don't like that, then you should have read the license\n> before using the product.\n> \n> > I have said before, open source is a social contract, not a business\n> > model.\n> \n> Well, you're free to take the PostgreSQL source and start your own \"social\n> contract\" project; but we don't do that around here.\n\nAnd you don't feel that this is a misappropriation of a public trust? I\nfeel shame for you.\n\n-- \nhttp://www.mohawksoft.com\n", "msg_date": "Sun, 03 Dec 2000 15:30:13 -0500", "msg_from": "mlw <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta testing version" } ]
[ { "msg_contents": "(I posted this yesterday, but it never appeared. Apologies if it's a \nduplicate to you.)\n\nI've written (& submitted to pgsql-docs) a tutorial on using RI \nfeatures\nand on alter the system catalog to change RI properties for existing\nrelationships.\n\nI needs polishing, etc., but, mostly it needs someone more familiar \nthan I\nto look at the last section, on Hacking RI. All of the changes I \nrecommend\nI've tried in my databases (pg7.0.2 and pg7.1-devel), and haven't \nnoticed\nany problems, but if anyone has any words of \nwarning/advice/additional\ntips, I'd appreciate it.)\n\nIt should be in today's pgsql-docs listings.\n--\nJoel Burton, Director of Information Systems -*- [email protected]\nSupport Center of Washington (www.scw.org)\n", "msg_date": "Sun, 3 Dec 2000 17:30:25 -0500", "msg_from": "\"Joel Burton\" <[email protected]>", "msg_from_op": true, "msg_subject": "RI tutorial needs tech review" } ]
[ { "msg_contents": "Hi,\n\non other RDBMS (Oracle,etc...),there is an index called bitmap index that\ngreatly improve performance compared to btree index for boolean value\n(such as for a sex value,it's either M or F),i would like to know if such\nindex will be implemented inside PostgreSQL.\n", "msg_date": "Mon, 4 Dec 2000 09:09:01 +0100 (CET)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap index" } ]
[ { "msg_contents": "i have wrote an application dealing with ean13 and ean8 type,how can i\nsubmit it ??\n\n\n", "msg_date": "Mon, 4 Dec 2000 09:09:55 +0100 (CET)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "EAN13 for postgresql" }, { "msg_contents": "On Mon, Dec 04, 2000 at 09:09:55AM +0100, [email protected] wrote:\n> i have wrote an application dealing with ean13 and ean8 type,how can i\n> submit it ??\n\nPost a link to your patches here and see if it generates some\ninterest. Some description would be nice too, what you are\nexactly trying to provide?\n\n-- \nmarko\n\n", "msg_date": "Tue, 5 Dec 2000 15:42:17 +0200", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EAN13 for postgresql" } ]
[ { "msg_contents": "\n> Today I inserted (unsigned char) casts into all the <ctype.h> function\n> calls I could find. This issue should be fixed as of current cvs.\n> Please try it again when you have time.\n\nI am a sceptic to the many casts. Would'nt the clean solution be, to use\nunsigned char througout the code ? The casts only help to avoid compiler\nwarnings or errors. They do not solve the underlying problem.\n\nAndreas \n", "msg_date": "Mon, 4 Dec 2000 10:57:05 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: broken locale in 7.0.2 without multibyte support (F\n\treeBSD 4.1-RELEASE) ?" }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n>> Today I inserted (unsigned char) casts into all the <ctype.h> function\n>> calls I could find. This issue should be fixed as of current cvs.\n>> Please try it again when you have time.\n\n> I am a sceptic to the many casts. Would'nt the clean solution be, to use\n> unsigned char througout the code ?\n\nNo; see the prior discussion.\n\n> The casts only help to avoid compiler\n> warnings or errors. They do not solve the underlying problem.\n\nYou are mistaken.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Dec 2000 09:53:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: broken locale in 7.0.2 without multibyte support (F reeBSD\n\t4.1-RELEASE) ?" } ]
[ { "msg_contents": "Hello all,\n\nI am new to postgreSQL.\nWhen I perform an action on a psql database (e.g. insert into a table),\nsome more action could be induced, via trigger firing:\n - is it possible to know at any time the exact action chain?\n - is it possible to know at any time if the control is inside a\ntrigger (and which one)?\nSorry, I tried to search in www.postgresql.org but I wasn't able to\nfind anything useful.\n\nThese questions arise because I'm trying to keep in sync two identical\npsql databases; I have audited tables and an audit trail. I'm facing the\nproblem of recognising which actions in the trail were due to a trigger\nfiring, rather than explicitly commanded.\n\nThank you in advance\nFabio\n", "msg_date": "Mon, 04 Dec 2000 14:39:34 +0200", "msg_from": "Fabio Nanni <[email protected]>", "msg_from_op": true, "msg_subject": "triggers and actions tree" } ]
[ { "msg_contents": "Hi,\n\n I'm about 99.666667% sure that the lock type choosen in the\n FOR UPDATE case (line 511 of parse_relation.c) should be\n RowExclusiveLock instead of RowShareLock. Actually I get\n \"Deadlock risk\" debug messages when selecting FOR UPDATE and\n then really UPDATE.\n\n Should I change it?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Mon, 4 Dec 2000 07:52:50 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong FOR UPDATE lock type" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> I'm about 99.666667% sure that the lock type choosen in the\n> FOR UPDATE case (line 511 of parse_relation.c) should be\n> RowExclusiveLock instead of RowShareLock. Actually I get\n> \"Deadlock risk\" debug messages when selecting FOR UPDATE and\n> then really UPDATE.\n\n> Should I change it?\n\nNot sure, but if you do change it, that's *not* the only place. I coded\nthat as RowShareLock because that was what was getting grabbed by the\nexecutor for SELECT FOR UPDATE. I believe the rewriter may need changed\nas well, since it can also be the first grabber of a lock for a rel.\n\nNote also that the docs say SELECT FOR UPDATE gets RowShareLock!\n\nThe \"deadlock risk\" message is not very bright, and I wouldn't suggest\nchanging the code just because of that. I'm not even sure I want to\nleave that check in the release version ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Dec 2000 15:35:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong FOR UPDATE lock type " } ]
[ { "msg_contents": "Judging by the information below, taken *directly* from PostgreSQL, Inc.\nwebsite, it appears that they will be releasing all code into the main\nsource code branch -- with the exception of \"Advanced Replication and\nDistributed Information capabilities\" (to which capabilities they are\nreferring is not made clear) which may remain proprietary for up to 24\nmonths \"in order to assist us in recovering development costs and continue\nto provide funding for our other Open Source contributions.\"\n\nI have interpreted this to mean that basic replication (server -> server,\nserver -> client, possibly more) will be available shortly for Postgres\n(with the release of 7.1?) and that those more advanced features will\nfollow behind. This is one of the last features that was missing from\nPostgres (along with recordset returning functions and clusters, among\nothers) that was holding it back from the enterprise market -- and I do\nnot blame PostgreSQL, Inc. one bit for withholding some of the more\nadvanced features to recoup their development costs -- it was *their time*\nand *their money* they spent developing the *product* and it must be\nrecoup'ed for projects like this to make sense in the future (who knows,\nmaybe next they will implement RS returning SP's or clusters, projects\nthat are funded with their profit off the advanced replication and\ndistributed information capabilities that they *may* withhold -- would\npeople still be whining then?)\n\nMichael Fork - CCNA - MCP - A+ \nNetwork Support - Toledo Internet Access - Toledo Ohio\n\n(http://www.pgsql.com/press/PR_5.html)\n\"At the moment we are limiting our test groups to our existing Platinum\nPartners and those clients whose requirements include these\nfeatures.\" advises Jeff MacDonald, VP of Support Services. \"We expect to\nhave the source code tested and ready to contribute to the open source\ncommunity before the middle of October. Until that time we are considering\nrequests from a number of development companies and venture capital groups\nto join us in this process.\"\n\nDavidson explains, \"These initial Replication functions are important to\nalmost every commercial user of PostgreSQL. While we've fully funded all\nof this development ourselves, we will be immediately donating these\ncapabilities to the open source PostgreSQL Global Development Project as\npart of our ongoing commitment to the PostgreSQL community.\" \n\nhttp://www.erserver.com/\neRServer development is currently concentrating on core, universal\nfunctions that will enable individuals and IT professionals to implement\nPostgreSQL ORDBMS solutions for mission critical datawarehousing,\ndatamining, and eCommerce requirements. These initial developments will be\npublished under the PostgreSQL Open Source license, and made available\nthrough our sites, Certified Platinum Partners, and others in PostgreSQL\ncommunity.\n\nAdvanced Replication and Distributed Information capabilities are also\nunder development to meet specific business and competitive requirements\nfor both PostgreSQL, Inc. and clients. Several of these enhanced\nPostgreSQL, Inc. developments may remain proprietary for up to 24 months,\nwith availability limited to clients and partners, in order to assist us\nin recovering development costs and continue to provide funding for our\nother Open Source contributions. 
\n\nOn Sun, 3 Dec 2000, Hannu Krosing wrote:\n\n> The Hermit Hacker wrote:\n> IIRC, this thread woke up on someone complaining about PostgreSQl inc\n> promising \n> to release some code for replication in mid-october and asking for\n> confirmation \n> that this is just a schedule slip and that the project is still going on\n> and \n> going to be released as open source.\n> \n> What seems to be the answer is: \"NO, we will keep the replication code\n> proprietary\".\n> \n> I have not seen this answer myself, but i've got this impression from\n> the contents \n> of the whole discussion.\n> \n> Do you know if this is the case ?\n> \n> -----------\n> Hannu\n> \n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 4 Dec 2000 08:25:54 -0500 (EST)", "msg_from": "Michael Fork <[email protected]>", "msg_from_op": true, "msg_subject": "Re: beta testing version" } ]
[ { "msg_contents": "\ni have wrote an application dealing with ean13 and ean8 type,how can i\nsubmit it ??\n", "msg_date": "Mon, 4 Dec 2000 16:26:39 +0100 (CET)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Add-on" } ]
[ { "msg_contents": "Hi,\n\non other RDBMS (Oracle,etc...),there is an index called bitmap index that\ngreatly improve performance compared to btree index for boolean value\n(such as for a sex value,it's either M or F),i would like to know if such\nindex will be implemented inside PostgreSQL.\n\nBest regards,\n\nPEJAC Pascal\n", "msg_date": "Mon, 4 Dec 2000 16:28:47 +0100 (CET)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap index" }, { "msg_contents": "On Mon, Dec 04, 2000 at 04:28:47PM +0100, [email protected] wrote:\n> \n> on other RDBMS (Oracle,etc...),there is an index called bitmap index\n> that greatly improve performance compared to btree index for boolean\n> value (such as for a sex value,it's either M or F),i would like to\n> know if such index will be implemented inside PostgreSQL.\n\nYes, please do send in your implementation for review.\n\nNathan Myers\[email protected]\n", "msg_date": "Mon, 4 Dec 2000 18:02:06 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Bitmap index" } ]
[ { "msg_contents": "Ean13 and ean8 are bar codes for european.\n\nYou can convert an ISBN or iSSN to Ean13.\nMy addon add a new type and can convert isbn to an EAN\nand calculate th key of ean. More over in few day\nadd on can store the png or jpg images of bar codes\nin blob type or \nTODO: add upc-A upc-E ean128 and other type of bar code\n\nBest regards\n\nPEJAC Pascal\n", "msg_date": "Mon, 4 Dec 2000 16:58:19 +0100 (CET)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "re : Re: Add-on" } ]
[ { "msg_contents": "posting for others who may need, hopfuly the searchable mail list works\nin the future\n\ncommenting out the define complex macro allowed me to compile on sco\n5.0.5 using udk compiler on sco, without the c++ stuff. but scos udk\nsolution breaks almost every thing else i compile on sco 5.0.5 when u\ninstall there compatability stuff they introduce 2 sets of libraries one\nfor sco one for unixware compatability, the compatability librarsy the\ngive you DO NOT replace all the shared libraies on the system, notably\n/lib/prot.so or anything under /lib has no compatable libs installed\nunder /udk/usr/lib, the udk compilers both c and c++ use the new\nlibraries by default soon as you need a libray that is not available\nunder /udk/usr/lib your screwed, there is a skunkware version of gcc but\nit passes a non existant option -b to the sco assember, and even the sco\nassebler uses the new udk libs, removing c++ on sco does not fix the\nproblem as sco says it should, since there is no binutils ported to sco\nskunkware with gas and other tools this realy sucks.\n\nnot sure if i can install my old sdk on sco 5.0.5 which was licsensed on\nsco 5.0.2, not sure if sdk and udk can co exist, and how do u manage the\nlibs dirs that are searched autmaticly, or add the -Xo option that sco\nsays will allow stuff to compile with less strict error checking -\nwinblows model stuff compiles and links may - maynot work or unknown\nproblems to test scos udk on sco openserver 5.0.5 breaks all open source\ncode, or at least most of what i want to use.\n\nadvice to future people that want to use sco open server, screw the udk,\nudk compatability\nfor those wishing to use the backwards compatability from unixware to\nsco openserver expect conflicts, and unsuportted libraries, links static\nand pray.\n\nhope like hell caldara slaps some sense into sco fast, or at least port\nbinutils, gcc then maybe u can just license the header files for 100\nbucks instead of buing crapy compiler technoweldgy for 500 bucks when\nall you want is the header files, and \n-- \nMy opinions are my own and not that of my employer even if I am self\nemployed\nTech Net, Inc. --FREE THE MACHINES-- \n651.224.2223\n627 Palace Ave. #2 [email protected] \[email protected]\nSt. Paul, MN 55102-3517 www.tnss.com \nwanted : adsl/cable modem with static ip at reasonable price\naccept-txt: us-ascii,html,pdf\naccept-dat: ascii-delimited,sql insert statments\n", "msg_date": "Mon, 04 Dec 2000 10:37:18 -0600", "msg_from": "\"Arno A. Karner\" <[email protected]>", "msg_from_op": true, "msg_subject": "update on compiling postgres on sco" } ]
[ { "msg_contents": "\n> > I am a sceptic to the many casts. Would'nt the clean \n> solution be, to use\n> > unsigned char througout the code ?\n> \n> No; see the prior discussion.\n> \n> > The casts only help to avoid compiler\n> > warnings or errors. They do not solve the underlying problem.\n> \n> You are mistaken.\n\nYou are of course correct, that they might solve the particular underlying problem,\nsorry, I did not actually read or verify the committed code.\nBut don't they in general obfuscate cases where the callee does want\nunsigned/signed chars ?\n\nMy assumption would be, that we need [un]signed char casts for library functions,\nbut we should not need them for internal code, no ? What is actually the reason \nto have them both in PostgreSQL code ?\n\nMy concern stems from a very bad experience with wrong signedness of chars\non AIX.\n\nAndreas\n", "msg_date": "Mon, 4 Dec 2000 18:08:23 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: AW: broken locale in 7.0.2 without multibyte suppor\n\tt (F reeBSD 4.1-RELEASE) ?" }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> But don't they in general obfuscate cases where the callee does want\n> unsigned/signed chars ?\n\nWell, it's ugly, but I don't think we have much choice. Seems to me\nthat changing to \"unsigned char\" throughout the backend would obfuscate\nthings *more* than coding <ctype.h> calls as\n\n\tchar\t*p;\n\t...\n\tx = tolower((unsigned char) *p);\n\nwhich is what I actually did.\n\nThere are lots of places where \"char\" variables are used that will never\nsee a <ctype.h> call. Do we institute a coding rule that plain \"char\"\nis verboten in *all* cases, whether or not they're relevant to ctype\ncalls? If not, how do we check that \"char\" is being used safely?\nAren't we likely to get compiler warnings from passing \"unsigned char *\"\nto libc functions that are declared to take plain \"char *\"?\n\nI don't think that path is an improvement over a coding rule that ctype\nfunctions must be applied to unsigned chars. IMHO the latter is less\nintrusive overall, and no harder to check for violations.\n\n> My concern stems from a very bad experience with wrong signedness of chars\n> on AIX.\n\nI agree that this is something we'll have to watch. I don't see any\ncleaner answer, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Dec 2000 15:24:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: AW: broken locale in 7.0.2 without multibyte suppor t (F\n\treeBSD 4.1-RELEASE) ?" } ]
[ { "msg_contents": "> I'm about 99.666667% sure that the lock type choosen in the\n> FOR UPDATE case (line 511 of parse_relation.c) should be\n> RowExclusiveLock instead of RowShareLock. Actually I get\n> \"Deadlock risk\" debug messages when selecting FOR UPDATE and\n> then really UPDATE.\n\nhttp://www.postgresql.org/users-lounge/docs/6.5/user/x3116.htm\n\nRowShareLock\nAcquired by SELECT FOR UPDATE and LOCK TABLE for IN ROW SHARE MODE\nstatements. \n\nConflicts with ExclusiveLock and AccessExclusiveLock modes. \n\nVadim\n", "msg_date": "Mon, 4 Dec 2000 09:51:33 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Wrong FOR UPDATE lock type" }, { "msg_contents": "Mikheev, Vadim wrote:\n> > I'm about 99.666667% sure that the lock type choosen in the\n> > FOR UPDATE case (line 511 of parse_relation.c) should be\n> > RowExclusiveLock instead of RowShareLock. Actually I get\n> > \"Deadlock risk\" debug messages when selecting FOR UPDATE and\n> > then really UPDATE.\n>\n> http://www.postgresql.org/users-lounge/docs/6.5/user/x3116.htm\n>\n> RowShareLock\n> Acquired by SELECT FOR UPDATE and LOCK TABLE for IN ROW SHARE MODE\n> statements.\n>\n> Conflicts with ExclusiveLock and AccessExclusiveLock modes.\n\n Tom,\n\n IIRC the \"Deadlock risk\" debug message is from you. I think\n it must get a little smarter. IMHO an application that want's\n to UPDATE something in a transaction but must SELECT the\n row(s) first to do it's own calculation on them, should use\n SELECT FOR UPDATE. Is that debug output really appropriate in\n this case (it raises from RowShareLock to RowExclusiveLock\n because of the UPDATE of the previous FOR UPDATE selected\n row)?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n\n", "msg_date": "Mon, 4 Dec 2000 14:18:25 -0500 (EST)", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong FOR UPDATE lock type" }, { "msg_contents": "Jan Wieck <[email protected]> writes:\n> Tom,\n> IIRC the \"Deadlock risk\" debug message is from you. I think\n> it must get a little smarter. IMHO an application that want's\n> to UPDATE something in a transaction but must SELECT the\n> row(s) first to do it's own calculation on them, should use\n> SELECT FOR UPDATE. Is that debug output really appropriate in\n> this case (it raises from RowShareLock to RowExclusiveLock\n> because of the UPDATE of the previous FOR UPDATE selected\n> row)?\n\nWell, there is a theoretical chance of deadlock --- not against other\ntransactions doing the same thing, since RowShareLock and\nRowExclusiveLock don't conflict, but you could construct deadlock\nscenarios involving other transactions that grab ShareLock or\nShareRowExclusiveLock. So I don't think it's appropriate for the\n\"deadlock risk\" check to ignore RowShareLock->RowExclusiveLock\nupgrades.\n\nBut I'm not sure the check should be enabled in production releases\nanyway. I just put it in as a quick and dirty debug check. Perhaps\nit should be under an #ifdef that's not enabled by default.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Dec 2000 16:51:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong FOR UPDATE lock type " } ]
[ { "msg_contents": "> I browsed through the CVS logs and made this list of the important\n> stuff. There's a ton of less important stuff...\n\nShall we consider this the start of the list then? I think there may be\na couple of things already mentioned in the release note stubs for 7.1\ntoo.\n\n - Thomas\n\nAdditional items:\n\nAT TIME ZONE clause for date/time types\nOVERLAPS operator support rewritten\n\nWAL --- fsync reliability without the performance hit\nTOAST --- 8K row limit is no longer significant\nouter joins (per SQL92 syntax, not Oracle's)\nsubselects in FROM clause\nviews and subselects now allow union/intersect/except, order by, limit\nviews containing grouping, aggregates, DISTINCT work now\nbit-string types work now\nfunction manager overhaul: fixes portability problems, NULL-argument\nhandling\nmemory management overhaul: prevent memory leak accumulation during\nqueries\ndrop table and rename table are now rollback-able (transaction-safe)\nextensive overhaul of configure/build mechanism\noverhaul of parameter-setting mechanisms (postmaster flags,\npostmaster.opts,\n\tetc)\nmore efficient large-object implementation\npg_dump can dump large objects now\npg_dump does the right thing with user-added objects in template1\nsupport for binding postmaster's IP socket to a virtual host name\nsupport for placing postmaster's Unix socket file elsewhere than /tmp\nkeep reference counts on syscache entries to avoid dropping still-used\nentries\nProtect against changes in LOCALE environment causing corrupted indexes\nbetter handling of unknown-type literals (default to string type more\nreadily)\ninet/cidr datatypes cleaned up\nLIKE/ESCAPE implemented, also ILIKE (case-insensitive LIKE)\naggregate-function support redesigned: only one transition function now,\n\tcleaner handling of NULLs\nSTDDEV() and VARIANCE() aggregates added\nSUM() and AVG() on integer datatypes use NUMERIC accumulators\nChild tables are now scanned by default -- ie, if foo has children then\n\tSELECT FROM foo means SELECT FROM foo*. Ditto for UPDATE and DELETE.\n\tUse SELECT FROM ONLY foo if you don't want this behavior.\nvacuum analyze does the analyze part without holding exclusive lock\n", "msg_date": "Mon, 04 Dec 2000 19:31:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: [CORE] Going Beta on Monday ...]" }, { "msg_contents": "Macaddr manufacturer table update now in SQL table\nsyslog configurability improvements.\n* Thomas Lockhart <[email protected]> [001204 13:42]:\n> > I browsed through the CVS logs and made this list of the important\n> > stuff. There's a ton of less important stuff...\n> \n> Shall we consider this the start of the list then? 
I think there may be\n> a couple of things already mentioned in the release note stubs for 7.1\n> too.\n> \n> - Thomas\n> \n> Additional items:\n> \n> AT TIME ZONE clause for date/time types\n> OVERLAPS operator support rewritten\n> \n> WAL --- fsync reliability without the performance hit\n> TOAST --- 8K row limit is no longer significant\n> outer joins (per SQL92 syntax, not Oracle's)\n> subselects in FROM clause\n> views and subselects now allow union/intersect/except, order by, limit\n> views containing grouping, aggregates, DISTINCT work now\n> bit-string types work now\n> function manager overhaul: fixes portability problems, NULL-argument\n> handling\n> memory management overhaul: prevent memory leak accumulation during\n> queries\n> drop table and rename table are now rollback-able (transaction-safe)\n> extensive overhaul of configure/build mechanism\n> overhaul of parameter-setting mechanisms (postmaster flags,\n> postmaster.opts,\n> \tetc)\n> more efficient large-object implementation\n> pg_dump can dump large objects now\n> pg_dump does the right thing with user-added objects in template1\n> support for binding postmaster's IP socket to a virtual host name\n> support for placing postmaster's Unix socket file elsewhere than /tmp\n> keep reference counts on syscache entries to avoid dropping still-used\n> entries\n> Protect against changes in LOCALE environment causing corrupted indexes\n> better handling of unknown-type literals (default to string type more\n> readily)\n> inet/cidr datatypes cleaned up\n> LIKE/ESCAPE implemented, also ILIKE (case-insensitive LIKE)\n> aggregate-function support redesigned: only one transition function now,\n> \tcleaner handling of NULLs\n> STDDEV() and VARIANCE() aggregates added\n> SUM() and AVG() on integer datatypes use NUMERIC accumulators\n> Child tables are now scanned by default -- ie, if foo has children then\n> \tSELECT FROM foo means SELECT FROM foo*. Ditto for UPDATE and DELETE.\n> \tUse SELECT FROM ONLY foo if you don't want this behavior.\n> vacuum analyze does the analyze part without holding exclusive lock\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 4 Dec 2000 13:45:42 -0600", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: [CORE] Going Beta on Monday ...]" }, { "msg_contents": "I will work on a list this week. It will have the same format as usual,\nif that is OK.\n\n> > I browsed through the CVS logs and made this list of the important\n> > stuff. There's a ton of less important stuff...\n> \n> Shall we consider this the start of the list then? 
I think there may be\n> a couple of things already mentioned in the release note stubs for 7.1\n> too.\n> \n> - Thomas\n> \n> Additional items:\n> \n> AT TIME ZONE clause for date/time types\n> OVERLAPS operator support rewritten\n> \n> WAL --- fsync reliability without the performance hit\n> TOAST --- 8K row limit is no longer significant\n> outer joins (per SQL92 syntax, not Oracle's)\n> subselects in FROM clause\n> views and subselects now allow union/intersect/except, order by, limit\n> views containing grouping, aggregates, DISTINCT work now\n> bit-string types work now\n> function manager overhaul: fixes portability problems, NULL-argument\n> handling\n> memory management overhaul: prevent memory leak accumulation during\n> queries\n> drop table and rename table are now rollback-able (transaction-safe)\n> extensive overhaul of configure/build mechanism\n> overhaul of parameter-setting mechanisms (postmaster flags,\n> postmaster.opts,\n> \tetc)\n> more efficient large-object implementation\n> pg_dump can dump large objects now\n> pg_dump does the right thing with user-added objects in template1\n> support for binding postmaster's IP socket to a virtual host name\n> support for placing postmaster's Unix socket file elsewhere than /tmp\n> keep reference counts on syscache entries to avoid dropping still-used\n> entries\n> Protect against changes in LOCALE environment causing corrupted indexes\n> better handling of unknown-type literals (default to string type more\n> readily)\n> inet/cidr datatypes cleaned up\n> LIKE/ESCAPE implemented, also ILIKE (case-insensitive LIKE)\n> aggregate-function support redesigned: only one transition function now,\n> \tcleaner handling of NULLs\n> STDDEV() and VARIANCE() aggregates added\n> SUM() and AVG() on integer datatypes use NUMERIC accumulators\n> Child tables are now scanned by default -- ie, if foo has children then\n> \tSELECT FROM foo means SELECT FROM foo*. Ditto for UPDATE and DELETE.\n> \tUse SELECT FROM ONLY foo if you don't want this behavior.\n> vacuum analyze does the analyze part without holding exclusive lock\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 10 Dec 2000 16:57:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: [CORE] Going Beta on Monday ...]" } ]
[ { "msg_contents": "Hi. Could any kind soul tell me what's amiss here. I'm trying\nto build pg7.0.3 on a friend's box - over the net.\nKind of like driving from the backseat. ;-)\n\nMy src builds but the linker barfs with:\n\nmake[2]: Leaving directory `/usr/local/postgresql-7.0.3/src/backend/utils'\ngcc -I../include -I../backend -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -o postgres access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o parser/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o ../utils/version.o -lcrypt -lnsl -ldl -lm -lutil -lncurses -export-dynamic\n/usr/lib/libdl.a(dlsym.o): In function `doit.2':\ndlsym.o(.text+0x22): undefined reference to `_dl_default_scope'\ndlsym.o(.text+0x4c): undefined reference to `_dl_default_scope'\nmake[1]: *** [postgres] Error 1\nmake[1]: Leaving directory `/usr/local/postgresql-7.0.3/src/backend'\nmake: *** [all] Error 2\n\nTIA,\nTom\n\n\n--------------------------------------------------------------------\n SVCMC - Center for Behavioral Health \n--------------------------------------------------------------------\nThomas Good tomg@ { admin | q8 } .nrnet.org\nIS Coordinator / DBA Phone: 718-354-5528 \n Fax: 718-354-5056 \n--------------------------------------------------------------------\nPowered by: PostgreSQL s l a c k w a r e FreeBSD:\n RDBMS |---------- linux The Power To Serve\n--------------------------------------------------------------------\n\n\n", "msg_date": "Mon, 4 Dec 2000 16:30:01 -0500 (EST)", "msg_from": "Thomas Good <[email protected]>", "msg_from_op": true, "msg_subject": "Debian build failing..." } ]
[ { "msg_contents": "Hi everyone,\n\nI've recently encountered a bizzare problem that manifests itself reliably\non my running copy of postgres. I have a system set up to track IPs. The \narrangement uses two mutually-exclusive buckets, one for free IPs and\nthe other for used ones. There are rules set up on the used pool to\nremove IPs from the free on insert, and re-add them on delete.\n\nThe structure of the tables is:\n\nCREATE TABLE \"ips_free\" (\n \"block_id\" int4 NOT NULL,\n \"ip\" inet NOT NULL,\n \"contact_id\" int4,\n \"alloc_type\" int4,\n PRIMARY KEY (\"block_id\", \"ip\")\n);\n\nCREATE TABLE \"ips_used\" (\n \"block_id\" int4 NOT NULL,\n \"ip\" inet NOT NULL,\n \"contact_id\" int4,\n \"alloc_type\" int4,\n PRIMARY KEY (\"block_id\", \"ip\")\n);\n\nThe applicable rule that acts on inset to ips_used is:\n\nCREATE RULE ip_allocated_rule AS \n ON INSERT \n TO ips_used\n DO DELETE FROM ips_free\n WHERE ips_free.block_id = NEW.block_id\n AND ips_free.ip = NEW.ip;\n\nWhen I tried to minimize the total number of queries in a data load, I\ntried to get the block ID (see above for the schema definition) using\nINSERT INTO ... SELECT. A query like\n\nINSERT INTO ips_used \n (\n block_id,\n ip,\n contact_id\n )\nSELECT block_id\n , ip\n , '1000'\n FROM ips_free\n WHERE ip = '10.10.10.10'\n\nsimply reutrns with \"INSERT 0 0\" and in fact removes the IP from the\nfree bucket without adding it to the USED bucket. I really can't\nexplain this behavior and I'm hoping someone can shed a little bit of\nlight on it. \n\nI am running PostgreSQL 7.0.0 on sparc-sun-solaris2.7, compiled by gcc 2.95.2\n\nThanks\n\n\nAlex\n\n\n-- \n Alex G. Perel -=- AP5081\[email protected] -=- [email protected]\n play -=- work \n\t \nDisturbed Networks - Powered exclusively by FreeBSD\n== The Power to Serve -=- http://www.freebsd.org/ \n\n", "msg_date": "Mon, 4 Dec 2000 16:49:27 -0500 (EST)", "msg_from": "Alex Perel <[email protected]>", "msg_from_op": true, "msg_subject": "INSERT INTO ... SELECT problem" }, { "msg_contents": "Alex Perel <[email protected]> writes:\n> CREATE RULE ip_allocated_rule AS \n> ON INSERT \n> TO ips_used\n> DO DELETE FROM ips_free\n> WHERE ips_free.block_id = NEW.block_id\n> AND ips_free.ip = NEW.ip;\n\n> INSERT INTO ips_used \n> (\n> block_id,\n> ip,\n> contact_id\n> )\n> SELECT block_id\n> , ip\n> , '1000'\n> FROM ips_free\n> WHERE ip = '10.10.10.10'\n\nHmm. The rule will generate a query along these lines:\n\nDELETE FROM ips_free\nFROM ips_free ipsfree2\nWHERE ips_free.block_id = ipsfree2.block_id\n AND ips_free.ip = ipsfree2.ip\n AND ipsfree2.ip = '10.10.10.10';\n\n(I'm using ipsfree2 to convey the idea of a self-join similar to\n\"SELECT FROM ips_free, ips_free ipsfree2\" ... I don't believe the\nabove is actually legal syntax for DELETE.)\n\nThis ends up deleting all your ips_free entries for ip = '10.10.10.10',\nwhich seems to be what you want ... but I think the query added by\nthe rule is executed before the actual INSERT, which leaves you with\nnothing to insert.\n\nThere's been some debate in the past about whether an ON INSERT rule\nshould fire before or after the INSERT itself. I lean to the \"after\"\ncamp myself, which would fix this problem for you. However, you are\ntreading right on the hairy edge of circular logic here. 
You might want\nto think about using a trigger rather than a rule to do the deletes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Dec 2000 00:39:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT INTO ... SELECT problem " }, { "msg_contents": "On Tue, 5 Dec 2000, Tom Lane wrote:\n\n> Hmm. The rule will generate a query along these lines:\n> \n> DELETE FROM ips_free\n> FROM ips_free ipsfree2\n> WHERE ips_free.block_id = ipsfree2.block_id\n> AND ips_free.ip = ipsfree2.ip\n> AND ipsfree2.ip = '10.10.10.10';\n> \n> (I'm using ipsfree2 to convey the idea of a self-join similar to\n> \"SELECT FROM ips_free, ips_free ipsfree2\" ... I don't believe the\n> above is actually legal syntax for DELETE.)\n> \n> This ends up deleting all your ips_free entries for ip = '10.10.10.10',\n> which seems to be what you want ... but I think the query added by\n> the rule is executed before the actual INSERT, which leaves you with\n> nothing to insert.\n> \n> There's been some debate in the past about whether an ON INSERT rule\n> should fire before or after the INSERT itself. I lean to the \"after\"\n> camp myself, which would fix this problem for you. However, you are\n> treading right on the hairy edge of circular logic here. You might want\n> to think about using a trigger rather than a rule to do the deletes.\n\nThanks for the clarification - this is kind of what I suspected as\nwell, though I really don't understand the backend well enough to have a \nclear picture. I would think that the SELECT takes place first, and the\nresults are passed to the INSERT at which time the rule fires but the results\nof the SELECT are still in memory. I'm certainly wrong, but that's kind\nof along the lines of what I was thinking would happen.\n\nIn any case, I solved the problem by splitting the SELECT off into a\nseperate query and got rid of the headaches that way.\n\nThanks\n\nAlex\n\n-- \n Alex G. Perel -=- AP5081\[email protected] -=- [email protected]\n play -=- work \n\t \nDisturbed Networks - Powered exclusively by FreeBSD\n== The Power to Serve -=- http://www.freebsd.org/ \n\n", "msg_date": "Tue, 5 Dec 2000 11:36:49 -0500 (EST)", "msg_from": "Alex Perel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INSERT INTO ... SELECT problem " } ]
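A sketch of the trigger-based alternative Tom suggests, reusing the tables from the report; the function and trigger names are made up:

    CREATE FUNCTION ips_used_ins() RETURNS opaque AS '
    BEGIN
        DELETE FROM ips_free
         WHERE ips_free.block_id = NEW.block_id
           AND ips_free.ip = NEW.ip;
        RETURN NEW;
    END;' LANGUAGE 'plpgsql';

    CREATE TRIGGER ip_allocated_trg AFTER INSERT ON ips_used
        FOR EACH ROW EXECUTE PROCEDURE ips_used_ins();

Because the trigger fires per row after the INSERT itself, an INSERT INTO ... SELECT no longer deletes its own source rows before reading them.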
[ { "msg_contents": "\t*snip*\n> > \n> > Once all the questions regarding \"why not\" have been answered, it would\n> > be good to also ask \"why use threads?\" Do they simplify the code? Do\n> > they offer significant performance or efficiency gains? What do they\n> > give, other than being buzzword compliant?\n> \n\tThe primary advantage that I see is that a single postgres process\ncan benefit from multiple processors. I see little advantage to using thread\nfor client connections.\n", "msg_date": "Mon, 4 Dec 2000 17:48:36 -0600 ", "msg_from": "Matthew <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Using Threads?" }, { "msg_contents": "Matthew wrote:\n> The primary advantage that I see is that a single postgres process\n> can benefit from multiple processors. I see little advantage to using thread\n> for client connections.\n\nMultiprocessors best benefit multiple backends. And the current forked\nmodel lends itself admirably to SMP.\n\nAnd I say that even after using a multithreaded webserver (AOLserver)\nfor three and a half years. Of course, AOLserver also sanely uses the\nmulti process PostgreSQL backends in a pooled fashion, but that's beside\nthe point.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 04 Dec 2000 20:18:44 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" } ]
[ { "msg_contents": "There ya go, I figured it out :) Given the name a table, this query will\nreturn all foreign keys in that table, the table the primary key is in,\nthe name of the primary key, if the are deferrable, if the are initially\ndeffered, and the action to be performed (RESTRICT, SET NULL, etc.). To\nget the foreign keys and primary keys and tables, you must parse the\nnull-terminated pg.tgargs.\n\nWhen I get the equivalent query working for primary keys I will send it\nyour way -- or if you beat me to it, send it my way (I am working on some\nmissing functionality from the ODBC driver)\n\nMichael Fork - CCNA - MCP - A+\nNetwork Support - Toledo Internet Access - Toledo Ohio\n\nSELECT pt.tgargs,\npt.tgnargs,\npt.tgdeferrable,\npt.tginitdeferred,\npg_proc.proname,\npg_proc_1.proname\nFROM pg_class pc,\npg_proc pg_proc,\npg_proc pg_proc_1,\npg_trigger pg_trigger,\npg_trigger pg_trigger_1,\npg_proc pp,\npg_trigger pt\nWHERE pt.tgrelid = pc.oid\nAND pp.oid = pt.tgfoid\nAND pg_trigger.tgconstrrelid = pc.oid\nAND pg_proc.oid = pg_trigger.tgfoid\nAND pg_trigger_1.tgfoid = pg_proc_1.oid\nAND pg_trigger_1.tgconstrrelid = pc.oid\nAND ((pc.relname='<<FOREIGN TABLE>>')\nAND (pp.proname LIKE '%%ins')\nAND (pg_proc.proname LIKE '%%upd')\nAND (pg_proc_1.proname LIKE '%%del')\nAND (pg_trigger.tgrelid=pt.tgconstrrelid)\nAND (pg_trigger_1.tgrelid = pt.tgconstrrelid)) \n\n\nOn Tue, 5 Dec 2000, Christopher Kings-Lynne wrote:\n\n> Hi Michael,\n> \n> I am on the phpPgAdmin development team, and I have been wanting to add this\n> functionality to phpPgAdmin. I will start working with your query as soon\n> as possible, and I will use phpPgAdmin as a testbed for the functionality.\n> \n> I really appreciate having your query as a working basis, because it's\n> really hard trying to figure out the system tables!\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Michael Fork\n> > Sent: Sunday, December 03, 2000 12:23 PM\n> > To: [email protected]\n> > Subject: [HACKERS] SQL to retrieve FK's, Update/Delete action, etc.\n> >\n> >\n> > Given the name of a table, I need to find all foreign keys in that table\n> > and the table/column that they refer to, along with the action to be\n> > performed on update/delete. The following query works, but only when\n> > there is 1 foreign key in the table, when there is more than 2 it grows\n> > exponentially -- which means I am missing a join. However, given my\n> > limitied knowledge about the layouts of the postgres system tables, and\n> > the pg_trigger not being documented on the web site, I have been unable to\n> > get the correct query. 
Is this possible, and if so, what join(s) am I\n> > missing?\n> >\n> > SELECT pt.tgargs,\n> > pt.tgnargs,\n> > pt.tgdeferrable,\n> > pt.tginitdeferred,\n> > pg_proc.proname,\n> > pg_proc_1.proname\n> > FROM pg_class pc,\n> > pg_proc pg_proc,\n> > pg_proc pg_proc_1,\n> > pg_trigger pg_trigger,\n> > pg_trigger pg_trigger_1,\n> > pg_proc pp,\n> > pg_trigger pt\n> > WHERE pt.tgrelid = pc.oid\n> > AND pp.oid = pt.tgfoid\n> > AND pg_trigger.tgconstrrelid = pc.oid\n> > AND pg_proc.oid = pg_trigger.tgfoid\n> > AND pg_trigger_1.tgfoid = pg_proc_1.oid\n> > AND pg_trigger_1.tgconstrrelid = pc.oid\n> > AND ((pc.relname='tblmidterm')\n> > AND (pp.proname LIKE '%ins')\n> > AND (pg_proc.proname LIKE '%upd')\n> > AND (pg_proc_1.proname LIKE '%del'))\n> >\n> > Michael Fork - CCNA - MCP - A+\n> > Network Support - Toledo Internet Access - Toledo Ohio\n> >\n> \n\n\n", "msg_date": "Mon, 4 Dec 2000 23:28:32 -0500 (EST)", "msg_from": "Michael Fork <[email protected]>", "msg_from_op": true, "msg_subject": "RE: SQL to retrieve FK's, Update/Delete action, etc. (fwd)" } ]
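Until the primary-key counterpart is posted, here is a rough, untested sketch along the same catalog-walking lines; it pulls only the first key column (a composite key would need to walk the rest of pg_index.indkey):

    SELECT bc.relname AS table_name,
           a.attname  AS column_name,
           ic.relname AS index_name
      FROM pg_class bc, pg_class ic, pg_index i, pg_attribute a
     WHERE i.indrelid = bc.oid
       AND i.indexrelid = ic.oid
       AND i.indisprimary
       AND a.attrelid = bc.oid
       AND a.attnum = i.indkey[0]
       AND bc.relname = '<<TABLE>>';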
[ { "msg_contents": "\n> And using the following program for timing thread creation \n> and cleanup:\n> \n> #include <pthread.h>\n> \n> threadfn() { pthread_exit(0); }\n\nI think you would mainly need to test how the system behaves, if \nthe threads and processes actually do some work in parallel, like:\n\nthreadfn() {int i; for (i=0; i<10000000;) {i++}; pthread_exit(0); }\n\nIn a good thread implementation 10000 parallel processes tend to get way less \ncpu than 10000 parallel threads, making threads optimal for the very many clients case\n(like > 3000).\n\nAndreas\n", "msg_date": "Tue, 5 Dec 2000 10:07:37 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Using Threads?" }, { "msg_contents": "On Tue, Dec 05, 2000 at 10:07:37AM +0100, Zeugswetter Andreas SB wrote:\n> > And using the following program for timing thread creation \n> > and cleanup:\n> > \n> > #include <pthread.h>\n> > \n> > threadfn() { pthread_exit(0); }\n> \n> I think you would mainly need to test how the system behaves, if \n> the threads and processes actually do some work in parallel, like:\n> \n> threadfn() {int i; for (i=0; i<10000000;) {i++}; pthread_exit(0); }\n\nThe purpose of the benchmark was to time how long it took to create and\ndestroy a process or thread, nothing more. It was not creating\nprocesses in parallel for precisely that reason. The point in dispute\nwas that threads took much less time to create than processes.\n\n> In a good thread implementation 10000 parallel processes tend to get way less \n> cpu than 10000 parallel threads, making threads optimal for the very many clients case\n> (like > 3000).\n\nWhy do you believe this? In the \"classical\" thread implementation, each\nprocess would get the same amount of CPU, no matter how many threads was\nrunning in it. That would mean that many parallel processes would get\nmore CPU in total than many threads in one process.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/", "msg_date": "Tue, 5 Dec 2000 09:30:32 -0600", "msg_from": "Bruce Guenter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "I have been watching this thread vs non-threaded discussion and am completely with the\nprocess-only crew for a couple reasons, but lets look at a few things:\n\nThe process vs threads benchmark which showed 160us vs 120us, only did the process\ncreation, not the delayed hit of the \"copy on write\" pages in the new process. Just forking\nis not as simple as forking, once the forked process starts to work, memory that is not\nexplicitly shared is copied to the new process once it is modified. So this is a hit,\npossibly a big hit. Threads are far more efficient, it really is hard to debate.\n\nI can see a number of reasons why a multithreaded version of a database would be good.\nAsynchronous I/O perhaps, or even parallel joins, but with that being said, I think\nstability and work are by far the governing factors. Introducing multiple threads into a\nnon-multithreaded code base invariably breaks everything.\n\nSo, we want to weight the possible performance gains of multithreads vs all the work and\neffort to make them work reliably. The question is fundamentally, where are we spending our\ntime? 
If we are spending our time in context switches, then multithreading may be a way of\nreducing this, however, in all the applications I have built with postgres, it is always\n(like most databases) I/O bound or bound by computation.\n\nI think the benefits of rewriting code to be multithreaded are seldom worth the work and\nthe risks, unless there is a clear advantage to do so. I think most would agree that any\nincrease in performance gained by going multithreaded would be minimal, and the amount of\nwork to do so would be great.\n\n", "msg_date": "Tue, 05 Dec 2000 12:19:51 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Using Threads?" }, { "msg_contents": "[email protected] writes:\n> The process vs threads benchmark which showed 160us vs 120us, only did\n> the process creation, not the delayed hit of the \"copy on write\" pages\n> in the new process. Just forking is not as simple as forking, once the\n> forked process starts to work, memory that is not explicitly shared is\n> copied to the new process once it is modified. So this is a hit,\n> possibly a big hit.\n\nThere aren't going to be all that many data pages needing the COW\ntreatment, because the postmaster uses very little data space of its\nown. I think this would become an issue if we tried to have the\npostmaster pre-cache catalog information for backends, however (see\nmy post elsewhere in this thread).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Dec 2000 14:52:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads? " }, { "msg_contents": "On Tue, Dec 05, 2000 at 02:52:48PM -0500, Tom Lane wrote:\n> There aren't going to be all that many data pages needing the COW\n> treatment, because the postmaster uses very little data space of its\n> own. I think this would become an issue if we tried to have the\n> postmaster pre-cache catalog information for backends, however (see\n> my post elsewhere in this thread).\n\nWould that pre-cached data not be placed in a SHM segment? Such\nsegments don't do COW, so this would be a non-issue.\n-- \nBruce Guenter <[email protected]> http://em.ca/~bruceg/", "msg_date": "Tue, 5 Dec 2000 14:04:19 -0600", "msg_from": "Bruce Guenter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Threads?" } ]
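
The quoted benchmark shows only the thread half of the comparison. For reference, the process-creation half being argued about might look like the sketch below; the iteration count and output format are arbitrary choices, not taken from the original test.

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERATIONS 10000

int
main(void)
{
	struct timeval start, end;
	double		usec;
	int			i;

	gettimeofday(&start, NULL);
	for (i = 0; i < ITERATIONS; i++)
	{
		pid_t		pid = fork();

		if (pid == 0)
			_exit(0);			/* child exits immediately */
		if (pid < 0)
		{
			perror("fork");
			exit(1);
		}
		waitpid(pid, NULL, 0);	/* parent reaps the child */
	}
	gettimeofday(&end, NULL);
	usec = (end.tv_sec - start.tv_sec) * 1e6 + (end.tv_usec - start.tv_usec);
	printf("%.1f us per create/exit/reap cycle\n", usec / ITERATIONS);
	return 0;
}

As with the thread version, this measures creation and cleanup cost only; it says nothing about the copy-on-write charges discussed above, which accrue only after the child starts touching memory.
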
[ { "msg_contents": "\n> Right. This is very much the guarantee that RAID (non-zero) makes, \n> except \"other than disk hardware failure\" is replaced by \"other than\n> the failure of two drives\". RAID gives you that (very, very \n> substantial\n> boost which is why it is so popular for DB servers). It doesn't give\n> you power failure assurance for much the same reason that PG \n> (or Oracle,\n> etc) can.\n\nAs far as I know (and have tested in excess) Informix IDS does survive \nany power loss without leaving the db in a corrupted state.\nThe basic technology is, that it only relys on writes to one \"file\"\n(raw device in that case), the txlog, which is directly written.\nAll writes to the txlog are basically appends to that log. Meaning that all writes\nare sync writes to the currently active (== last) page. All other IO is not a problem,\nbecause a backup image \"physical log\" is kept for each page that needs to \nbe written. During fast recovery the content of the physical log is restored to the \noriginating pages (thus all pendig IO is undone) before rollforward is started.\n\nAndreas\n", "msg_date": "Tue, 5 Dec 2000 10:12:24 +0100 ", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: beta testing version " } ]
[ { "msg_contents": "Here's the query that, given the primary key table, lists all foreign\nkeys, their tables, the RI type, and defereability.\n\nMichael Fork - CCNA - MCP - A+\nNetwork Support - Toledo Internet Access - Toledo Ohio\n\nSELECT pg_trigger.tgargs,\npg_trigger.tgnargs,\npg_trigger.tgdeferrable,\npg_trigger.tginitdeferred,\npg_proc.proname,\npg_proc_1.proname\nFROM pg_class pg_class,\npg_class pg_class_1,\npg_class pg_class_2,\npg_proc pg_proc,\npg_proc pg_proc_1,\npg_trigger pg_trigger,\npg_trigger pg_trigger_1,\npg_trigger pg_trigger_2\nWHERE pg_trigger.tgconstrrelid = pg_class.oid\nAND pg_trigger.tgrelid = pg_class_1.oid\nAND pg_trigger_1.tgfoid = pg_proc_1.oid\nAND pg_trigger_1.tgconstrrelid = pg_class_1.oid\nAND pg_trigger_2.tgconstrrelid = pg_class_2.oid\nAND pg_trigger_2.tgfoid = pg_proc.oid\nAND pg_class_2.oid = pg_trigger.tgrelid\nAND ((pg_class.relname='<<PRIMARY KEY TABLE>>')\nAND (pg_proc.proname Like '%upd')\nAND (pg_proc_1.proname Like '%del')\nAND (pg_trigger_1.tgrelid=pg_trigger.tgconstrrelid)\nAND (pg_trigger_2.tgrelid = pg_trigger.tgconstrrelid))\n\nOn Tue, 5 Dec 2000, Christopher Kings-Lynne wrote:\n\n> Thanks mike - chances are it will be committed to phpPgAdmin by the end of\n> the week!\n> \n> BTW, you may wish to make sure that your email as cc'd to the hacker's list\n> as well.\n> \n> Regards,\n> \n> Chris\n> \n> --\n> Christopher Kings-Lynne\n> Family Health Network (ACN 089 639 243)\n> \n> > -----Original Message-----\n> > From: Michael Fork [mailto:[email protected]]\n> > Sent: Tuesday, December 05, 2000 12:25 PM\n> > To: Christopher Kings-Lynne\n> > Subject: RE: [HACKERS] SQL to retrieve FK's, Update/Delete action, etc.\n> >\n> >\n> > There ya go, I figured it out :) Given the name a table, this query will\n> > return all foreign keys in that table, the table the primary key is in,\n> > the name of the primary key, if the are deferrable, if the are initially\n> > deffered, and the action to be performed (RESTRICT, SET NULL, etc.). To\n> > get the foreign keys and primary keys and tables, you must parse the\n> > null-terminated pg.tgargs.\n> >\n> > When I get the equivalent query working for primary keys I will send it\n> > your way -- or if you beat me to it, send it my way (I am working on some\n> > missing functionality from the ODBC driver)\n> >\n> > Michael Fork - CCNA - MCP - A+\n> > Network Support - Toledo Internet Access - Toledo Ohio\n> >\n> > SELECT pt.tgargs,\n> > pt.tgnargs,\n> > pt.tgdeferrable,\n> > pt.tginitdeferred,\n> > pg_proc.proname,\n> > pg_proc_1.proname\n> > FROM pg_class pc,\n> > pg_proc pg_proc,\n> > pg_proc pg_proc_1,\n> > pg_trigger pg_trigger,\n> > pg_trigger pg_trigger_1,\n> > pg_proc pp,\n> > pg_trigger pt\n> > WHERE pt.tgrelid = pc.oid\n> > AND pp.oid = pt.tgfoid\n> > AND pg_trigger.tgconstrrelid = pc.oid\n> > AND pg_proc.oid = pg_trigger.tgfoid\n> > AND pg_trigger_1.tgfoid = pg_proc_1.oid\n> > AND pg_trigger_1.tgconstrrelid = pc.oid\n> > AND ((pc.relname='<<FOREIGN TABLE>>')\n> > AND (pp.proname LIKE '%%ins')\n> > AND (pg_proc.proname LIKE '%%upd')\n> > AND (pg_proc_1.proname LIKE '%%del')\n> > AND (pg_trigger.tgrelid=pt.tgconstrrelid)\n> > AND (pg_trigger_1.tgrelid = pt.tgconstrrelid))\n> >\n> >\n> > On Tue, 5 Dec 2000, Christopher Kings-Lynne wrote:\n> >\n> > > Hi Michael,\n> > >\n> > > I am on the phpPgAdmin development team, and I have been\n> > wanting to add this\n> > > functionality to phpPgAdmin. 
I will start working with your\n> > query as soon\n> > > as possible, and I will use phpPgAdmin as a testbed for the\n> > functionality.\n> > >\n> > > I really appreciate having your query as a working basis, because it's\n> > > really hard trying to figure out the system tables!\n> > >\n> > > Chris\n> > >\n> > > > -----Original Message-----\n> > > > From: [email protected]\n> > > > [mailto:[email protected]]On Behalf Of Michael Fork\n> > > > Sent: Sunday, December 03, 2000 12:23 PM\n> > > > To: [email protected]\n> > > > Subject: [HACKERS] SQL to retrieve FK's, Update/Delete action, etc.\n> > > >\n> > > >\n> > > > Given the name of a table, I need to find all foreign keys in\n> > that table\n> > > > and the table/column that they refer to, along with the action to be\n> > > > performed on update/delete. The following query works, but only when\n> > > > there is 1 foreign key in the table, when there is more than\n> > 2 it grows\n> > > > exponentially -- which means I am missing a join. However, given my\n> > > > limitied knowledge about the layouts of the postgres system\n> > tables, and\n> > > > the pg_trigger not being documented on the web site, I have\n> > been unable to\n> > > > get the correct query. Is this possible, and if so, what join(s) am I\n> > > > missing?\n> > > >\n> > > > SELECT pt.tgargs,\n> > > > pt.tgnargs,\n> > > > pt.tgdeferrable,\n> > > > pt.tginitdeferred,\n> > > > pg_proc.proname,\n> > > > pg_proc_1.proname\n> > > > FROM pg_class pc,\n> > > > pg_proc pg_proc,\n> > > > pg_proc pg_proc_1,\n> > > > pg_trigger pg_trigger,\n> > > > pg_trigger pg_trigger_1,\n> > > > pg_proc pp,\n> > > > pg_trigger pt\n> > > > WHERE pt.tgrelid = pc.oid\n> > > > AND pp.oid = pt.tgfoid\n> > > > AND pg_trigger.tgconstrrelid = pc.oid\n> > > > AND pg_proc.oid = pg_trigger.tgfoid\n> > > > AND pg_trigger_1.tgfoid = pg_proc_1.oid\n> > > > AND pg_trigger_1.tgconstrrelid = pc.oid\n> > > > AND ((pc.relname='tblmidterm')\n> > > > AND (pp.proname LIKE '%ins')\n> > > > AND (pg_proc.proname LIKE '%upd')\n> > > > AND (pg_proc_1.proname LIKE '%del'))\n> > > >\n> > > > Michael Fork - CCNA - MCP - A+\n> > > > Network Support - Toledo Internet Access - Toledo Ohio\n> > > >\n> > >\n> >\n> \n\n\n", "msg_date": "Tue, 5 Dec 2000 09:35:38 -0500 (EST)", "msg_from": "Michael Fork <[email protected]>", "msg_from_op": true, "msg_subject": "RE: SQL to retrieve FK's, Update/Delete action, etc. (fwd)" } ]
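
Michael's point that pg_trigger.tgargs must be parsed applies on the client side: the trigger arguments arrive as tgnargs null-terminated strings packed back to back. A hedged C sketch, assuming the escaped form has already been decoded into a plain buffer; the helper name is invented, and the RI argument order mentioned in the comment should be verified against the backend sources.

#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper: walk tgnargs consecutive null-terminated
 * fields packed into buf.  For RI triggers the fields are believed to
 * be the constraint name, the FK and PK table names, the match type,
 * and then pairs of column names -- check the backend before relying
 * on that order.
 */
static void
print_tgargs(const char *buf, int tgnargs)
{
	int			i;

	for (i = 0; i < tgnargs; i++)
	{
		printf("arg %d: %s\n", i, buf);
		buf += strlen(buf) + 1; /* step past the terminating '\0' */
	}
}
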
[ { "msg_contents": "I'm debugging some code here where I get problems related to\nspinlocks, anyhow, while running through the files I noticed\nthat the UNLOCK code seems sort of broken.\n\nWhat I mean is that on machines that have loosely ordered\nmemory models you can have problems because of data that's\nsupposed to be protected by the lock not getting flushed\nout to main memory until possibly after the unlock happens.\n\nI'm pretty sure you guys need memory barrier ops.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 5 Dec 2000 06:48:13 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Spinlocks may be broken." }, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n> I'm pretty sure you guys need memory barrier ops.\n\nOn a machine that requires such a thing, the assembly code for UNLOCK\nshould include it. Want to provide a patch?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Dec 2000 10:24:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Spinlocks may be broken. " }, { "msg_contents": "* Tom Lane <[email protected]> [001205 07:24] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > I'm pretty sure you guys need memory barrier ops.\n> \n> On a machine that requires such a thing, the assembly code for UNLOCK\n> should include it. Want to provide a patch?\n\nMy assembler is extremely rusty, you can probably find such code\nin the NetBSD or Linux kernel for all the archs you want to do.\nI wouldn't feel confident providing a patch, all I have is x86\nhardware.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 5 Dec 2000 07:26:32 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Spinlocks may be broken." } ]
[ { "msg_contents": "On FreeBSD 4.1.1 and above there's a sysctl tunable called\nkern.ipc.shm_use_phys, when set to 1 it's supposed to\nmake the kernel's handling of shared memory much more\neffecient at the expense or making the shm segment unpageable.\n\nI tried to use this option with 7.0.3 and FreeBSD 4.2 but\nfor some reason spinlocks keep getting mucked up (there's\na log at the tail end of this message).\n\nAnyone using Postgresql on FreeBSD probably wants this to work,\notherwise using extremely large chunks of shm and many backends\nactive can exhaust kernel memory.\n\nI was wondering if any of the more experienced developers could\ntake a look at what's happenening here.\n\nHere's the log, the number in parens is the address of the lock,\non tas() the value printed to the right is the value in _ret,\nfor the others, it's the value before the lock count is set.\n\nS_INIT_LOCK: (0x30048008) -> 0\nS_UNLOCK: (0x30048008) -> 0\nS_INIT_LOCK: (0x3004800c) -> 0\nS_UNLOCK: (0x3004800c) -> 0\nS_INIT_LOCK: (0x30048010) -> 0\nS_UNLOCK: (0x30048010) -> 0\nS_INIT_LOCK: (0x30048011) -> 0\nS_UNLOCK: (0x30048011) -> 0\nS_INIT_LOCK: (0x30048012) -> 0\nS_UNLOCK: (0x30048012) -> 0\nS_INIT_LOCK: (0x30048018) -> 0\nS_UNLOCK: (0x30048018) -> 0\nS_INIT_LOCK: (0x3004801c) -> 0\nS_UNLOCK: (0x3004801c) -> 0\nS_INIT_LOCK: (0x3004801d) -> 1\nS_UNLOCK: (0x3004801d) -> 1\nS_INIT_LOCK: (0x3004801e) -> 0\nS_UNLOCK: (0x3004801e) -> 0\nS_INIT_LOCK: (0x30048024) -> 127\nS_UNLOCK: (0x30048024) -> 127\nS_INIT_LOCK: (0x30048028) -> 255\nS_UNLOCK: (0x30048028) -> 255\nS_INIT_LOCK: (0x30048029) -> 0\nS_UNLOCK: (0x30048029) -> 0\nS_INIT_LOCK: (0x3004802a) -> 0\nS_UNLOCK: (0x3004802a) -> 0\nS_INIT_LOCK: (0x30048030) -> 1\nS_UNLOCK: (0x30048030) -> 1\nS_INIT_LOCK: (0x30048034) -> 0\nS_UNLOCK: (0x30048034) -> 0\nS_INIT_LOCK: (0x30048035) -> 0\nS_UNLOCK: (0x30048035) -> 0\nS_INIT_LOCK: (0x30048036) -> 0\nS_UNLOCK: (0x30048036) -> 0\nS_INIT_LOCK: (0x3004803c) -> 50\nS_UNLOCK: (0x3004803c) -> 50\nS_INIT_LOCK: (0x30048040) -> 10\nS_UNLOCK: (0x30048040) -> 10\nS_INIT_LOCK: (0x30048041) -> 0\nS_UNLOCK: (0x30048041) -> 0\nS_INIT_LOCK: (0x30048042) -> 0\nS_UNLOCK: (0x30048042) -> 0\nS_INIT_LOCK: (0x30048048) -> 1\nS_UNLOCK: (0x30048048) -> 1\nS_INIT_LOCK: (0x3004804c) -> 80\nS_UNLOCK: (0x3004804c) -> 80\nS_INIT_LOCK: (0x3004804d) -> 1\nS_UNLOCK: (0x3004804d) -> 1\nS_INIT_LOCK: (0x3004804e) -> 0\nS_UNLOCK: (0x3004804e) -> 0\nS_INIT_LOCK: (0x30048054) -> 0\nS_UNLOCK: (0x30048054) -> 0\nS_INIT_LOCK: (0x30048058) -> 1\nS_UNLOCK: (0x30048058) -> 1\nS_INIT_LOCK: (0x30048059) -> 1\nS_UNLOCK: (0x30048059) -> 1\nS_INIT_LOCK: (0x3004805a) -> 0\nS_UNLOCK: (0x3004805a) -> 0\nS_INIT_LOCK: (0x30048060) -> 0\nS_UNLOCK: (0x30048060) -> 0\nS_INIT_LOCK: (0x30048064) -> 0\nS_UNLOCK: (0x30048064) -> 0\nS_INIT_LOCK: (0x30048065) -> 0\nS_UNLOCK: (0x30048065) -> 0\nS_INIT_LOCK: (0x30048066) -> 0\nS_UNLOCK: (0x30048066) -> 0\nS_INIT_LOCK: (0x3004806c) -> 0\nS_UNLOCK: (0x3004806c) -> 0\nS_INIT_LOCK: (0x30048070) -> 0\nS_UNLOCK: (0x30048070) -> 0\nS_INIT_LOCK: (0x30048071) -> 0\nS_UNLOCK: (0x30048071) -> 0\nS_INIT_LOCK: (0x30048072) -> 0\nS_UNLOCK: (0x30048072) -> 0\nS_INIT_LOCK: (0x30048078) -> 0\nS_UNLOCK: (0x30048078) -> 0\nS_INIT_LOCK: (0x3004807c) -> 0\nS_UNLOCK: (0x3004807c) -> 0\nS_INIT_LOCK: (0x3004807d) -> 0\nS_UNLOCK: (0x3004807d) -> 0\nS_INIT_LOCK: (0x3004807e) -> 0\nS_UNLOCK: (0x3004807e) -> 0\ntas (0x30048054) -> 0\ntas (0x30048059) -> 0\ntas (0x30048058) -> 0\nS_UNLOCK: (0x30048054) -> 1\ntas (0x30048048) -> 0\ntas (0x3004804d) -> 0\ntas 
(0x3004804c) -> 0\nS_UNLOCK: (0x30048048) -> 1\ntas (0x30048048) -> 0\nS_UNLOCK: (0x3004804c) -> 1\nS_UNLOCK: (0x3004804d) -> 1\nS_UNLOCK: (0x30048048) -> 1\ntas (0x30048048) -> 0\ntas (0x3004804d) -> 0\ntas (0x3004804c) -> 0\nS_UNLOCK: (0x30048048) -> 1\ntas (0x30048048) -> 0\nS_UNLOCK: (0x3004804c) -> 1\nS_UNLOCK: (0x3004804d) -> 1\nS_UNLOCK: (0x30048048) -> 1\ntas (0x30048048) -> 0\ntas (0x3004804d) -> 4\ntas (0x3004804d) -> 1\ntas (0x3004804d) -> 1\ntas (0x3004804d) -> 1\ntas (0x3004804d) -> 1\ntas (0x3004804d) -> 1\ntas (0x3004804d) -> 1\ntas (0x3004804d) -> 1\n\nrepeats (it's stuck)\n\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 5 Dec 2000 07:14:58 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Need help with phys backed shm segments (Postgresql+FreeBSD)." }, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n> Here's the log, the number in parens is the address of the lock,\n> on tas() the value printed to the right is the value in _ret,\n> for the others, it's the value before the lock count is set.\n\nThis looks to be the trace of a SpinAcquire()\n(see src/backend/storage/ipc/spin.c):\n\n> tas (0x30048048) -> 0\n> tas (0x3004804d) -> 0\n> tas (0x3004804c) -> 0\n> S_UNLOCK: (0x30048048) -> 1\n\nfollowed by SpinRelease():\n\n> tas (0x30048048) -> 0\n> S_UNLOCK: (0x3004804c) -> 1\n> S_UNLOCK: (0x3004804d) -> 1\n> S_UNLOCK: (0x30048048) -> 1\n\nfollowed by a failed attempt to reacquire the same SLock:\n\n> tas (0x30048048) -> 0\n> tas (0x3004804d) -> 4\n> tas (0x3004804d) -> 1\n> tas (0x3004804d) -> 1\n> tas (0x3004804d) -> 1\n> tas (0x3004804d) -> 1\n\nAnd that looks completely broken :-( ... something's clobbered the\nexlock field of the SLock struct, apparently. Are you sure this\nkernel feature you're trying to use actually works?\n\nBTW, if you're wondering why an SLock needs to contain *three*\nhardware spinlocks, the answer is that it doesn't. This code has\nbeen greatly simplified in current sources...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Dec 2000 10:43:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with phys backed shm segments (Postgresql+FreeBSD). " }, { "msg_contents": "* Tom Lane <[email protected]> [001205 07:43] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > Here's the log, the number in parens is the address of the lock,\n> > on tas() the value printed to the right is the value in _ret,\n> > for the others, it's the value before the lock count is set.\n> \n> This looks to be the trace of a SpinAcquire()\n> (see src/backend/storage/ipc/spin.c):\n\nYes, those are my debug printfs :).\n\n> > tas (0x30048048) -> 0\n> > tas (0x3004804d) -> 0\n> > tas (0x3004804c) -> 0\n> > S_UNLOCK: (0x30048048) -> 1\n> \n> followed by SpinRelease():\n> \n> > tas (0x30048048) -> 0\n> > S_UNLOCK: (0x3004804c) -> 1\n> > S_UNLOCK: (0x3004804d) -> 1\n> > S_UNLOCK: (0x30048048) -> 1\n> \n> followed by a failed attempt to reacquire the same SLock:\n> \n> > tas (0x30048048) -> 0\n> > tas (0x3004804d) -> 4\n> > tas (0x3004804d) -> 1\n> > tas (0x3004804d) -> 1\n> > tas (0x3004804d) -> 1\n> > tas (0x3004804d) -> 1\n> \n> And that looks completely broken :-( ... something's clobbered the\n> exlock field of the SLock struct, apparently. Are you sure this\n> kernel feature you're trying to use actually works?\n\nNo I'm not sure actually. 
:) I'll look into it further, but I\nwas wondering if there was something I could do to debug the\nlocks better. I think I'll add some S_MAGIC or something in\nthe struct to see if the whole thing is getting clobbered or\nwhat... If you have any suggestions let me know.\n\n> BTW, if you're wondering why an SLock needs to contain *three*\n> hardware spinlocks, the answer is that it doesn't. This code has\n> been greatly simplified in current sources...\n\nIt did look a bit strange...\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 5 Dec 2000 07:47:13 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help with phys backed shm segments (Postgresql+FreeBSD)." }, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n> No I'm not sure actually. :) I'll look into it further, but I\n> was wondering if there was something I could do to debug the\n> locks better. I think I'll add some S_MAGIC or something in\n> the struct to see if the whole thing is getting clobbered or\n> what... If you have any suggestions let me know.\n\nSeems like a plan. In current sources I have moved the SLock struct\ndeclaration out of header files and into spin.c; it doesn't really\nneed to be known anywhere else. You could probably do the same in\n7.0.*, which would greatly simplify changing the struct around to\nsee what's happening.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Dec 2000 11:01:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with phys backed shm segments (Postgresql+FreeBSD). " }, { "msg_contents": "BTW, I just remembered that in 7.0.*, the SLocks that are managed by\nSpinAcquire() all live in their own little shm segment. On a machine\nwhere slock_t is char, it'd likely only amount to 128 bytes or so.\nMaybe you are seeing some bug in FreeBSD's handling of tiny shm\nsegments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Dec 2000 11:12:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with phys backed shm segments (Postgresql+FreeBSD). " }, { "msg_contents": "* Tom Lane <[email protected]> [001205 08:37] wrote:\n> BTW, I just remembered that in 7.0.*, the SLocks that are managed by\n> SpinAcquire() all live in their own little shm segment. On a machine\n> where slock_t is char, it'd likely only amount to 128 bytes or so.\n> Maybe you are seeing some bug in FreeBSD's handling of tiny shm\n> segments?\n\nGood call, i think I found it! :)\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Tue, 5 Dec 2000 12:04:09 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help with phys backed shm segments (Postgresql+FreeBSD)." }, { "msg_contents": "* Alfred Perlstein <[email protected]> [001205 12:30] wrote:\n> * Tom Lane <[email protected]> [001205 08:37] wrote:\n> > BTW, I just remembered that in 7.0.*, the SLocks that are managed by\n> > SpinAcquire() all live in their own little shm segment. On a machine\n> > where slock_t is char, it'd likely only amount to 128 bytes or so.\n> > Maybe you are seeing some bug in FreeBSD's handling of tiny shm\n> > segments?\n> \n> Good call, i think I found it! 
:)\n\nHere's the patch I'm using on FreeBSD, it seems to work, if any\nother FreeBSD'ers want to try it out, just apply the patch:\ncd /usr/src/sys/vm ; patch < patchfile\n\nand recompile and boot with a new kernel, then do this:\n\nsysctl -w kern.ipc.shm_use_phys=1\n\nor add:\nkern.ipc.shm_use_phys=1 \nto /etc/sysctl.conf\n\nLet me know if it works.\n\nthanks,\n-Alfred\n\nIndex: phys_pager.c\n===================================================================\nRCS file: /home/ncvs/src/sys/vm/phys_pager.c,v\nretrieving revision 1.3.2.1\ndiff -u -u -r1.3.2.1 phys_pager.c\n--- phys_pager.c\t2000/08/04 22:31:11\t1.3.2.1\n+++ phys_pager.c\t2000/12/05 20:13:25\n@@ -83,7 +83,7 @@\n \t\t * Allocate object and associate it with the pager.\n \t\t */\n \t\tobject = vm_object_allocate(OBJT_PHYS,\n-\t\t\tOFF_TO_IDX(foff + size));\n+\t\t\tOFF_TO_IDX(foff + PAGE_MASK + size));\n \t\tobject->handle = handle;\n \t\tTAILQ_INSERT_TAIL(&phys_pager_object_list, object,\n \t\t pager_object_list);\n", "msg_date": "Tue, 5 Dec 2000 13:04:45 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help with phys backed shm segments (Postgresql+FreeBSD)." }, { "msg_contents": "Alfred,\n\ndo you have any numbers with and without your patch ?\nI mean performance. You may use pg_check utility.\n\n\tOleg\nOn Tue, 5 Dec 2000, Alfred Perlstein wrote:\n\n> Date: Tue, 5 Dec 2000 13:04:45 -0800\n> From: Alfred Perlstein <[email protected]>\n> To: Tom Lane <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Need help with phys backed shm segments (Postgresql+FreeBSD).\n> \n> * Alfred Perlstein <[email protected]> [001205 12:30] wrote:\n> > * Tom Lane <[email protected]> [001205 08:37] wrote:\n> > > BTW, I just remembered that in 7.0.*, the SLocks that are managed by\n> > > SpinAcquire() all live in their own little shm segment. On a machine\n> > > where slock_t is char, it'd likely only amount to 128 bytes or so.\n> > > Maybe you are seeing some bug in FreeBSD's handling of tiny shm\n> > > segments?\n> > \n> > Good call, i think I found it! 
:)\n> \n> Here's the patch I'm using on FreeBSD, it seems to work, if any\n> other FreeBSD'ers want to try it out, just apply the patch:\n> cd /usr/src/sys/vm ; patch < patchfile\n> \n> and recompile and boot with a new kernel, then do this:\n> \n> sysctl -w kern.ipc.shm_use_phys=1\n> \n> or add:\n> kern.ipc.shm_use_phys=1 \n> to /etc/sysctl.conf\n> \n> Let me know if it works.\n> \n> thanks,\n> -Alfred\n> \n> Index: phys_pager.c\n> ===================================================================\n> RCS file: /home/ncvs/src/sys/vm/phys_pager.c,v\n> retrieving revision 1.3.2.1\n> diff -u -u -r1.3.2.1 phys_pager.c\n> --- phys_pager.c\t2000/08/04 22:31:11\t1.3.2.1\n> +++ phys_pager.c\t2000/12/05 20:13:25\n> @@ -83,7 +83,7 @@\n> \t\t * Allocate object and associate it with the pager.\n> \t\t */\n> \t\tobject = vm_object_allocate(OBJT_PHYS,\n> -\t\t\tOFF_TO_IDX(foff + size));\n> +\t\t\tOFF_TO_IDX(foff + PAGE_MASK + size));\n> \t\tobject->handle = handle;\n> \t\tTAILQ_INSERT_TAIL(&phys_pager_object_list, object,\n> \t\t pager_object_list);\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 6 Dec 2000 00:20:52 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with phys backed shm segments (Postgresql+FreeBSD)." }, { "msg_contents": "Just as interesting\n\nOn Tue, 5 Dec 2000, Alfred Perlstein wrote:\n\n> * Alfred Perlstein <[email protected]> [001205 12:30] wrote:\n> > * Tom Lane <[email protected]> [001205 08:37] wrote:\n> > > BTW, I just remembered that in 7.0.*, the SLocks that are managed by\n> > > SpinAcquire() all live in their own little shm segment. On a machine\n> > > where slock_t is char, it'd likely only amount to 128 bytes or so.\n> > > Maybe you are seeing some bug in FreeBSD's handling of tiny shm\n> > > segments?\n> >\n> > Good call, i think I found it! 
:)\n>\n> Here's the patch I'm using on FreeBSD, it seems to work, if any\n> other FreeBSD'ers want to try it out, just apply the patch:\n> cd /usr/src/sys/vm ; patch < patchfile\n>\n> and recompile and boot with a new kernel, then do this:\n>\n> sysctl -w kern.ipc.shm_use_phys=1\n>\n> or add:\n> kern.ipc.shm_use_phys=1\n> to /etc/sysctl.conf\n>\n> Let me know if it works.\n>\n> thanks,\n> -Alfred\n>\n> Index: phys_pager.c\n> ===================================================================\n> RCS file: /home/ncvs/src/sys/vm/phys_pager.c,v\n> retrieving revision 1.3.2.1\n> diff -u -u -r1.3.2.1 phys_pager.c\n> --- phys_pager.c\t2000/08/04 22:31:11\t1.3.2.1\n> +++ phys_pager.c\t2000/12/05 20:13:25\n> @@ -83,7 +83,7 @@\n> \t\t * Allocate object and associate it with the pager.\n> \t\t */\n> \t\tobject = vm_object_allocate(OBJT_PHYS,\n> -\t\t\tOFF_TO_IDX(foff + size));\n> +\t\t\tOFF_TO_IDX(foff + PAGE_MASK + size));\n> \t\tobject->handle = handle;\n> \t\tTAILQ_INSERT_TAIL(&phys_pager_object_list, object,\n> \t\t pager_object_list);\n>\n>\n\nRandy Jonasz\nSoftware Engineer\nClick2net Inc.\nWeb: http://www.click2net.com\nPhone: (905) 271-3550\n\n\"You cannot possibly pay a philosopher what he's worth,\nbut try your best\" -- Aristotle\n\n", "msg_date": "Tue, 5 Dec 2000 16:36:25 -0500 (EST)", "msg_from": "Randy Jonasz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with phys backed shm segments (Postgresql+FreeBSD)." }, { "msg_contents": "* Oleg Bartunov <[email protected]> [001205 13:33] wrote:\n> Alfred,\n> \n> do you have any numbers with and without your patch ?\n> I mean performance. You may use pg_check utility.\n\nEr, I just made the patch a couple of hours ago, and I'm also\ndealing with some other FreeBSD issues right now. I will report\non it as soon as I can.\n\nTheoretically you'll only see performance gains when doing fork();\nthe real intent here is to allow for giant segments. Without\nkern.ipc.shm_use_phys=1, running, let's say, 768meg (out of 1gig)\nshared memory segments will probably cause performance problems\nbecause of the amount of swap structures needed per-process to\nmanage swappable segments.\n\nI'm going to be enabling this on one of our boxes and see if it\nmakes a noticeable difference. I'll let you guys know.\n\n> > Date: Tue, 5 Dec 2000 13:04:45 -0800\n> > From: Alfred Perlstein <[email protected]>\n> > To: Tom Lane <[email protected]>\n> > Cc: [email protected]\n> > Subject: Re: [HACKERS] Need help with phys backed shm segments (Postgresql+FreeBSD).\n> > \n> > Here's the patch I'm using on FreeBSD, it seems to work, if any\n> > other FreeBSD'ers want to try it out, just apply the patch:\n> > cd /usr/src/sys/vm ; patch < patchfile\n> > \n> > and recompile and boot with a new kernel, then do this:\n> > \n> > sysctl -w kern.ipc.shm_use_phys=1\n> > \n> > or add:\n> > kern.ipc.shm_use_phys=1 \n> > to /etc/sysctl.conf\n> > \n> > Let me know if it works.\n> > \n> > thanks,\n> > -Alfred\n", "msg_date": "Tue, 5 Dec 2000 14:52:33 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help with phys backed shm segments (Postgresql+FreeBSD)." } ]
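
The kernel bug came down to an object size that was not rounded up to a page boundary. Application code cannot fix the kernel, but it can avoid size-boundary surprises by rounding explicitly before allocating; a sketch using the standard SysV calls (the key, permission bits, and helper name are arbitrary):

#include <stddef.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>

/*
 * Create and attach a shared memory segment, rounding the requested
 * size up to a whole number of pages first.  Returns NULL on failure.
 */
static void *
attach_shm_rounded(key_t key, size_t size)
{
	long		pagesize = sysconf(_SC_PAGESIZE);
	size_t		rounded = (size + pagesize - 1) & ~((size_t) pagesize - 1);
	int			shmid = shmget(key, rounded, IPC_CREAT | 0600);
	void	   *addr;

	if (shmid < 0)
		return NULL;
	addr = shmat(shmid, NULL, 0);
	return (addr == (void *) -1) ? NULL : addr;
}
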
[ { "msg_contents": "> As far as I know (and have tested in excess) Informix IDS \n> does survive any power loss without leaving the db in a\n> corrupted state. The basic technology is, that it only relys\n> on writes to one \"file\" (raw device in that case), the txlog,\n> which is directly written. All writes to the txlog are basically\n> appends to that log. Meaning that all writes are sync writes to\n> the currently active (== last) page. All other IO is not a problem,\n> because a backup image \"physical log\" is kept for each page \n> that needs to be written. During fast recovery the content of the\n> physical log is restored to the originating pages (thus all pendig\n> IO is undone) before rollforward is started.\n\nSounds great! We can follow this way: when first after last checkpoint\nupdate to a page being logged, XLOG code can log not AM specific update\nrecord but entire page (creating backup \"physical log\"). During after\ncrash recovery such pages will be redone first, ensuring page consistency\nfor further redo ops. This means bigger log, of course.\n\nInitdb will not be required for these code changes, so it can be\nimplemented in any 7.1.X, X >=1.\n\nThanks, Andreas!\n\nVadim\n", "msg_date": "Tue, 5 Dec 2000 10:43:03 -0800 ", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: beta testing version " }, { "msg_contents": "On Tue, Dec 05, 2000 at 10:43:03AM -0800, Mikheev, Vadim wrote:\n> > As far as I know (and have tested in excess) Informix IDS \n> > does survive any power loss without leaving the db in a\n> > corrupted state. The basic technology is, that it only relys\n> > on writes to one \"file\" (raw device in that case), the txlog,\n> > which is directly written. All writes to the txlog are basically\n> > appends to that log. Meaning that all writes are sync writes to\n> > the currently active (== last) page. All other IO is not a problem,\n> > because a backup image \"physical log\" is kept for each page \n> > that needs to be written. During fast recovery the content of the\n> > physical log is restored to the originating pages (thus all pendig\n> > IO is undone) before rollforward is started.\n> \n> Sounds great! We can follow this way: when first after last checkpoint\n> update to a page being logged, XLOG code can log not AM specific update\n> record but entire page (creating backup \"physical log\"). During after\n> crash recovery such pages will be redone first, ensuring page consistency\n> for further redo ops. This means bigger log, of course.\n \nBe sure to include a CRC of each part of the block that you hope\nto replay individually.\n\nNathan Myers\[email protected]\n", "msg_date": "Tue, 5 Dec 2000 13:22:26 -0800", "msg_from": "[email protected] (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: beta testing version" } ]