[ { "msg_contents": "\nTrying to work through the hang problem that I've been experiencing with\nv7.0, and keep adding new tools. Tom suggested lsof, and running it\nagainst the postmaster shows the above 'error' .. once it happens, it\ndoesn't appear to go away. Last hang, had two of those errors ... don't\nknow if they mean anything or are related, but figured I'd ask:\n\npgsql% cat cores/lsof.4969 \nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\npostgres 4969 pgsql cwd VDIR 13,131084 1024 2 /pgsql\npostgres 4969 pgsql rtd VDIR 13,131072 512 2 /\npostgres 4969 pgsql txt VREG 13,131084 4669590 103169 /pgsql/bin/postgres\npostgres 4969 pgsql txt VREG 13,131076 76648 212707 /usr/libexec/ld-elf.so.1\npostgres 4969 pgsql txt VREG 13,131076 11128 56672 /usr/lib/libdescrypt.so.2\npostgres 4969 pgsql txt VREG 13,131076 118044 56673 /usr/lib/libm.so.2\npostgres 4969 pgsql txt VREG 13,131076 148316 56532 /usr/lib/libreadline.so.4\npostgres 4969 pgsql txt VREG 13,131076 251416 56674 /usr/lib/libncurses.so.5\npostgres 4969 pgsql txt VREG 13,131076 550996 56746 /usr/lib/libc.so.4\npostgres 4969 pgsql 0r VCHR 2,2 0t0 7967 /dev/null\npostgres 4969 pgsql 1w VREG 13,131084 857139 761872 /pgsql/logs/postmaster.5432.4968\npostgres 4969 pgsql 2w VREG 13,131084 857139 761872 /pgsql/logs/postmaster.5432.4968\npostgres 4969 pgsql 3u IPv4 0xd462a720 0t0 TCP *:5432 (LISTEN)\npostgres 4969 pgsql 4u unix 0xd44a7780 0t0 ->(none) \npostgres 4969 pgsql 5u IPv4 0xd4631500 0t0 TCP pgsql.tht.net:5432->smaug.vex.net:61189 (ESTABLISHED)\npostgres 4969 pgsql 6u IPv4 0t0 TCP can't read inpcb at 0x00000000 \npostgres 4969 pgsql 7u IPv4 0t0 TCP can't read inpcb at 0x00000000 \npostgres 4969 pgsql 8u IPv4 0xd46300c0 0t0 TCP pgsql.tht.net:1046->smaug.vex.net:auth (ESTABLISHED)\n\n\n", "msg_date": "Wed, 10 May 2000 10:04:36 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "lsof: can't read inpcb at 0x00000000" } ]
[ { "msg_contents": "\nhas anyone played with/tested this in v7.0? I'm investigating the hanging\nproblem, and it just happened ... when I do an lsof on the process, it\nshows these two:\n\npostgres 4969 pgsql 5u IPv4 0xd4631500 0t0 TCP pgsql.tht.net:5432->smaug.vex.net:61189 (ESTABLISHED)\npostgres 4969 pgsql 8u IPv4 0xd46300c0 0t0 TCP pgsql.tht.net:1046->smaug.vex.net:auth (ESTABLISHED)\n\nit doesn't appear to lock it up every time though ... this time it\n*eventually* came back again, but, afterwards, if you do another lsof,\nthere is one more line with that \"can't read inpcb...\" error on it ...\n\ni pg_hba.conf, that host has:\n\nhost trends_acctng 216.126.72.30 255.255.255.255 ident sameuser\n\nAnd its the only time we have ident being used ... \n\nright now, its the only theory I ahve to work with ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 10 May 2000 10:41:31 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "pg_hba.conf && ident ..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> i pg_hba.conf, that host has:\n> host trends_acctng 216.126.72.30 255.255.255.255 ident sameuser\n> And its the only time we have ident being used ... \n> right now, its the only theory I ahve to work with ... \n\nBingo. All your cores show the thing waiting inside the ident code:\n\n(gdb) bt\n#0 0x18263890 in recvfrom () from /usr/lib/libc.so.4\n#1 0x1825062b in recv () from /usr/lib/libc.so.4\n#2 0x80ad4d0 in ident (remote_ip_addr={s_addr = 508067544}, local_ip_addr={\n s_addr = 56131288}, remote_port=27631, local_port=14357, \n ident_failed=0xbfbfeeef \"�\\004\\023 \\b,\\207\\024\\b\\212\\217(\\030\\223�\\203￿\\204￿|�\\n\\b�\\214+\\0304￿P\", \n ident_username=0xbfbfeef0 \"\\004\\023 \\b,\\207\\024\\b\\212\\217(\\030\\223�\\203￿\\204￿|�\\n\\b�\\214+\\0304￿P\") at hba.c:635\n#3 0x80ad912 in authident (raddr=0x82011ac, laddr=0x8201140, \n postgres_username=0x8201261 \"db\", auth_arg=0x8201304 \"sameuser\")\n at hba.c:869\n#4 0x80ac5b9 in be_recvauth (port=0x8201000) at auth.c:523\n#5 0x80e0c4a in readStartupPacket (arg=0x8201000, len=292, pkt=0x820101c)\n at postmaster.c:1214\n#6 0x80aeb67 in PacketReceiveFragment (port=0x8201000) at pqpacket.c:102\n#7 0x80e08ad in ServerLoop () at postmaster.c:982\n#8 0x80e039a in PostmasterMain (argc=13, argv=0xbfbffbc4) at postmaster.c:723\n#9 0x80aee43 in main (argc=13, argv=0xbfbffbc4) at main.c:93\n#10 0x8063393 in _start ()\n\nLooking at the code, there doesn't seem to be any defense against a\nbroken ident server --- there is no timeout or anything being used here!\nUgh. Has it always been like this?\n\nAnyway, I think the immediate fix for you is to stop using ident auth\nfor that host, at least till we can improve this code...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 May 2000 10:27:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf && ident ... " }, { "msg_contents": "On Wed, 10 May 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > i pg_hba.conf, that host has:\n> > host trends_acctng 216.126.72.30 255.255.255.255 ident sameuser\n> > And its the only time we have ident being used ... \n> > right now, its the only theory I ahve to work with ... \n> \n> Bingo. 
All your cores show the thing waiting inside the ident code:\n> \n> (gdb) bt\n> #0 0x18263890 in recvfrom () from /usr/lib/libc.so.4\n> #1 0x1825062b in recv () from /usr/lib/libc.so.4\n> #2 0x80ad4d0 in ident (remote_ip_addr={s_addr = 508067544}, local_ip_addr={\n> s_addr = 56131288}, remote_port=27631, local_port=14357, \n> ident_failed=0xbfbfeeef \"�\\004\\023 \\b,\\207\\024\\b\\212\\217(\\030\\223�\\203￿\\204￿|�\\n\\b�\\214+\\0304￿P\", \n> ident_username=0xbfbfeef0 \"\\004\\023 \\b,\\207\\024\\b\\212\\217(\\030\\223�\\203￿\\204￿|�\\n\\b�\\214+\\0304￿P\") at hba.c:635\n> #3 0x80ad912 in authident (raddr=0x82011ac, laddr=0x8201140, \n> postgres_username=0x8201261 \"db\", auth_arg=0x8201304 \"sameuser\")\n> at hba.c:869\n> #4 0x80ac5b9 in be_recvauth (port=0x8201000) at auth.c:523\n> #5 0x80e0c4a in readStartupPacket (arg=0x8201000, len=292, pkt=0x820101c)\n> at postmaster.c:1214\n> #6 0x80aeb67 in PacketReceiveFragment (port=0x8201000) at pqpacket.c:102\n> #7 0x80e08ad in ServerLoop () at postmaster.c:982\n> #8 0x80e039a in PostmasterMain (argc=13, argv=0xbfbffbc4) at postmaster.c:723\n> #9 0x80aee43 in main (argc=13, argv=0xbfbffbc4) at main.c:93\n> #10 0x8063393 in _start ()\n> \n> Looking at the code, there doesn't seem to be any defense against a\n> broken ident server --- there is no timeout or anything being used here!\n> Ugh. Has it always been like this?\n> \n> Anyway, I think the immediate fix for you is to stop using ident auth\n> for that host, at least till we can improve this code...\n\nOnce I started scanning with lsof and saw the auth stuff, I clued in and\nwe disabled the ident stuff ... looking at your backtrace above, I should\nhave clued in sooner, as I *saw* the ident on line 2, but didn't *see* it\n:(\n\nThanks ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 10 May 2000 11:34:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_hba.conf && ident ... " }, { "msg_contents": "Tom Lane writes:\n> The Hermit Hacker <[email protected]> writes:\n> > i pg_hba.conf, that host has:\n> > host trends_acctng 216.126.72.30 255.255.255.255 ident sameuser\n> > And its the only time we have ident being used ... \n> > right now, its the only theory I ahve to work with ... \n> \n> Bingo. All your cores show the thing waiting inside the ident code:\n[...]\n> Looking at the code, there doesn't seem to be any defense against a\n> broken ident server --- there is no timeout or anything being used here!\n> Ugh. Has it always been like this?\n> \n> Anyway, I think the immediate fix for you is to stop using ident auth\n> for that host, at least till we can improve this code...\n\nI came across this problem a year and a half ago. In my case, the\nproblem was that the client was connecting more than the default limit\nof 40 times per minute so inetd was suspending the auth/identd service.\nI raised the limit by changing to \"nowait.500\" and that problem went\naway. I'd thought that I'd fixed PostgreSQL itself too but looking\nback in my mail logs I can only find my patch which fixes the problem\nwith sending ident requests from a server with an IP alias. I may have\nforgotten to send in the patch (or even to write one) for the \"ident\nsynchronous in postmaster\" problem itself. Sorry. 
I'll look harder.\n\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n", "msg_date": "Wed, 10 May 2000 16:51:35 +0100", "msg_from": "Malcolm Beattie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf && ident ..." }, { "msg_contents": "Malcolm Beattie <[email protected]> writes:\n> I'd thought that I'd fixed PostgreSQL itself too but looking\n> back in my mail logs I can only find my patch which fixes the problem\n> with sending ident requests from a server with an IP alias. I may have\n> forgotten to send in the patch (or even to write one) for the \"ident\n> synchronous in postmaster\" problem itself. Sorry. I'll look harder.\n\nYes, I see your alias patch in there, but that doesn't have anything to\ndo with the problem of a nonresponding ident server. I agree with Jan\nthat a really good fix would allow the postmaster to return to its outer\nevent loop while waiting for the ident response. It'd be a nontrivial\nrewrite though... anyone use ident enough to want to tackle it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 May 2000 12:09:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf && ident ... " }, { "msg_contents": "Tom Lane writes:\n> Malcolm Beattie <[email protected]> writes:\n> > I'd thought that I'd fixed PostgreSQL itself too but looking\n> > back in my mail logs I can only find my patch which fixes the problem\n> > with sending ident requests from a server with an IP alias. I may have\n> > forgotten to send in the patch (or even to write one) for the \"ident\n> > synchronous in postmaster\" problem itself. Sorry. I'll look harder.\n> \n> Yes, I see your alias patch in there, but that doesn't have anything to\n> do with the problem of a nonresponding ident server. I agree with Jan\n> that a really good fix would allow the postmaster to return to its outer\n> event loop while waiting for the ident response. It'd be a nontrivial\n> rewrite though... anyone use ident enough to want to tackle it?\n\nIt looks like the whole pg_hba thing isn't really designed to be\nasynchronous or event-driven. A cheap and cheerful fix would be to\nreplace the blocking connect/send/recv in ident() in hba.c with\nfoo_timeout ones (for foo one of connect/send/recv). Basically, set\nO_NONBLOCK on the socket with fcntl and have foo_timeout() do\n ...\n FD_SET(ourfd, &fds);\n tv.tv_sec = TIMEOUT;\n foo(...);\n if (select(ourfd+1, &fds, &fds, 0, &tv) == -1)\n\treturn -1;\n return foo(...);\nAt least you then have an upper bound of about 3*TIMEOUT on how long\nthe postmaster is busy. It would still be susceptible to a denial of\nservice attack though. The other option would be an alarm() timeout\nwhich could wrap the entire ident process but doing alarms portably\nand safely is weird on some platforms depending on what else is going\non at the time.\n\n--Malcolm\n\n-- \nMalcolm Beattie <[email protected]>\nUnix Systems Programmer\nOxford University Computing Services\n", "msg_date": "Wed, 10 May 2000 17:28:52 +0100", "msg_from": "Malcolm Beattie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf && ident ..." 
}, { "msg_contents": "Malcolm Beattie <[email protected]> writes:\n> It looks like the whole pg_hba thing isn't really designed to be\n> asynchronous or event-driven.\n\nNope, the module would need a pretty thorough rewrite ...\n\n> A cheap and cheerful fix would be to\n> replace the blocking connect/send/recv in ident() in hba.c with\n> foo_timeout ones (for foo one of connect/send/recv).\n\nThat was what I was thinking too, unless we find a volunteer to do\nthe bigger job. I don't particularly care to spend that much time\non this problem myself.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 May 2000 12:43:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_hba.conf && ident ... " } ]
[ { "msg_contents": "I've cleaned up the FTP site, removing the RC5 files, as well as\nseveral old, obsolete directories and files.\n\nI moved some files around too, but only to fit in with the current\nscheme of having releases in their own directories.\n\nLet me know if I moved too much or not enough...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 10 May 2000 14:06:15 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "FTP site" } ]
[ { "msg_contents": "These are exerpts from a message from Tatsuo Ishii dated January 26, on\nthe subject of fragile code in the multibyte routines:\n\n---- begin ----\nDefensive programming saves the system but does not user. Once\ncorrupted data is stored in the system, it's totally useless for the\nuser anyway. What about validating data *before* inserting it into a\ntable?\n---- end ----\n\n---- begin ----\n> >Here it is. With this patch, copy out should be happy even with the\n> >wrong data. I'm not sure if it could be displayed correctly, though.\n> \n> Thank you very much. However, I think even this is too optimistic:\n> \n> >! \tif (*s & 0x80)\n> \n> Shouldn't it be something like:\n> \n> if ((*s & 0x80) && (*(s+1) & 0x80))\n> \n> Even though \"\\242\\242\\242\\0\" is an invalid EUC sequence, it still shouldn't be\n> allowed to break the software.\n\nThanks for the suggestion. More robust code is always good.\n---- end ----\n\nMore robust code may always be good, but \"good\" apparently doesn't always go\ninto the tree. Imagine my surprise, while upgrading a production server\nfrom 6.5.3 to 7.0, when the data dumped from the old database failed to load\ninto the new database (well, crashed the backend, to be specific).\n\nApparently the \"validate your own damn data\" sentiment of the first excerpt\nabove has prevailed, because, on inspection, the MB code is just as fragile\nas it was five months ago.\n\nI was forced to perform emergency repairs to my database dump file to fool a \nnon-multibyte 7.0 into accepting it. Since EUC_CN is compatible with \nLatin-1, and since the benefits of multibyte are small compared to the \nrisks, I intend to stick with unibyte Postgres henceforth.\n\nI would, though, recommend a warning in the \"INSTALL\" file along the lines of:\n\n \"WARNING: Use of improperly-encoded text with multi-byte support enabled\n WILL lead to data corruption and/or loss. Do not enable multi-byte support\n unless you intend to fully validate your own damn data.\"\n\n\t-Michael Robinson\n\n", "msg_date": "Wed, 10 May 2000 22:08:19 +0800 (+0800)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Multibyte still broken" }, { "msg_contents": "Michael Robinson <[email protected]> writes:\n> More robust code may always be good, but \"good\" apparently doesn't always go\n> into the tree. Imagine my surprise, while upgrading a production server\n> from 6.5.3 to 7.0, when the data dumped from the old database failed to load\n> into the new database (well, crashed the backend, to be specific).\n\nSounds like someone failed to follow through on applying that fix :-(.\nPatches have been known to slip through the cracks before, especially\nwhen not fully worked out and formally submitted.\n\n> I would, though, recommend a warning in the \"INSTALL\" file along the lines of:\n\n> \"WARNING: Use of improperly-encoded text with multi-byte support enabled\n> WILL lead to data corruption and/or loss. Do not enable multi-byte support\n> unless you intend to fully validate your own damn data.\"\n\nSorry you had trouble, but the above seems uncalled-for. 
What would be\nmore productive is a submitted patch to apply against 7.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 May 2000 11:08:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multibyte still broken " }, { "msg_contents": "\nOk, it appears the database backend is being stupid again, I'm getting all\nkinds of errors.\n\nFrom my PHP code, I'm getting this kind of error now:\n\n\nheap_beginscan:\n!RelationIsValid(relation) in classes/db_pgsql.inc on line 63\nDatabase error: Invalid SQL: select\nuser_id_hash,session_id,user_id,agency_id,date_part('epoch',created) as\nstart,date_part('epoch',updated) as age from users_online where\nuser_id_hash='9de5085eac89de8be0169b6dc6801524'\nPostgreSQL Error: 1 (ERROR: heap_beginscan: !RelationIsValid(relation) )\nSession halted\n\nI got \"Sorry, too many clients already\" -- a little while ago..\n\nI'm running at debug level 3, but have nothing to show for it in the log\nfile except this :\n\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died\nabnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate\nyour database system connection and exit.\n Please reconnect to the database system and repeat your query.\n/usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process\n44639\n/usr/local/pgsql/bin/postmaster: CleanupProc: reinitializing shared memory\nand semaphores\nshmem_exit(0) [#0]\n\nSo.. Now that I've got it to break, any ideas on what I can do to figure out\nwhy it's happening? This is still a 6.5.3 server.. (I was going to upgrade\ntoday believe it or not.)\n\n\n- Mitch\n\n\"The only real failure is quitting.\"\n\n\n", "msg_date": "Wed, 10 May 2000 11:57:28 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Great, big errors ... Again." }, { "msg_contents": "On Wed, 10 May 2000, Mitch Vincent wrote:\n\n> \n> Ok, it appears the database backend is being stupid again, I'm getting all\n> kinds of errors.\n> \n> >From my PHP code, I'm getting this kind of error now:\n> \n> \n> heap_beginscan:\n> !RelationIsValid(relation) in classes/db_pgsql.inc on line 63\n> Database error: Invalid SQL: select\n> user_id_hash,session_id,user_id,agency_id,date_part('epoch',created) as\n> start,date_part('epoch',updated) as age from users_online where\n> user_id_hash='9de5085eac89de8be0169b6dc6801524'\n> PostgreSQL Error: 1 (ERROR: heap_beginscan: !RelationIsValid(relation) )\n> Session halted\n> \n> I got \"Sorry, too many clients already\" -- a little while ago..\n> \n> I'm running at debug level 3, but have nothing to show for it in the log\n> file except this :\n> \n> NOTICE: Message from PostgreSQL backend:\n> The Postmaster has informed me that some other backend died\n> abnormally and possibly corrupted shared memory.\n> I have rolled back the current transaction and am going to terminate\n> your database system connection and exit.\n> Please reconnect to the database system and repeat your query.\n> /usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process\n> 44639\n> /usr/local/pgsql/bin/postmaster: CleanupProc: reinitializing shared memory\n> and semaphores\n> shmem_exit(0) [#0]\n> \n> So.. Now that I've got it to break, any ideas on what I can do to figure out\n> why it's happening? This is still a 6.5.3 server.. 
(I weas going to upgrade\n> today believe it or not.)\n\nMy suggestions at this time, with v7.0 released (and assuming you aren't\nusing ident?) is to get the upgrade over and done with, so that we can\ndebug that. It's hard to spend the resources on debugging v6.5.3 when\na) you are planning on upgrading anyway and b) it might already be fixed\nin v7.0 :(\n\nAt least if you can generate the errors with v7.0, it's fresh in people's minds\n...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Wed, 10 May 2000 14:01:11 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Great, big errors ... Again." }, { "msg_contents": "Already going...\n\n- Mitch\n\n\"The only real failure is quitting.\"\n\n\n----- Original Message -----\nFrom: The Hermit Hacker <[email protected]>\nTo: Mitch Vincent <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, May 10, 2000 1:01 PM\nSubject: Re: [HACKERS] Great, big errors ... Again.\n\n\n> On Wed, 10 May 2000, Mitch Vincent wrote:\n>\n> >\n> > Ok, it appears the database backend is being stupid again, I'm getting\nall\n> > kinds of errors.\n> >\n> > >From my PHP code, I'm getting this kind of error now:\n> >\n> >\n> > heap_beginscan:\n> > !RelationIsValid(relation) in classes/db_pgsql.inc on line 63\n> > Database error: Invalid SQL: select\n> > user_id_hash,session_id,user_id,agency_id,date_part('epoch',created) as\n> > start,date_part('epoch',updated) as age from users_online where\n> > user_id_hash='9de5085eac89de8be0169b6dc6801524'\n> > PostgreSQL Error: 1 (ERROR: heap_beginscan: !RelationIsValid(relation) )\n> > Session halted\n> >\n> > I got \"Sorry, too many clients already\" -- a little while ago..\n> >\n> > I'm running at debug level 3, but have nothing to show for it in the log\n> > file except this :\n> >\n> > NOTICE: Message from PostgreSQL backend:\n> > The Postmaster has informed me that some other backend died\n> > abnormally and possibly corrupted shared memory.\n> > I have rolled back the current transaction and am going to\nterminate\n> > your database system connection and exit.\n> > Please reconnect to the database system and repeat your query.\n> > /usr/local/pgsql/bin/postmaster: CleanupProc: sending SIGUSR1 to process\n> > 44639\n> > /usr/local/pgsql/bin/postmaster: CleanupProc: reinitializing shared\nmemory\n> > and semaphores\n> > shmem_exit(0) [#0]\n> >\n> > So.. Now that I've got it to break, any ideas on what I can do to figure\nout\n> > why it's happening? This is still a 6.5.3 server.. (I weas going to\nupgrade\n> > today believe it or not.)\n>\n> My suggestions at this time, with v7.0 relesed (and assuming you aren't\n> using ident?) is to get the upgrade over and done with, so that we can\n> debug that. Its hard to spend the resources on debugging v6.5.3 when\n> a) you are planning on upgrading anyway and b) it might already be fixed\n> in v7.0 :(\n>\n> At least if you can generate the errors with v7.0, its fresh in ppls minds\n> ...\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org\n>\n>\n\n", "msg_date": "Wed, 10 May 2000 13:06:46 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Great, big errors ... Again." 
}, { "msg_contents": "> More robust code may always be good, but \"good\" apparently doesn't always go\n> into the tree. Imagine my surprise, while upgrading a production server\n> from 6.5.3 to 7.0, when the data dumped from the old database failed to load\n> into the new database (well, crashed the backend, to be specific).\n> \n> Apparently the \"validate your own damn data\" sentiment of the first excerpt\n> above has prevailed, because, on inspection, the MB code is just as fragile\n> as it was five months ago.\n> \n> I was forced to perform emergency repairs to my database dump file to fool a \n> non-multibyte 7.0 into accepting it. Since EUC_CN is compatible with \n> Latin-1, and since the benefits of multibyte are small compared to the \n> risks, I intend to stick with unibyte Postgres henceforth.\n> \n> I would, though, recommend a warning in the \"INSTALL\" file along the lines of:\n> \n> \"WARNING: Use of improperly-encoded text with multi-byte support enabled\n> WILL lead to data corruption and/or loss. Do not enable multi-byte support\n> unless you intend to fully validate your own damn data.\"\n\nSorry for the problem. I forgot about issue:-<\n\nWhat I'm thinking now to fix the problem you found is that doing data\nvalidataion in the text/var/char input functions, rather than tweaking\nthe mb functions. If corrupted MB string was found, then call\nelog(ERROR) to abort the transation. Will appear in 7.0.1 unless\nsomeone objects.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 May 2000 10:07:19 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multibyte still broken" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n>Sorry you had trouble,\n\nTrouble I had foreseen, which I had brought to the mailing list, and for\nwhich I had provided an explanation and fix, by the way.\n\n>but the above seems uncalled-for. What would be\n>more productive is a submitted patch to apply against 7.0.\n\nActually, I think there are three separate issues here.\n\n1. There was a potentially serious problem, and it didn't get on Bruce's\n master TODO list, which is the authoritative reference for what in Postgres\n needs to be fixed. I take responsibility for not seeing to it that this\n was done. \n\n2. By its nature, the multi-byte code is poorly overseen, and poorly \n exercised. This is understandable, as the overwhelming majority of \n Postgres installations don't want or need it. However, the result is that\n it's effectively a \"contrib\"-quality component in the Postgres core.\n\n3. The multi-byte support, as it currently exists, is founded on a particular\n philosophy, one which I argue is not the most pragmatic. In the East Asia\n that I know and love, multibyte support, as a rule, is an ugly hack on top\n of unibyte tools and infrastructure. The exception is end-to-end Unicode\n systems, which can be relied on to produce predictable data. However, I've\n never encountered a native Simplified Chinese GB application in which it\n was the least bit difficult to produce \"illegal\" code sequences. 
In a\n real-world environment, with email attachments, cut-and-paste, and whatnot,\n it's practically inevitable.\n\n Thus, I cannot accept a situation where my database aborts on, or\n otherwise rejects data that is produced and accepted by all other tools\n in the work environment, and will stick with a unibyte build as long as\n this is the case.\n\n\t-Michael Robinson\n\n", "msg_date": "Thu, 11 May 2000 22:06:50 +0800 (+0800)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multibyte still broken" }, { "msg_contents": "> Thus, I cannot accept a situation where my database aborts on, or\n> otherwise rejects data that is produced and accepted by all other tools\n> in the work environment, and will stick with a unibyte build as long as\n> this is the case.\n\nWe are planning on working on NATIONAL CHARACTER and CHARACTER SET\nfeatures for the next release, which would move MB into a supported\nfeature which can coexist with ascii in the standard installation.\nWould you be interested in contributing to that, either with code or\nsimply with implementation ideas and testing? I know that Tatsuo is\nplanning on participating, and I'm looking forward to working on it\ntoo, though as you point out those of us who can get by with ascii\ntext don't know how to exercise MB capabilities so I'll need\nsuggestions and help.\n\nRegards.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 11 May 2000 14:58:30 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multibyte still broken" }, { "msg_contents": "> 3. The multi-byte support, as it currently exists, is founded on a particular\n> philosophy, one which I argue is not the most pragmatic. In the East Asia\n> that I know and love, multibyte support, as a rule, is an ugly hack on top\n> of unibyte tools and infrastructure. The exception is end-to-end Unicode\n> systems, which can be relied on to produce predictable data. However, I've\n> never encountered a native Simplified Chinese GB application in which it\n> was the least bit difficult to produce \"illegal\" code sequences. In a\n> real-world environment, with email attachments, cut-and-paste, and whatnot,\n> it's practically inevitable.\n\nI am surprised to hear that you have such poor quality tools that\nproduce illegal code sequences of Simplified Chinese. In Japan, as far\nas I know, we never have such low quality tools which generate\nillegal Japanese characters just because they are not accepted in the\nmarket, even in the case of email attachments, or cut-and-paste or\nwhatever. 
I would like to hear from one living in other countries\nsuch as Taiwan or Korea about their situations.\n\nAnyway, IMHO letting users know that they have corrupted data is much\nbetter than making them believe that their data are ok.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 12 May 2000 00:44:05 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multibyte still broken" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>We are planning on working on NATIONAL CHARACTER and CHARACTER SET\n>features for the next release, which would move MB into a supported\n>feature which can coexist with ascii in the standard installation.\n>Would you be interested in contributing to that, either with code or\n>simply with implementation ideas and testing?\n\nIt will be my pleasure to provide as much support as my circumstances permit.\nUnfortunately, though, my circumstances lately don't permit a whole lot :-(.\n\n\t-Michael\n\n", "msg_date": "Thu, 11 May 2000 23:48:20 +0800 (+0800)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multibyte still broken" }, { "msg_contents": "> Tom Lane <[email protected]> writes:\n> >Sorry you had trouble,\n> \n> Trouble I had foreseen, which I had brought to the mailing list, and for\n> which I had provided an explanation and fix, by the way.\n> \n> >but the above seems uncalled-for. What would be\n> >more productive is a submitted patch to apply against 7.0.\n> \n> Actually, I think there are three separate issues here.\n> \n> 1. There was a potentially serious problem, and it didn't get on Bruce's\n> master TODO list, which is the authoritative reference for what in Postgres\n> needs to be fixed. I take responsibility for not seeing to it that this\n> was done. \n\nPlease tell me what to add.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 12:53:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multibyte still broken" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>I am supprised to hear that you have so poor quality tools that\n>produce illegal code sequences of Simplified Chinese. In Japan, as far\n>as I know, we never have such a low quality tools which generate\n>illegal Japanese charaters just because they are not accepted in the\n>market, even in the case of email attachments, or cut-and-past or\n>whatever.\n\nThe problem is not that the tools produce \"illegal characters\". The problem\nis that, as an EUC code, GB permits the coexistence of standard ascii\ncharacters with double-byte hanzi characters. Furthermore, most Chinese \nsoftware is an operating-system \"hack\" on top of English-language software\nbased on a Latin-1 character set (the Chinese software market is underserved\ncompared to Japan, so we have to cope as best we can).\n\nThe result is that it is possible to, for example, insert a carriage return\nor ASCII comma into the middle of a hanzi, which breaks the alignment for all \nthe hanzi on the rest of the line. 
It's also possible, in non-native Chinese\napplications, to select one byte of a hanzi character in a cut or copy \noperation.\n\nSo the problem is that the tools do not uniformly respect the integrity of\na double-byte hanzi character, but rather treat it as two individual Latin-1\ncharacters.\n\nThe important point, though, is that all tools, whether native Chinese or\n\"hacked\" English, accept the resulting invalid code sequences consistently,\nrobustly, and without complaint.\n\n\t-Michael\n\n", "msg_date": "Fri, 12 May 2000 01:56:15 +0800 (+0800)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multibyte still broken" }, { "msg_contents": "I am going to need a confirmation from someone else before adding it to\nthe TODO list.\n\n> Bruce Momjian <[email protected]> writes:\n> >> 1. There was a potentially serious problem, and it didn't get on Bruce's\n> >> master TODO list, which is the authoritative reference for what in Postgres\n> >> needs to be fixed. I take responsibility for not seeing to it that this\n> >> was done. \n> >\n> >Please tell me what to add.\n> \n> TODO: Multibyte encoding/decoding routines need to handle invalid character\n> sequences safely (and, if possible, gracefully).\n> \n> \t-Michael Robinson\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 14:02:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multibyte still broken" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> 1. There was a potentially serious problem, and it didn't get on Bruce's\n>> master TODO list, which is the authoritative reference for what in Postgres\n>> needs to be fixed. I take responsibility for not seeing to it that this\n>> was done. \n>\n>Please tell me what to add.\n\nTODO: Multibyte encoding/decoding routines need to handle invalid character\n sequences safely (and, if possible, gracefully).\n\n\t-Michael Robinson\n\n", "msg_date": "Fri, 12 May 2000 02:02:59 +0800 (+0800)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multibyte still broken" } ]
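
A sketch of the validation this thread converges on, built around the check Michael proposed earlier (a lead byte with the high bit set must be followed by a second byte with the high bit set). This covers two-byte EUC_CN-style sequences only; EUC_JP's SS2/SS3 codes would need extra cases, and the function name is hypothetical, not code that shipped in any release.

#include <stdbool.h>

/* Returns false for truncated sequences such as "\242\242\242\0". */
static bool
euc_string_is_valid(const unsigned char *s)
{
    while (*s)
    {
        if (*s & 0x80)
        {
            /* lead byte: the second byte must exist and have its high bit set */
            if ((s[1] & 0x80) == 0)
                return false;
            s += 2;
        }
        else
            s++;               /* plain ASCII byte passes through */
    }
    return true;
}

An input function could call this and elog(ERROR) on a false result, which is the behaviour Tatsuo proposes for 7.0.1.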
[ { "msg_contents": "On Wed, 10 May 2000, Jan Wieck wrote:\n\n> Tom Lane wrote:\n> > Bingo. All your cores show the thing waiting inside the ident code:\n> >\n> > [...]\n> >\n> > Looking at the code, there doesn't seem to be any defense against a\n> > broken ident server --- there is no timeout or anything being used here!\n> > Ugh. Has it always been like this?\n> >\n> > Anyway, I think the immediate fix for you is to stop using ident auth\n> > for that host, at least till we can improve this code...\n> \n> Looks like the entire communication with a new client is\n> handled in a nonblocking manner via select(2) in\n> ServerLoop(). I think the ident lookup belongs to there too,\n> and this improvement isn't something for a quick hack. It\n> takes a little longer to be well tested.\n> \n> Let's try it for 7.0.1 or 7.0.2. Clearly is a bugfix IMHO.\n> \n> Also we might think about using some kind of timeout after\n> which a new connection should either get rejected or succeeds\n> in backend start. Just to prevent a bogus client from\n> creating a forever dangling connection.\n\nCool, our first DOS :)\n\n\n", "msg_date": "Wed, 10 May 2000 12:58:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_hba.conf && ident ..." } ]
[ { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On Tue, 9 May 2000, Tom Lane wrote:\n>> dnl Check tr flags to convert from lower to upper case\n\n>> Does anyone recall why this test is in there to begin with?\n\n> I don't see the results of this test being used anywhere at all, so I'd\n> say yank it. If your system doesn't support tr '[A-Z]' '[a-z]' the\n> configure script will fail to run anyway, as it uses this contruct\n> indiscriminately.\n\nThe results *are* used, in backend/utils/Gen_fmgrtab.sh.in (and\napparently nowhere else). But the data being processed there is just\nbuiltin function names, so I'm at a loss why someone thought that it'd\nbe worth testing for a locale-specific variant of 'tr'. I agree, I'm\npretty strongly tempted to yank it.\n\nBut we haven't yet figured out Travis' problem: why is the configure\ntest failing? Useless or not, I don't see why it's falling over...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 May 2000 12:02:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Problems compiling version 7 " }, { "msg_contents": "I've found part of the problem. I put \"echo $TR\" right before the line\nthat is failing. $TR is set to \"/usr/ucb/tr\" during the\nconfiguration. The directory ucb does not exist. If execute \"which\ntr\" at the command line, I get \"/usr/bin//tr.\" Why is $TR getting set to\n/usr/ucb/tr?\n\nBy the way, I do not get this problem when compiling the last version of\nPostgresql on this same machine. I'm in the processof upgrading. That\ncompile was fine.\n\nThanks,\n\n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------\n\n> \n> But we haven't yet figured out Travis' problem: why is the configure\n> test failing? Useless or not, I don't see why it's falling over...\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Wed, 10 May 2000 12:11:43 -0500 (EST)", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Problems compiling version 7 " }, { "msg_contents": "Tom Lane writes:\n\n> >> dnl Check tr flags to convert from lower to upper case\n\n> The results *are* used, in backend/utils/Gen_fmgrtab.sh.in (and\n> apparently nowhere else).\n\nAh, I see. Substituting into source files directly from configure ... very\nevil...\n\n(Before you ask why: What if I change Gen_fmgrtab.sh.in, do I have to\nre-configure?)\n\n> But we haven't yet figured out Travis' problem: why is the configure\n> test failing? Useless or not, I don't see why it's falling over...\n\nUnfortunately he cut off the line where it says `checking for tr'.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 10 May 2000 23:25:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Problems compiling version 7 " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Ah, I see. Substituting into source files directly from configure ... very\n> evil...\n> (Before you ask why: What if I change Gen_fmgrtab.sh.in, do I have to\n> re-configure?)\n\nYup, or at least re-run config.status. I've griped that configure\nwrites far too many files myself. 
But I didn't have much luck\nconvincing the other developers that it's a bad idea to set things up\nthat way, rather than writing just a small number of config files.\n\nWe're going to have to deal with the problem though if we ever want\nto be able to build in a separate directory tree...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 May 2000 18:24:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Problems compiling version 7 " } ]
[ { "msg_contents": "On the Postgres website main page, the current version is listed as 7.0.\nHowever, when you go to the download page, it says 6.5.3 and I think the\nlink is pointing to 6.5.3. I can see the 7.0 tar distribution on the FTP\nsite, but was wondering if the HTML pages just hadn't been updated.\n\n\n-Tony\n\n\n", "msg_date": "Wed, 10 May 2000 12:42:24 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Is 7.0 up?" }, { "msg_contents": "On Wed, 10 May 2000, G. Anthony Reina wrote:\n\n> On the Postgres website main page, the current version is listed as 7.0.\n> However, when you go to the download page, it says 6.5.3 and I think the\n> link is pointing to 6.5.3. I can see the 7.0 tar distribution on the FTP\n> site, but was wondering if the HTML pages just hadn't been updated.\n\nOops! Working on that now.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 10 May 2000 15:47:13 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is 7.0 up?" }, { "msg_contents": "> On Wed, 10 May 2000, G. Anthony Reina wrote:\n> \n> > On the Postgres website main page, the current version is listed as 7.0.\n> > However, when you go to the download page, it says 6.5.3 and I think the\n> > link is pointing to 6.5.3. I can see the 7.0 tar distribution on the FTP\n> > site, but was wondering if the HTML pages just hadn't been updated.\n> \n> Oops! Working on that now.\n> \n\nMan, we are fast.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 May 2000 15:56:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is 7.0 up?" } ]
[ { "msg_contents": "This reminds me of a problem I once had.\n\nI was trying to \"make\" the documentation, but zcat kept failing because the\nSolaris version does not work the same as the GNU version. So I installed\nthe GNU zcat, ran configure again, but still make was failing because of\nzcat...\n\nI found that once configure has found an item it is looking for, it caches\nit in config.cache. From then on, even if you do a \"make clean\", configure\nstill uses things from config.cache.\n\nTo have configure essentially start from scratch and \"re-find\" things, you\nmust run \"make distclean\", which will also delete config.cache. After doing\nthis, and running configure again, everything worked fine.\n\nHope this helps,\n\nPhil Culberson\nDAT Services\n\n\n-----Original Message-----\nFrom: Peter Eisentraut [mailto:[email protected]]\nSent: Wednesday, May 10, 2000 2:25 PM\nTo: Tom Lane\nCc: Travis Bauer; [email protected];\[email protected]\nSubject: Re: [HACKERS] Re: [GENERAL] Problems compiling version 7 \n\n\nTom Lane writes:\n\n> >> dnl Check tr flags to convert from lower to upper case\n\n> The results *are* used, in backend/utils/Gen_fmgrtab.sh.in (and\n> apparently nowhere else).\n\nAh, I see. Substituting into source files directly from configure ... very\nevil...\n\n(Before you ask why: What if I change Gen_fmgrtab.sh.in, do I have to\nre-configure?)\n\n> But we haven't yet figured out Travis' problem: why is the configure\n> test failing? Useless or not, I don't see why it's falling over...\n\nUnfortunately he cut off the line where it says `checking for tr'.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n", "msg_date": "Wed, 10 May 2000 15:04:44 -0700", "msg_from": "\"Culberson, Philip\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: Problems compiling version 7 " }, { "msg_contents": "Thanks for all the advice. The following was _exactly_ the problem. I\nhad tried to compile this source code in a recent version of solaris. It\ncompiled flawlessly, but postmaster wouldn't start up. So I just did a\nsecure shell to a linux machine, ran \"make clean\" from the same directory,\nand ran configure. I thought make clean would erase all except the\ndistribution files. It compiled, installed, and I used psql\nto get into the database. I haven't tested the ODBC or the JDBC\ninterfaces yet (I'm using both in my projects), but all seems to be\nworking.\n\nThanks,\n\n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------\n\nOn Wed, 10 May 2000, Culberson, Philip wrote:\n\n> \n> I found that once configure has found an item it is looking for, it caches\n> it in config.cache. From then on, even if you do a \"make clean\", configure\n> still uses things from config.cache.\n> \n\n", "msg_date": "Wed, 10 May 2000 22:00:38 -0500 (EST)", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: Problems compiling version 7 - solved" }, { "msg_contents": "\nIn general this can be solved by doing a \n\n\tmake distclean\n\nwhich should be included in the makefile, which is supposed to kill all \nof the config.cache and generated targets created in the configure step.\n\nSorry if this is already answered, just joined.\n\n-bill\n\nOn Wed, 10 May 2000, Travis Bauer wrote:\n\n> Thanks for all the advice. 
The following was _exactly_ the problem. I\n> had tried to compile this source code in a recent version of solaris. It\n> compiled flawlessly, but postmaster wouldn't start up. So I just did a\n> secure shell to a linux machine, ran \"make clean\" from the same directory,\n> and ran configure. I thought make clean would erase all except the\n> distribution files. It compiled, installed, and I used psql\n> to get into the database. I haven't tested the ODBC or the JDBC\n> interfaces yet (I'm using both in my projects), but all seems to be\n> working.\n> \n> Thanks,\n> \n> ----------------------------------------------------------------\n> Travis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n> ----------------------------------------------------------------\n> \n> On Wed, 10 May 2000, Culberson, Philip wrote:\n> \n> > \n> > I found that once configure has found an item it is looking for, it caches\n> > it in config.cache. From then on, even if you do a \"make clean\", configure\n> > still uses things from config.cache.\n> > \n> \n> \n", "msg_date": "Wed, 10 May 2000 20:25:41 -0700 (PDT)", "msg_from": "\"William J. Mills\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: Problems compiling version 7 - solved" }, { "msg_contents": "Travis Bauer <[email protected]> writes:\n> Thanks for all the advice. The following was _exactly_ the problem. I\n> had tried to compile this source code in a recent version of solaris. It\n> compiled flawlessly, but postmaster wouldn't start up. So I just did a\n> secure shell to a linux machine, ran \"make clean\" from the same directory,\n> and ran configure. I thought make clean would erase all except the\n> distribution files.\n\nAh, bingo. \"make clean\" doesn't erase the configure-produced files,\nand in particular not config.cache, so you were getting polluted by\nconfigure test results from the other platform. \"make distclean\" is\nthe thing to do before reconfiguring. AFAIK this is a pretty\nstandard convention for applications that use configure scripts.\n\n(Actually, it'd probably work to just delete config.cache, but you may\nas well do the \"make distclean\" instead. I know that works.)\n\nNext question: do you feel like pursuing the failure on solaris?\nThat should've worked...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 May 2000 01:36:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Problems compiling version 7 - solved " }, { "msg_contents": "Hi,\n\nWhat is the recommended way to reload a database in 6.5.3?\n\nRight now I tried something like:\npsql -f testfile -u test > /dev/null 2>&1 \n(then enter username and password, and wait)\n(testfile is a dump of a 36MB database)\n\nAnd I think it works, but while it is running it affects the performance of\nother unrelated databases severely. Should this be happening? Top only\nshows four or so backends running, but the webapps are like grinding down\nto 20 seconds or more per page. Normally our webapps can do 12 hits/sec or\nso, so such a drastic drop is surprising. This is on a PIII 500MHz 512MB,\nSCSI 9GB HDD system.\n\nAlso when I did a 30MB psql level copy (\\copy) into a test database and\ntable, somehow the performance of _other_ databases was bad until I did a\nvacuum (which took ages). This was rather surprising. \n\nI'll probably switch to 7.0 soon and see if the issue crops up. I hope\ncopies are much faster in 7.0. Should I turn off fsync when doing bulk\ncopies? 
I'm worried that if the backend crashes during the copy, the other\ndatabases may get corrupted.\n\nAny comments?\n\nThanks,\nLink.\n\n", "msg_date": "Thu, 11 May 2000 15:32:27 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": false, "msg_subject": "Dumping and reloading stuff in 6.5.3" }, { "msg_contents": "Tom,\n\nI read the FAQ on solaris, and the error I got was the IPCMemoryCreate\nerror mentioned there. If I asked the system admins they may reboot a\nmachine for me to fix that error, but it's just as easy for me to use a\nLinux box on our network as it is to use a Solaris box. \n\nThanks again for all the help.\n\n----------------------------------------------------------------\nTravis Bauer | CS Grad Student | IU |www.cs.indiana.edu/~trbauer\n----------------------------------------------------------------\n\nOn Thu, 11 May 2000, Tom Lane wrote:\n\n> \n> Next question: do you feel like pursuing the failure on solaris?\n> That should've worked...\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n", "msg_date": "Thu, 11 May 2000 08:39:32 -0500 (EST)", "msg_from": "Travis Bauer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Problems compiling version 7 - solved " }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> Actually, say if I use postgresql on a journalling file system, would this\n> problem go away? e.g. I just lose data not written, but the database will\n> be at a known uncorrupted state, consistent with logs.\n\nHmm, someone may have to correct me on this but last time I checked all\nthe\njournalling filesystems currently available journalled *filesystem* \n*metadata*. IOW, after a crash, the filesystem structure will be intact,\nbut your database may be completely corrupt.\n\nAnyway, how does the filesystem driver know what is a consistent state \nfor the database? For that postgres would have to do its own\njournalling,\nright?\n-- \nMartijn van Oosterhout <[email protected]>\nhttp://cupid.suninternet.com/~kleptog/\n", "msg_date": "Tue, 16 May 2000 00:12:25 +1000", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dumping and reloading stuff in 6.5.3" }, { "msg_contents": "Lincoln Yeoh <[email protected]> writes:\n> Actually, say if I use postgresql on a journalling file system, would this\n> problem go away?\n\nNot unless your kernel guarantees to write dirty buffers to disk in the\norder they were dirtied, which would be a fairly surprising thing for it\nto guarantee IMHO; that's not how Unix buffer caches normally behave.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 10:15:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dumping and reloading stuff in 6.5.3 " }, { "msg_contents": "> Lincoln Yeoh wrote:\n> > \n> > Actually, say if I use postgresql on a journalling file system, would this\n> > problem go away? e.g. I just lose data not written, but the database will\n> > be at a known uncorrupted state, consistent with logs.\n> \n> Hmm, someone may have to correct me on this but last time I checked all\n> the\n> journalling filesystems currently available journalled *filesystem* \n> *metadata*. IOW, after a crash, the filesystem structure will be intact,\n> but your database maybe completely corrupt.\n\nThe buffers remain sitting in the file system buffers, not on disk in a\ndisk crash. That is the problem. 
Journaling does not change this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 14:32:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dumping and reloading stuff in 6.5.3" } ]
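
The buffer-cache point made above, in miniature: write() only moves data into kernel buffers, and nothing guarantees when, or in what order, those buffers reach the platter; fsync() is what forces the issue. The file name and the missing error handling here are for brevity only.

#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
    int fd = open("datafile", O_WRONLY | O_CREAT, 0600);

    write(fd, "some tuple data\n", 16);  /* lands in the OS buffer cache only */
    fsync(fd);   /* does not return until the data reaches stable storage */
    close(fd);
    return 0;
}

Running with fsync off skips that second step for speed, which is why a crash mid-load can leave on-disk state inconsistent no matter what the filesystem journals.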
[ { "msg_contents": "=# select count(*) from ref_old;\n count \n-------\n 10595\n(1 row)\n\n=# select count(*) from ref_new;\n count \n-------\n 22997\n(1 row)\n\n=# select ref_id from ref_old except select ref_id from ref_new;\n\nTakes over 10 minutes, probably closer to half an hour.\n\nI've also tried using 'NOT IN ( select ref_id from ref_new )'\n\nref_id is an int4, this is on Postgresql 7.0.\n\nThis confuses me because the way I'd plan to execute this query would\nbe something like this: (pseudo code)\n\nresult retval;\nsort(ref_old);\nsort(ref_new);\ni = k = 0;\nwhile (i < count(ref_old)) {\n\twhile(ref_old[i] > ref_new[k])\n\t\tk++;\n\twhile(ref_old[i] == ref_new[k])\n\t\ti++;\n\twhile(ref_old[i] < ref_new[k])\n\t\tstore(&retval, ref_old[i++]);\n}\nreturn (retval);\n\nI can't imagine this algorithm would take over 10 minutes on my\nhardware. Can anyone shed some light on what's going on here?\n\nIs there a way to formulate my SQL to get Postgresql to follow\nthis algorithm?\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Wed, 10 May 2000 15:35:12 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Unable to get acceptable performance from EXCEPT" }, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n> =# select ref_id from ref_old except select ref_id from ref_new;\n> Takes over 10 minutes, probably closer to half an hour.\n> I've also tried using 'NOT IN ( select ref_id from ref_new )'\n\nYup. EXCEPT is effectively translated to a NOT IN, if I recall\ncorrectly, and neither IN ( sub-select ) nor NOT IN ( sub-select )\nare implemented very efficiently. Basically you get O(N^2) behavior\nbecause the inner select is rescanned for each outer tuple.\n\nWe have a TODO list item to try to be smarter about this...\n\n> Is there a way to formulate my SQL to get Postgresql to follow\n> this algorithm [ kind of like a mergejoin ]\n\nNo, but you could try\n\nselect ref_id from ref_old where not exists\n(select ref_id from ref_new where ref_id = ref_old.ref_id);\n\nwhich would at least be smart enough to consider using an index\non ref_new(ref_id) instead of a sequential scan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 May 2000 18:38:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unable to get acceptable performance from EXCEPT " }, { "msg_contents": "* Tom Lane <[email protected]> [000510 16:22] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > =# select ref_id from ref_old except select ref_id from ref_new;\n> > Takes over 10 minutes, probably closer to half an hour.\n> > I've also tried using 'NOT IN ( select ref_id from ref_new )'\n> \n> Yup. EXCEPT is effectively translated to a NOT IN, if I recall\n> correctly, and neither IN ( sub-select ) nor NOT IN ( sub-select )\n> are implemented very efficiently. 
Basically you get O(N^2) behavior\n> because the inner select is rescanned for each outer tuple.\n> \n> We have a TODO list item to try to be smarter about this...\n> \n> > Is there a way to formulate my SQL to get Postgresql to follow\n> > this algorithm [ kind of like a mergejoin ]\n> \n> No, but you could try\n> \n> select ref_id from ref_old where not exists\n> (select ref_id from ref_new where ref_id = ref_old.ref_id);\n> \n> which would at least be smart enough to consider using an index\n> on ref_new(ref_id) instead of a sequential scan.\n\nWhich cuts the query time down to less than a second!\n\nthanks!\n\nReady for the evil magic?\n\nselect\n distinct(o.ref_id)\nfrom\n ref_link o\nwhere\n o.stat_date < '2000-04-26 12:12:41-07'\n AND not exists\n (\n select\n n.ref_id\n from\n ref_link n\n where\n n.stat_date >= '2000-04-26 12:12:41-07'\n AND n.ref_id = o.ref_id\n )\n;\n\nThanks a ton.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Wed, 10 May 2000 17:10:43 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unable to get acceptable performance from EXCEPT" } ]
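
For reference, a runnable version of the merge-based difference Alfred sketched above (his pseudocode has no bounds checks and can run past the end of ref_new). The names are illustrative, and duplicate ref_ids in ref_old are assumed away, matching the DISTINCT in the final query.

#include <stdlib.h>

static int
cmp_int(const void *a, const void *b)
{
    int x = *(const int *) a;
    int y = *(const int *) b;

    return (x > y) - (x < y);
}

/* Store every value of old[] absent from new_[] into out[]; return count. */
static int
except_merge(int *old, int n_old, int *new_, int n_new, int *out)
{
    int i = 0, k = 0, n_out = 0;

    qsort(old, n_old, sizeof(int), cmp_int);
    qsort(new_, n_new, sizeof(int), cmp_int);

    while (i < n_old)
    {
        while (k < n_new && new_[k] < old[i])
            k++;                        /* advance new_ below old[i] */
        if (k < n_new && new_[k] == old[i])
            i++;                        /* present in both: skip */
        else
            out[n_out++] = old[i++];    /* missing from new_: emit */
    }
    return n_out;
}

Two sorts plus one linear pass is the O(N log N) behaviour the NOT EXISTS plan approximates through the index on ref_new(ref_id), versus the O(N^2) inner rescan of the naive NOT IN.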
[ { "msg_contents": "I did a distclean on 7.0, and ran 'wc' on all the *.[chly] files, and\ngot a much larger number than what we got from Berkeley.\n\n\t376175\n\nSeems someone has been busy. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 10 May 2000 21:42:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Now 376175 lines of code" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I did a distclean on 7.0, and ran 'wc' on all the *.[chly] files, and\n> got a much larger number than what we got from Berkeley.\n> \t376175\n> Seems someone has been busy. :-)\n\nForgive a newbie --- what was the count for the original Berkeley code?\nDo you have the same numbers for other milestones?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 May 2000 01:45:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Now 376175 lines of code " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I did a distclean on 7.0, and ran 'wc' on all the *.[chly] files, and\n> > got a much larger number than what we got from Berkeley.\n> > \t376175\n> > Seems someone has been busy. :-)\n> \n> Forgive a newbie --- what was the count for the original Berkeley code?\n> Do you have the same numbers for other milestones?\n\n250,000\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 10:47:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Now 376175 lines of code" }, { "msg_contents": "On Thu, May 11, 2000 at 01:45:31AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I did a distclean on 7.0, and ran 'wc' on all the *.[chly] files, and\n> > got a much larger number than what we got from Berkeley.\n> > \t376175\n> > Seems someone has been busy. :-)\n> \n> Forgive a newbie --- what was the count for the original Berkeley code?\n> Do you have the same numbers for other milestones?\n\nNot that I'm a big believer in kloc as a measure of productivity (oh,\nBruce just said busy, didn't he? That's a different story...), I happen\nto have a couple historical trees laying around, starting with the last\none I found at Berkeley:\n\n postgres-v4r2 244581\n postgres95-1.09 178976\n postgresql-6.1.1 200709\n postgresql-6.3.2 260809\n postgresql-6.4.0 297479\n postgresql-6.4.2 297918\n postgresql-6.5.3 331278 \n\nWell, more than a couple trees, I guess (actually I unpacked tarballs\nfor most of these)\n\nHTH,\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Thu, 11 May 2000 10:23:42 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Now 376175 lines of code" }, { "msg_contents": "I found these numbers quite interesting.\n\n\n> On Thu, May 11, 2000 at 01:45:31AM -0400, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > I did a distclean on 7.0, and ran 'wc' on all the *.[chly] files, and\n> > > got a much larger number than what we got from Berkeley.\n> > > \t376175\n> > > Seems someone has been busy. :-)\n> > \n> > Forgive a newbie --- what was the count for the original Berkeley code?\n> > Do you have the same numbers for other milestones?\n> \n> Not that I'm a big believer in kloc as a measure of productivity (oh,\n> Bruce just said busy, didn't he? That's a different story...), I happen\n> to have a couple historical trees laying around, starting with the last\n> one I found at Berkeley:\n> \n> postgres-v4r2 244581\n> postgres95-1.09 178976\n> postgresql-6.1.1 200709\n> postgresql-6.3.2 260809\n> postgresql-6.4.0 297479\n> postgresql-6.4.2 297918\n> postgresql-6.5.3 331278 \n> \n> Well, more than a couple trees, I guess (actually I unpacked tarballs\n> for most of these)\n> \n> HTH,\n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 03:38:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Now 376175 lines of code" }, { "msg_contents": "Never mind. I see I ran it already on 7.0 and got 376k. You used my\nidential script to get these numbers. I will use your nice numbers for\na presentation at the show in two weeks. Thanks a lot.\n\n\n\n> On Thu, May 11, 2000 at 01:45:31AM -0400, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > I did a distclean on 7.0, and ran 'wc' on all the *.[chly] files, and\n> > > got a much larger number than what we got from Berkeley.\n> > > \t376175\n> > > Seems someone has been busy. :-)\n> > \n> > Forgive a newbie --- what was the count for the original Berkeley code?\n> > Do you have the same numbers for other milestones?\n> \n> Not that I'm a big believer in kloc as a measure of productivity (oh,\n> Bruce just said busy, didn't he? That's a different story...), I happen\n> to have a couple historical trees laying around, starting with the last\n> one I found at Berkeley:\n> \n> postgres-v4r2 244581\n> postgres95-1.09 178976\n> postgresql-6.1.1 200709\n> postgresql-6.3.2 260809\n> postgresql-6.4.0 297479\n> postgresql-6.4.2 297918\n> postgresql-6.5.3 331278 \n> \n> Well, more than a couple trees, I guess (actually I unpacked tarballs\n> for most of these)\n> \n> HTH,\n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 19 Oct 2000 21:03:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Now 376175 lines of code" } ]
[ { "msg_contents": "Did anyone see this patch? I sure didn't. I see it in the patches\narchive, but did not receive the e-mail.\n\nIs the patches list working? Marc?\n\n---------------------------------------------------------------------------\n", "msg_date": "Wed, 10 May 2000 23:27:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Patches list broken?" }, { "msg_contents": "\ni don't have you subscribed to -patches ... I have 62 listed subscribers,\nand you don't appear to be one of them ... you are on every other list\nthough:\n\n pgsql-hackers:\n pgsql-general:\n pgsql-admin:\n pgsql-sql:\n pgsql-core:\n pgsql-ports:\n pgsql-docs:\n pgsql-announce:\n pgsql-bugs:\n pgsql-loophole:\n\nOn Wed, 10 May 2000, Bruce Momjian wrote:\n\n> Did anyone see this patch? I sure didn't. I see it in the patches\n> archive, but did not receive the e-mail.\n> \n> Is the patches list working? Marc?\n> \n> ---------------------------------------------------------------------------\n> \n> >From [email protected] Mon May 8 13:28:54 2000\n> Received: from walter.doc.ic.ac.uk (IDENT:VjgMrPQKQlAhOUagIW0/[email protected] [146.169.2.50])\n> by hub.org (8.9.3/8.9.3) with ESMTP id NAA78580\n> for <[email protected]>; Mon, 8 May 2000 13:27:41 -0400 (EDT)\n> (envelope-from [email protected])\n> Received: from [146.169.51.42] (helo=kungfu.doc.ic.ac.uk ident=mw)\n> by walter.doc.ic.ac.uk with esmtp (Exim 1.890 #1)\n> for [email protected]\n> id 12orKe-0000J8-00; Mon, 8 May 2000 18:28:36 +0100\n> Date: Mon, 8 May 2000 18:27:40 +0100 (BST)\n> From: Mike Wyer <[email protected]>\n> To: [email protected]\n> Subject: kerberos 5 patch against 7.0RC5\n> Message-ID: <[email protected]>\n> MIME-Version: 1.0\n> Content-Type: TEXT/PLAIN; charset=US-ASCII\n> X-Archive-Number: 200005/24\n> \n> You can find it after my sig. Hideous abuse of netiquette, but needs\n> must ...\n> \n> Most (nearly all) of the work was done by David Wragg <[email protected]>\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 11 May 2000 01:00:21 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patches list broken?" }, { "msg_contents": "I'm studying database management on my own with a college text book, and along\nthe way, I've been writing an html file using <dl> to create a definition list\nof terms from the book with my own elaborated explanations that breakdown the\nterseness. I am making examples, and procedures for some things, having to\ndo with normal forms and other things.\n\nIf a glossary of database design/management terms can fit into any of the\nPostgreSQL documentation, I'd be happy to offer it. What I have so far is at:\nhttp://www.comptechnews.com/~reaster/dbdesign.html\n\nMaybe it will be worthy if I keep working on it.\n\nRobert\n\n\n", "msg_date": "Thu, 11 May 2000 00:03:14 -0400", "msg_from": "\"Robert B. 
Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Database Management/Design terms, glossary of" }, { "msg_contents": "> I'm studying database management on my own with a college text book, and along\n> the way, I've been writing an html file using <dl> to create a definition list\n> of terms from the book with my own elaborated explanations that breakdown the\n> terseness. I am making examples, and procedures for some things, having to\n> do with normal forms and other things.\n> If a glossary of database design/management terms can fit into any of the\n> PostgreSQL documentation, I'd be happy to offer it. What I have so far is at:\n> http://www.comptechnews.com/~reaster/dbdesign.html\n\nThat would be a very nice addition to the docs. To incorporate the\ninfo into the main docs we would simple modify the markup to\nSGML/DocBook, then automatically format into hardcopy and html.\n\nPlease consider submitting it when you think it is ready. You might\nalso solicit contributions from others, to help share the load.\n\nRegards.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 11 May 2000 05:31:22 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database Management/Design terms, glossary of" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> i don't have you subscribed to -patches\n\n> On Wed, 10 May 2000, Bruce Momjian wrote:\n>> Did anyone see this patch? I sure didn't. I see it in the patches\n>> archive, but did not receive the e-mail.\n\nI do not recall seeing it either, and I most certainly *was* subscribed\nto -patches ... since Bruce is generally agreed to be our lead\npatch-applier, I'd be more than a little startled to hear that he hasn't\nbeen subscribed there ... so it sounds like majordomo has dropped some\nsubscriptions :-(. Do you have auto-drop-on-any-bounce features\nenabled?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 May 2000 01:51:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patches list broken? " }, { "msg_contents": "On Thu, May 11, 2000 at 01:51:57AM -0400, Tom Lane wrote:\n> The Hermit Hacker <[email protected]> writes:\n> > i don't have you subscribed to -patches\n> \n> > On Wed, 10 May 2000, Bruce Momjian wrote:\n> >> Did anyone see this patch? I sure didn't. I see it in the patches\n> >> archive, but did not receive the e-mail.\n> \n> I do not recall seeing it either, and I most certainly *was* subscribed\n> to -patches ... since Bruce is generally agreed to be our lead\n> patch-applier, I'd be more than a little startled to hear that he hasn't\n> been subscribed there ... so it sounds like majordomo has dropped some\n> subscriptions :-(. Do you have auto-drop-on-any-bounce features\n> enabled?\n\nI certainly used to be on patches, never unsubscribed, but have also never\nreceived a single mail from said list since the move to majordomo 2...\n\nPatrick\n", "msg_date": "Thu, 11 May 2000 11:53:15 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patches list broken?" }, { "msg_contents": "On Thu, 11 May 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > i don't have you subscribed to -patches\n> \n> > On Wed, 10 May 2000, Bruce Momjian wrote:\n> >> Did anyone see this patch? I sure didn't. 
I see it in the patches\n> >> archive, but did not receive the e-mail.\n> \n> I do not recall seeing it either, and I most certainly *was* subscribed\n> to -patches ... since Bruce is generally agreed to be our lead\n> patch-applier, I'd be more than a little startled to hear that he hasn't\n> been subscribed there ... so it sounds like majordomo has dropped some\n> subscriptions :-(. Do you have auto-drop-on-any-bounce features\n> enabled?\n\nNope, they've been working on it, but all they have so far is a bounce\nmessage that gets sent to me instead of the normal MAILER-DAEMON errors\n...\n\n\n", "msg_date": "Thu, 11 May 2000 08:49:18 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patches list broken? " }, { "msg_contents": "\nHey, in your case, I only have you on loophole ... how are you even\nreading this thread? :)\n\n\nOn Thu, 11 May 2000, Patrick Welche wrote:\n\n> On Thu, May 11, 2000 at 01:51:57AM -0400, Tom Lane wrote:\n> > The Hermit Hacker <[email protected]> writes:\n> > > i don't have you subscribed to -patches\n> > \n> > > On Wed, 10 May 2000, Bruce Momjian wrote:\n> > >> Did anyone see this patch? I sure didn't. I see it in the patches\n> > >> archive, but did not receive the e-mail.\n> > \n> > I do not recall seeing it either, and I most certainly *was* subscribed\n> > to -patches ... since Bruce is generally agreed to be our lead\n> > patch-applier, I'd be more than a little startled to hear that he hasn't\n> > been subscribed there ... so it sounds like majordomo has dropped some\n> > subscriptions :-(. Do you have auto-drop-on-any-bounce features\n> > enabled?\n> \n> I certainly used to be on patches, never unsubscribed, but have also never\n> received a single mail from said list since the move to majordomo 2...\n> \n> Patrick\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 11 May 2000 08:50:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patches list broken?" }, { "msg_contents": "I have added this to our FAQ. Thanks.\n\n\n> I'm studying database management on my own with a college text book, and along\n> the way, I've been writing an html file using <dl> to create a definition list\n> of terms from the book with my own elaborated explanations that breakdown the\n> terseness. I am making examples, and procedures for some things, having to\n> do with normal forms and other things.\n> \n> If a glossary of database design/management terms can fit into any of the\n> PostgreSQL documentation, I'd be happy to offer it. What I have so far is at:\n> http://www.comptechnews.com/~reaster/dbdesign.html\n> \n> Maybe it will be worthy if I keep working on it.\n> \n> Robert\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 29 Sep 2000 23:05:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database Management/Design terms, glossary of" } ]
[ { "msg_contents": "I found that plpgsql did not have the proper mapping of\ndatetime->timestamp and timespan->interval. It had the old 6.5.*\nreverse mappings. Fix in CVS and will appear in 7.0.1.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 00:06:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Fix for plpgsql and timestamp/interval" } ]
[ { "msg_contents": "Can someone comment on this patch? It appears it should be applied.\n\n\n> As postgresql 7.0 grew more imminent, I decided to try compiling the\n> latest release candidate 7.0RC4. The following diff against the 7.0RC4\n> tree accomplishes the following things:\n> \n> \t1) pl/tcl no longer builds correctly since it doesn't use the\n> \t Tcl header directory passed in via --with-includes=\"tcldir\".\n> \t The configure script saves this directory in \"CPPFLAGS\", so\n> \t I patched Makefile.global.in to substitute for CPPFLAGS, and\n> \t pl/tcl/Makefile to include CPPFLAGS in its header search path.\n> \n> \t2) bin/pgtclsh and pl/tcl use pgsql headers but don't properly\n> \t use -I$(LIBPQDIR) to include the correct header directory.\n> \t This has been fixed.\n> \n> \t3) I couldn't find anything in the tree which still needs\n> \t ncurses/curses. I removed the check for those libraries from\n> \t configure.in. If I'm wrong please let me know.\n> \n> \t4) Properly build the shared object in the odbc directory on\n> \t NetBSD. Well, at least it works on NetBSD/ELF.\n> \n> I hope all of this makes it into 7.0.\n> \n> \tThanks!\n> \n> --\tJohnny C. Lam <[email protected]>\n> \tDepartment of Statistics, Carnegie Mellon University\n> \thttp://www.stat.cmu.edu/~lamj/\n> \n> --- Makefile.global.in.orig\tSun Apr 16 23:44:56 2000\n> +++ Makefile.global.in\tThu May 4 19:31:24 2000\n> @@ -208,8 +208,9 @@\n> YACC= @YACC@\n> LEX= @LEX@\n> AROPT= @AROPT@\n> -CFLAGS= -I$(SRCDIR)/include -I$(SRCDIR)/backend @CPPFLAGS@ @CFLAGS@\n> +CFLAGS= -I$(SRCDIR)/include -I$(SRCDIR)/backend $(CPPFLAGS) @CFLAGS@\n> CFLAGS_SL= @SHARED_LIB@\n> +CPPFLAGS= @CPPFLAGS@\n> LIBS= @LIBS@\n> LDFLAGS= @LDFLAGS@ $(LIBS)\n> LDREL= -r\n> --- bin/pgtclsh/Makefile.orig\tTue Mar 7 20:58:21 2000\n> +++ bin/pgtclsh/Makefile\tThu May 4 19:31:24 2000\n> @@ -22,7 +22,7 @@\n> include Makefile.tkdefs\n> endif\n> \n> -CFLAGS+= $(X_CFLAGS) -I$(LIBPGTCLDIR)\n> +CFLAGS+= $(X_CFLAGS) -I$(LIBPQDIR) -I$(LIBPGTCLDIR)\n> \n> ifdef KRBVERS\n> LDFLAGS+= $(KRBLIBS)\n> --- configure.in.orig\tWed May 3 15:10:55 2000\n> +++ configure.in\tThu May 4 19:31:24 2000\n> @@ -655,10 +655,6 @@\n> AC_SUBST(YFLAGS)\n> \n> AC_CHECK_LIB(sfio, main)\n> -for curses in ncurses curses ; do\n> -\tAC_CHECK_LIB(${curses}, main,\n> -\t\t[LIBS=\"-l${curses} $LIBS\"; break])\n> -done\n> AC_CHECK_LIB(termcap, main)\n> AC_CHECK_LIB(readline, main)\n> AC_CHECK_LIB(readline, using_history, AC_DEFINE(HAVE_HISTORY_IN_READLINE),\n> --- interfaces/odbc/psqlodbc.c.orig\tMon Dec 28 20:49:57 1998\n> +++ interfaces/odbc/psqlodbc.c\tThu May 4 19:31:24 2000\n> @@ -33,8 +33,14 @@\n> \n> GLOBAL_VALUES globals;\n> \n> -BOOL _init(void);\n> -BOOL _fini(void);\n> +#ifdef linux\n> +# define STATIC\n> +#else\n> +# define STATIC\tstatic\n> +#endif\n> +\n> +STATIC BOOL _init(void);\n> +STATIC BOOL _fini(void);\n> RETCODE SQL_API SQLDummyOrdinal(void);\n> \n> #ifdef WIN32\n> @@ -98,7 +104,7 @@\n> #endif\n> \n> /* These two functions do shared library initialziation on UNIX, well at least\n> - * on Linux. I don't know about other systems.\n> + * on Linux and some of the BSDs. 
I don't know about other systems.\n> */\n> BOOL\n> _init(void)\n> --- pl/tcl/Makefile.orig\tSat Apr 29 13:45:42 2000\n> +++ pl/tcl/Makefile\tThu May 4 19:31:24 2000\n> @@ -70,7 +70,7 @@\n> \n> CFLAGS+= $(TCL_SHLIB_CFLAGS) $(TCL_DEFS)\n> \n> -CFLAGS+= -I$(SRCDIR)/include -I$(SRCDIR)/backend\n> +CFLAGS+= -I$(SRCDIR)/include -I$(SRCDIR)/backend -I$(LIBPQDIR) $(CPPFLAGS)\n> \n> #\n> # Uncomment the following to enable the unknown command lookup\n> --- pl/tcl/Makefile.orig\tSat Apr 29 13:45:42 2000\n> +++ pl/tcl/Makefile\tThu May 4 19:21:57 2000\n> @@ -70,7 +70,7 @@\n> \n> CFLAGS+= $(TCL_SHLIB_CFLAGS) $(TCL_DEFS)\n> \n> -CFLAGS+= -I$(SRCDIR)/include -I$(SRCDIR)/backend\n> +CFLAGS+= -I$(SRCDIR)/include -I$(SRCDIR)/backend -I$(LIBPQDIR) $(CPPFLAGS)\n> \n> #\n> # Uncomment the following to enable the unknown command lookup\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 00:26:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patches to fix compilations on NetBSD" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can someone comment on this patch? It appears it should be applied.\n\nAt least some of it is likely to break changes that I made to ensure\nthat pltcl and plperl would build correctly when Tcl and Perl have\nbeen compiled with a different compiler than was selected for the\nPostgres build. It is *not* appropriate to import Postgres compiler\nswitches into these subsystems that may be getting built with a\ndifferent compiler. (-I is pretty universal, so it's safe, but\nthe outer CPPFLAGS might contain stuff that's not safe at all.)\n\n>> 3) I couldn't find anything in the tree which still needs\n>> ncurses/curses. I removed the check for those libraries from\n>> configure.in. If I'm wrong please let me know.\n\nHmm, might be OK or not. There used to be curses-dependent code\nin libpq's \"fe-print\" module, no? Is that gone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 May 2000 02:04:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Patches to fix compilations on NetBSD " } ]
[ { "msg_contents": "Hi,\n\n The FETCH help seems to be odd in PostgreSQL-7.0.\n\nprompt> psql\npostgres=# \\h fetch\nCommand: FETCH\nDescription: Gets rows using a cursor\nSyntax:\nFETCH [ selector ] [ count ] { IN | FROM } cursor\nFETCH [ RELATIVE ] [ { [ # | ALL | NEXT | PRIOR ] } ] FROM ] cursor\n ~~~~~~~\n\n Probably the next syntax would be correct, right ?\n-----\nSyntax:\nFETCH [ FORWARD | BACKWARD | RELATIVE ]\n [ { [ # | ALL | NEXT | PRIOR ] } ] { IN | FROM } cursor\nFETCH cursor\n-----\n\n--\nRegards,\nSAKAIDA Masaaki -- Osaka, Japan\n\n\n", "msg_date": "Thu, 11 May 2000 15:53:58 +0900", "msg_from": "SAKAIDA Masaaki <[email protected]>", "msg_from_op": true, "msg_subject": "FETCH help" }, { "msg_contents": "> Hi,\n> \n> The FETCH help seems to be odd in PostgreSQL-7.0.\n> \n> prompt> psql\n> postgres=# \\h fetch\n> Command: FETCH\n> Description: Gets rows using a cursor\n> Syntax:\n> FETCH [ selector ] [ count ] { IN | FROM } cursor\n> FETCH [ RELATIVE ] [ { [ # | ALL | NEXT | PRIOR ] } ] FROM ] cursor\n\nYes, it was a mess. New display looks like below. SGML fixed too. It\nnow displays in brief and long formats.\n\n---------------------------------------------------------------------------\n\ntest=> \\h fetch\nCommand: FETCH\nDescription: Gets rows using a cursor\nSyntax:\nFETCH [ direction ] [ count ] { IN | FROM } cursor\nFETCH [ FORWARD | BACKWARD | RELATIVE ] [ # | ALL | NEXT | PRIOR ] { IN | FROM }\n cursor\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 13:50:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FETCH help" } ]
[ { "msg_contents": "\nWhat causes this in the output of the postmaster?\n\npq_recvbuf: recv() failed, errno 19\n\nAccording to errno.h it means the operation's not supported by the device.\nWhat kind of unsupported operation is it trying to perform?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 11 May 2000 06:44:25 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "message in logs" } ]
[ { "msg_contents": "Hi,\n\n When the next COPY command is specified, psql seems to stop \nproceeding. Nothing can be operated.\n\nprompt> psql\npostgres=# \\h copy\nCommand: COPY\nDescription: Copies data between files and tables\nSyntax:\n..(snip)..\nCOPY [ BINARY ] table [ WITH OIDS ]\n TO { 'filename' | stdout }\n [ [USING] DELIMITERS 'delimiter' ]\n [ WITH NULL AS 'null string' ]\n\npostgres=# copy test to stdout;\n1 sakaida kobe\n2 haru tokyo\n3 nobu osaka\n\npostgres=# copy binary test to '/tmp/test.dat';\nCOPY\npostgres=# copy binary test to stdout; <====== error???\n....Nothing can be operated.....\n\n\n Of course, it isn't right to specify such a COPY command. \nHowever, an appropriate treatment seems to be necessary.\n\n--\nRegards,\nSAKAIDA Masaaki -- Osaka, Japan\n\n\n", "msg_date": "Thu, 11 May 2000 20:05:53 +0900", "msg_from": "SAKAIDA Masaaki <[email protected]>", "msg_from_op": true, "msg_subject": "COPY BINARY to STDOUT" } ]
[ { "msg_contents": "On Thu, 11 May 2000, Peter Eisentraut wrote:\n\n> On Wed, 10 May 2000, Bruce Momjian wrote:\n> \n> > Oh. Can you look at the setproctitle code on your platform and see\n> > how it implements it?\n> \n> In the end I believe we'll make the world a better place if we're using\n> setproctitle when available. Sendmail does. If someone really complains\n> about speed loss we can always make it an option not to use it, but I\n> won't buy into that until I see some profiling evidence.\n\nIts a rare thing, but definitely agree with Peter here :)\n\n\n", "msg_date": "Thu, 11 May 2000 08:48:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: setproctitle() no longer used?" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> On Wed, 10 May 2000, Bruce Momjian wrote:\n> \n> > Oh. Can you look at the setproctitle code on your platform and see\n> > how it implements it?\n> \n> In the end I believe we'll make the world a better place if we're using\n> setproctitle when available. Sendmail does. If someone really complains\n> about speed loss we can always make it an option not to use it, but I\n> won't buy into that until I see some profiling evidence.\n\nGood point. I just did it because it worked on BSD, and I don't have\nsetproctitle. If sendmail does it by default if setproctitle exists,\nmaybe that is the way to go.\n\nBSDI 4.01 doesn't have setproctitle, but I don't need it because I can\njust assign anything in there.\n\nI saw the FreeBSD code, and it seems they have added a sysctl() call in\nthe library setproctitle code to handle it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 11:28:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setproctitle() no longer used?" }, { "msg_contents": "> On Thu, 11 May 2000, Peter Eisentraut wrote:\n> \n> > On Wed, 10 May 2000, Bruce Momjian wrote:\n> > \n> > > Oh. Can you look at the setproctitle code on your platform and see\n> > > how it implements it?\n> > \n> > In the end I believe we'll make the world a better place if we're using\n> > setproctitle when available. Sendmail does. If someone really complains\n> > about speed loss we can always make it an option not to use it, but I\n> > won't buy into that until I see some profiling evidence.\n> \n> Its a rare thing, but definitely agree with Peter here :)\n\nI think I do too. Let me add this to the TODO list:\n\n\t* use setproctitle() if it exists for 'ps' display of status\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 11:32:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: setproctitle() no longer used?" } ]
[ { "msg_contents": "\nOkay, not sure if this is a bug in v6.5.3 or v7.0, but the same query\nrunning on the *same* data, but v6.5.3 vs 7.0 ...\n\nSELECT pl.code,p.description,p.price\n FROM po pu,products p, po_list pl\n WHERE pl.po_num = 118\n AND pl.code = p.code\nORDER BY p.description;\n\n\nproduces 2 records (expected) under v6.5.3) but under v7 produces 224\n... what it appears to do is repeat each of those 2 records 112 times,\nwhich is the size of the 'po' table ...\n\nSince we aren't actually *using* the 'po' table in that query, I removed\nit in the v7.0 system and it comes back with the two records I'm expecting\n...\n\nSo, my first guess is that v6.5.3 was less strict as far as tables listed\nin the FROM directive then v7.0, so it just ignored it if it wasn't\nactually used ... but want to make sure it *isn't* a bug we've introduced\nwith v7.0 ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 11 May 2000 10:18:15 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "query results different in v7.0 vs v6.5.3 ..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Okay, not sure if this is a bug in v6.5.3 or v7.0, but the same query\n> running on the *same* data, but v6.5.3 vs 7.0 ...\n\n> SELECT pl.code,p.description,p.price\n> FROM po pu,products p, po_list pl\n> WHERE pl.po_num = 118\n> AND pl.code = p.code\n> ORDER BY p.description;\n\n> produces 2 records (expected) under v6.5.3) but under v7 produces 224\n> ... what it appears to do is repeat each of those 2 records 112 times,\n> which is the size of the 'po' table ...\n\n> Since we aren't actually *using* the 'po' table in that query, I removed\n> it in the v7.0 system and it comes back with the two records I'm expecting\n\n> So, my first guess is that v6.5.3 was less strict as far as tables listed\n> in the FROM directive then v7.0, so it just ignored it if it wasn't\n> actually used ... but want to make sure it *isn't* a bug we've introduced\n> with v7.0 ...\n\nNo, this is a bug we *removed* in 7.0. Since the query is joining 'po'\nwith no join constraint, you ought to get a cross-product result.\nEarlier versions dropped 'po' from the join because it wasn't explicitly\nreferred to elsewhere in the query, but I can't see any way that that's\ncorrect behavior under SQL92.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 May 2000 13:11:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query results different in v7.0 vs v6.5.3 ... " } ]
[ { "msg_contents": "\n> Okay, not sure if this is a bug in v6.5.3 or v7.0, but the same query\n> running on the *same* data, but v6.5.3 vs 7.0 ...\n> \n> SELECT pl.code,p.description,p.price\n> FROM po pu,products p, po_list pl\n> WHERE pl.po_num = 118\n> AND pl.code = p.code\n> ORDER BY p.description;\n> \n> \n> produces 2 records (expected) under v6.5.3) but under v7 produces 224\n> ... what it appears to do is repeat each of those 2 records 112 times,\n> which is the size of the 'po' table ...\n> \n> Since we aren't actually *using* the 'po' table in that \n> query, I removed\n> it in the v7.0 system and it comes back with the two records \n> I'm expecting\n> ...\n> \n> So, my first guess is that v6.5.3 was less strict as far as \n> tables listed\n> in the FROM directive then v7.0, so it just ignored it if it wasn't\n> actually used ... but want to make sure it *isn't* a bug \n> we've introduced\n> with v7.0 ...\n\nNo, was a bug in 6.5\n\nAndreas\n", "msg_date": "Thu, 11 May 2000 16:24:34 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: query results different in v7.0 vs v6.5.3 ..." } ]
[ { "msg_contents": "I checked the changelog for 7.0 and it doesn't look like this is fixed\nyet.\n\nIn 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -\n1,000,000) then hit vacuum, the vacuum will run literally forever.\n\nIf you drop the indexes on the table, vacuuming takes only minutes, but\nthat's a pain in the neck.\n\nThis problem kept my site down for some 12 HOURS last nite:\n\n24244 ? S 0:00 psql db_gotocity\n24245 ? R 951:34 /usr/local/pgsql/bin/postgres localhost tim\ndb_gotocity \n\n...before I finally killed the vacuum process, manually removed the\npg_vlock, dropped the indexes, then vacuumed again, and re-indexed.\n\nWill this be fixed?\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Thu, 11 May 2000 14:57:09 +0000", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "Eternal vacuuming...." }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -\n> > 1,000,000) then hit vacuum, the vacuum will run literally forever.\n> > ...before I finally killed the vacuum process, manually removed the\n> > pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.\n> > Will this be fixed?\n> \n> Patches? ;)\n\nHehehe - I say the same thing when someone complains about SourceForge.\n\nNow you know I'm a huge postgres hugger - but PHP is my strength and you\nwould not like any C patches I'd submit anyway.\n\n> Just thinking here: could we add an option to vacuum so that it would\n> drop and recreate indices \"automatically\"? We already have the ability\n> to chain multiple internal commands together, so that would just\n> require snarfing the names and properties of indices in the parser\n> backend and then doing the drops and creates on the fly.\n\nThis seems like a hack to me personally. Can someone figure out why the\nvacuum runs forever and fix it? Probably a logic flaw somewhere?\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Thu, 11 May 2000 15:33:31 +0000", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Eternal vacuuming...." }, { "msg_contents": "> In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -\n> 1,000,000) then hit vacuum, the vacuum will run literally forever.\n> ...before I finally killed the vacuum process, manually removed the\n> pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.\n> Will this be fixed?\n\nPatches? ;)\n\nJust thinking here: could we add an option to vacuum so that it would\ndrop and recreate indices \"automatically\"? 
We already have the ability\nto chain multiple internal commands together, so that would just\nrequire snarfing the names and properties of indices in the parser\nbackend and then doing the drops and creates on the fly.\n\nA real problem with this is that those commands are currently not\nrollback-able, so if something quits in the middle (or someone kills\nthe vacuum process; I've heard of this happening ;) then you are left\nwithout indices in sort of a hidden way.\n\nNot sure what the prospects are of making these DDL statements\ntransactionally secure though I know we've had some discussions of\nthis on -hackers.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 11 May 2000 16:17:42 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Eternal vacuuming...." }, { "msg_contents": "* Thomas Lockhart <[email protected]> [000511 09:55] wrote:\n> > In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -\n> > 1,000,000) then hit vacuum, the vacuum will run literally forever.\n> > ...before I finally killed the vacuum process, manually removed the\n> > pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.\n> > Will this be fixed?\n> \n> Patches? ;)\n> \n> Just thinking here: could we add an option to vacuum so that it would\n> drop and recreate indices \"automatically\"?\n\nI'm hoping automatically means some algorithm: When heap + N < index,\ni.e. when it's really needed.\n\n> We already have the ability\n> to chain multiple internal commands together, so that would just\n> require snarfing the names and properties of indices in the parser\n> backend and then doing the drops and creates on the fly.\n> \n> A real problem with this is that those commands are currently not\n> rollback-able, so if something quits in the middle (or someone kills\n> the vacuum process; I've heard of this happening ;) then you are left\n> without indices in sort of a hidden way.\n> \n> Not sure what the prospects are of making these DDL statements\n> transactionally secure though I know we've had some discussions of\n> this on -hackers.\n\nOne could do it in the opposite direction, rename the old index,\ncreate a new index, drop the old. If the worst happens you then\nhave two indexes, perhaps the database could warn about this somehow.\n\nIn fact, one could have a system table that is things to be deleted\nat startup. Put the name of the old index into it and at startup\nthe database could nuke the old index. It's pretty hackish, but\nwould work pretty ok.\n\nIt does seem possible to have two indexes on a single column.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Thu, 11 May 2000 10:11:05 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Eternal vacuuming...." }, { "msg_contents": "On Thu, 11 May 2000, Tim Perdue wrote:\n\n> This seems like a hack to me personally. Can someone figure out why the\n> vacuum runs forever and fix it? Probably a logic flaw somewhere?\n\nI run on a >9million tuple database, and growing, in <10 minutes or so\n... it's the search engine for the archives, and I'm finding that if I do a\n'vacuum verbose', I'm getting a lot of deletes (updated records) ...\n\nnot quite the same number that you are reporting, mind you, but ...\n\nwhat does a 'vacuum verbose' show for you? and you aren't doing a 'vacuum\nanalyze', are you?\n\n\n", "msg_date": "Thu, 11 May 2000 13:25:44 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Eternal vacuuming...." }, { "msg_contents": "> > In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -\n> > 1,000,000) then hit vacuum, the vacuum will run literally forever.\n> > ...before I finally killed the vacuum process, manually removed the\n> > pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.\n> > Will this be fixed?\n> \n> Patches? ;)\n> \n> Just thinking here: could we add an option to vacuum so that it would\n> drop and recreate indices \"automatically\"? We already have the ability\n> to chain multiple internal commands together, so that would just\n> require snarfing the names and properties of indices in the parser\n> backend and then doing the drops and creates on the fly.\n\nWe could vacuum the heap table, and conditionally update or recreate the\nindex depending on how many tuples we needed to move during vacuum of the\nheap.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 13:28:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Eternal vacuuming...." }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> what does a 'vacuum verbose' show for you? and you aren't doing a 'vacuum\n> analyze', are you?\n\nI believe I did 'vacuum analyze'. If info from 'vacuum verbose' would be\nuseful to your team, I can try to set up and reproduce this. I would\nhave to create a 3-million row table with an index on it, then delete\n832,000 rows which I did last nite, then try again.\n\nTim\n\n-- \nFounder - PHPBuilder.com / Geocrawler.com\nLead Developer - SourceForge\nVA Linux Systems\n408-542-5723\n", "msg_date": "Thu, 11 May 2000 18:03:01 +0000", "msg_from": "Tim Perdue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Eternal vacuuming...." }, { "msg_contents": "On Thu, 11 May 2000, Tim Perdue wrote:\n\n> \"Marc G. Fournier\" wrote:\n> > what does a 'vacuum verbose' show for you? and you aren't doing a 'vacuum\n> > analyze', are you?\n> \n> I believe I did 'vacuum analyze'. If info from 'vacuum verbose' would be\n> useful to your team, I can try to set up and reproduce this. I would\n> have to create a 3-million row table with an index on it, then delete\n> 832,000 rows which I did last nite, then try again.\n\nOkay, vacuum analyze is, from my experiences, atrociously slow ... it\n*feels* faster, at least, if you do a simple vacuum first, then do the\nanalyze, but that might be just perception ...\n\nCan you try just a simple 'vacuum verbose' first, without the analyze, and\nsee if that also takes 12hrs?\n\nAlso, what are you running this on? Memory? CPU?\n\nMarc G. Fournier [email protected]\nSystems Administrator @ hub.org \nscrappy@{postgresql|isc}.org ICQ#7615664\n\n", "msg_date": "Thu, 11 May 2000 15:04:43 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Eternal vacuuming...." } ]
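Until vacuum is smarter about indexes, the workaround described in this thread boils down to a short sequence; all names here are made up, and note Thomas's caveat that these steps are not transaction-safe:

    DROP INDEX big_table_key_idx;       -- shed the index first
    VACUUM VERBOSE big_table;           -- minutes instead of hours
    VACUUM ANALYZE big_table;           -- separate statistics pass, per Marc
    CREATE INDEX big_table_key_idx ON big_table (key);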
[ { "msg_contents": "> > What I'm thinking now to fix the problem you found is that doing data\n> > validataion in the text/var/char input functions, rather than tweaking\n> > the mb functions.\n> \n> Could you explain why? I'd sure like it better if the mb code checked for\n> mb things.\n\nIt's simple: \n\nDo not allow corrupted data imported into database.\n\nTo acomplish this, of course we have to call existing MB functions, though.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 11 May 2000 23:58:10 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multibyte still broken" } ]
[ { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> On Wed, 10 May 2000, Bruce Momjian wrote:\n> \n> > It is a nifty BSD one. If you assign argv[0] in the program to a\n> > string, it shows in ps.\n> > \n> > \targv[0] = \"new ps string\";\n> > \n> > The Linux method is:\n> \n> Maybe I should add that this could more accurately be called the `SysV\n> method' and also works on SysV-derived models. Heubrid (a.k.a. hogwash)\n> systems such as Solaris and HPUX may support both or none, depending on\n> the time of day.\n\nOK.\n\n> \n> > \n> > \tstrcpy(argv[0], \"new ps string\");\n> > \n> > In the second case, you are actually writing into the environment area\n> > use to store args. Not real great, but it works on Linux.\n> \n> You just copy the environment somewhere else before you do that. Or don't\n> use the environment. Not a big deal.\n\nBut do they really do that? The scary part about the Linux code in\npg_status.h is that is just zeros out all the argv bytes and starts\nwriting, and that is environment memory.\n\nNow, I do some tricks in postmaster.c so I know I have at least 5\nelements to argv[], but I never do anything that makes sure I have\nenough environment space to start copying strings in there.\n\n\targv[0] = \"string\"\n\tstrcpy(argv[0], \"string\");\n\nThe first makes argv[0] point into user-space memory, while the second\nwrites into environment memory. (Allowing ps to dynamically read\nargv[0] memory that is pointing to user-space is a kvm() trick.)\n\nLinux people seem to be happy with their version, but frankly, I would\nnot allow it on BSD. I would _at_ _least_ put something in postmaster.c\nso I _knew_ that argv[0] had a reasonable size for me to copy into it.\n(Massimo write the Linux code, I believe.)\n\nGuess it is more a philosophical issue. Linux folks like it because it\nworks, while I don't because it is not bullet-proof.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 11:25:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: setproctitle() no longer used?" } ]
[ { "msg_contents": "On Thu, 11 May 2000, Alfred Perlstein wrote:\n\n> Yesterday's upgrade to 7.0 went almost completely smoothly except\n> somehow some date types weren't being parsed by the plpgsql\n> interpreter.\n> \n> Bruce saved my behind by quickly offering a patch. (Thanks!)\n> \n> This was really great, the only problem is that I know several\n> large features are scheduled for 7.1 that may for brief periods of\n> time have postgresql acting unstable during the development cycle.\n> \n> Unfortunatly I may have to cvsup to fix some issue that happens\n> with 7.0 but I don't want to play/risk the new features, just get\n> the bugfixes.\n> \n> I would really like to see a 7.0 Branch, akin to the 6.5.3-patches\n> branch which would give users the option to only cvsup bugfixes\n> rather than code that incorperates emerging new features which may\n> upset stability.\n\nThis will happen at the same time as v7.0.1 is released ... up until then,\nthere will be no new features added to the source tree ... our sort of\n'quiet time' to see how things work out, watch for bugs, etc ...\n\n> On a somewhat related note, is there a way to have the postgresql\n> cvs commit messages mailed to me? I can't seem to find a list.\n\[email protected]\n\n\n", "msg_date": "Thu, 11 May 2000 13:27:32 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some CVS stuff, A 7.0-stable branch? and a mailing list?" } ]
[ { "msg_contents": "on 5/11/00 1:32 PM, Vince Vielhaber at [email protected] wrote:\n\n> \n> I've begun work on the User's Lounge and Developer's Corner. EVERYONE\n> have a look and let me know if there's things to be added - I already\n> know some links don't work, in come cases it's intentional. The URLs\n> are:\n\nI think it's great to have a community of users for PostgreSQL. Why not use\nOpenACS? We've got a solid system, runs on top of PG7.0.... We'll help set\nit up if you need it!\n\n\n-Ben\n\n", "msg_date": "Thu, 11 May 2000 13:32:04 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": true, "msg_subject": "Re: User's Lounge and Developer's Corner" }, { "msg_contents": "\nI've begun work on the User's Lounge and Developer's Corner. EVERYONE\nhave a look and let me know if there's things to be added - I already \nknow some links don't work, in come cases it's intentional. The URLs\nare:\n\nhttp://www.postgresql.org/users-lounge/index.html\nhttp://www.postgresql.org/devel-corner/index.html\n\nSend suggestions to [email protected]\n\nNOTE: it may not look good on a cell fone, but that may change!\n\nVince. Daytonbound in less than one week! Who else is going?\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 11 May 2000 13:32:34 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "User's Lounge and Developer's Corner" }, { "msg_contents": "On Thu, 11 May 2000, Benjamin Adida wrote:\n\n> on 5/11/00 1:32 PM, Vince Vielhaber at [email protected] wrote:\n> \n> > \n> > I've begun work on the User's Lounge and Developer's Corner. EVERYONE\n> > have a look and let me know if there's things to be added - I already\n> > know some links don't work, in come cases it's intentional. The URLs\n> > are:\n> \n> I think it's great to have a community of users for PostgreSQL. Why\n> not use OpenACS? We've got a solid system, runs on top of PG7.0....\n> We'll help set it up if you need it!\n\nWill OpenACS run on Apache?\n\n\n", "msg_date": "Thu, 11 May 2000 13:42:32 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User's Lounge and Developer's Corner" }, { "msg_contents": "> on 5/11/00 1:32 PM, Vince Vielhaber at [email protected] wrote:\n> \n> > \n> > I've begun work on the User's Lounge and Developer's Corner. EVERYONE\n> > have a look and let me know if there's things to be added - I already\n> > know some links don't work, in come cases it's intentional. The URLs\n> > are:\n> \n> I think it's great to have a community of users for PostgreSQL. Why not use\n> OpenACS? We've got a solid system, runs on top of PG7.0.... We'll help set\n> it up if you need it!\n\nNow there's a great idea!\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 11 May 2000 13:47:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User's Lounge and Developer's Corner" }, { "msg_contents": "On Thu, 11 May 2000, Bruce Momjian wrote:\n\n> > on 5/11/00 1:32 PM, Vince Vielhaber at [email protected] wrote:\n> > \n> > > \n> > > I've begun work on the User's Lounge and Developer's Corner. EVERYONE\n> > > have a look and let me know if there's things to be added - I already\n> > > know some links don't work, in come cases it's intentional. The URLs\n> > > are:\n> > \n> > I think it's great to have a community of users for PostgreSQL. Why not use\n> > OpenACS? We've got a solid system, runs on top of PG7.0.... We'll help set\n> > it up if you need it!\n> \n> Now there's a great idea!\n\nThat it would, if it could work with Apache ... but, I believe Don Baccus\nmentioend that it can't even co-exist on the same machine as Apache :(\n\nBenjamin ... ?\n\n", "msg_date": "Thu, 11 May 2000 13:54:56 -0400 (EDT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: User's Lounge and Developer's Corner" } ]
[ { "msg_contents": "Argh! I can't reproduce this:\n\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\n\n\n\nBasically I was running two instances of psql, in one I issued:\n\n one two\n\nbegin;\nlock data; -- some table\n lock data;^C -- cancel\n select * from data;^C -- cancel\nend;\n \n lock data;^C -- HUNG then aborted\n\nIt's annoying that I can't seem to reproduce this, and I know LOCKs\nare only to be requested during a transaction, but it did happen.\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Thu, 11 May 2000 10:35:18 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Orphaned locks in 7.0?" }, { "msg_contents": "Alfred Perlstein <[email protected]> writes:\n> Argh! I can't reproduce this:\n\nWas a core file left behind? Can you get a backtrace from it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 May 2000 13:54:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Orphaned locks in 7.0? " }, { "msg_contents": "* Tom Lane <[email protected]> [000511 11:26] wrote:\n> Alfred Perlstein <[email protected]> writes:\n> > Argh! I can't reproduce this:\n> \n> Was a core file left behind? 
Can you get a backtrace from it?\n\nI enabled assertion checking and debug, here we go:\n\nCore was generated by `postgres'.\nProgram terminated with signal 6, Abort trap.\nReading symbols from /usr/lib/libcrypt.so.2...done.\nReading symbols from /usr/lib/libm.so.2...done.\nReading symbols from /usr/lib/libreadline.so.4...done.\nReading symbols from /usr/lib/libncurses.so.5...done.\nReading symbols from /usr/lib/libc.so.4...done.\nReading symbols from /usr/libexec/ld-elf.so.1...done.\n#0 0x48281fd8 in kill () from /usr/lib/libc.so.4\n(gdb) bt\n#0 0x48281fd8 in kill () from /usr/lib/libc.so.4\n#1 0x482bb4a2 in abort () from /usr/lib/libc.so.4\n#2 0x8143c53 in ExcAbort () at excabort.c:27\n#3 0x8143bd2 in ExcUnCaught (excP=0x81a5708, detail=0, data=0x0, \n message=0x8189040 \"!((result->nHolding > 0) && (result->holders[lockmode] >= 0))\") at exc.c:170\n#4 0x8143c19 in ExcRaise (excP=0x81a5708, detail=0, data=0x0, \n message=0x8189040 \"!((result->nHolding > 0) && (result->holders[lockmode] >= 0))\") at exc.c:187\n#5 0x8143308 in ExceptionalCondition (\n conditionName=0x8189040 \"!((result->nHolding > 0) && (result->holders[lockmode] >= 0))\", exceptionP=0x81a5708, detail=0x0, fileName=0x8188e0c \"lock.c\", \n lineNumber=617) at assert.c:73\n#6 0x810422e in LockAcquire (lockmethod=1, locktag=0xbfbfe808, lockmode=1)\n at lock.c:617\n#7 0x81036d1 in LockRelation (relation=0x8471ba0, lockmode=1) at lmgr.c:148\n#8 0x8071957 in heap_open (relationId=1249, lockmode=1) at heapam.c:551\n#9 0x813e329 in SearchSysCache (cache=0x847c018, v1=8490746, v2=136106584, \n v3=0, v4=0) at catcache.c:1009\n#10 0x8142210 in SearchSysCacheTuple (cacheId=4, key1=8490746, key2=136106584, \n key3=0, key4=0) at syscache.c:532\n#11 0x80cfc5d in make_var (pstate=0x81cd020, relid=8490746, \n refname=0x81cd180 \"data\", attrname=0x81cd258 \"referer\") at parse_node.c:202\n#12 0x80d12be in expandAll (pstate=0x81cd020, relname=0x81cd180 \"data\", \n ref=0x81cd158, this_resno=0x81cd020) at parse_relation.c:408\n#13 0x80d25b8 in ExpandAllTables (pstate=0x81cd020) at parse_target.c:444\n#14 0x80d213b in transformTargetList (pstate=0x81cd020, targetlist=0x81ccea8)\n at parse_target.c:139\n#15 0x80c0ef6 in transformSelectStmt (pstate=0x81cd020, stmt=0x81ccf50)\n at analyze.c:1423\n#16 0x80bf780 in transformStmt (pstate=0x81cd020, parseTree=0x81ccf50)\n at analyze.c:238\n#17 0x80bf3c2 in parse_analyze (pl=0x81cd008, parentParseState=0x0)\n at analyze.c:75\n#18 0x80cafa1 in parser (str=0x8469018 \"select * from data;\", typev=0x0, \n nargs=0) at parser.c:64\n#19 0x8109923 in pg_parse_and_rewrite (\n query_string=0x8469018 \"select * from data;\", typev=0x0, nargs=0, \n aclOverride=0 '\\000') at postgres.c:395\n#20 0x8109bcb in pg_exec_query_dest (\n query_string=0x8469018 \"select * from data;\", dest=Remote, aclOverride=0)\n at postgres.c:580\n#21 0x8109b91 in pg_exec_query (query_string=0x8469018 \"select * from data;\")\n at postgres.c:562\n#22 0x810ab4a in PostgresMain (argc=7, argv=0xbfbff138, real_argc=8, \n real_argv=0xbfbffb98) at postgres.c:1590\n#23 0x80f00d6 in DoBackend (port=0x8463000) at postmaster.c:2006\n#24 0x80efc7d in BackendStartup (port=0x8463000) at postmaster.c:1775\n#25 0x80eeea1 in ServerLoop () at postmaster.c:1035\n#26 0x80ee88a in PostmasterMain (argc=8, argv=0xbfbffb98) at postmaster.c:723\n#27 0x80bf327 in main (argc=8, argv=0xbfbffb98) at main.c:93\n#28 0x80633d5 in _start ()\n\n(gdb) up\n#6 0x810422e in LockAcquire (lockmethod=1, locktag=0xbfbfe808, lockmode=1)\n at lock.c:617\n617 
Assert((result->nHolding > 0) && (result->holders[lockmode] >= 0));\n(gdb) list\n612 XID_PRINT(\"LockAcquire: new\", result);\n613 }\n614 else\n615 {\n616 XID_PRINT(\"LockAcquire: found\", result);\n617 Assert((result->nHolding > 0) && (result->holders[lockmode] >= 0));\n618 Assert(result->nHolding <= lock->nActive);\n619 }\n620 \n621 /* ----------------\n(gdb) \n\nSeems to be what brought things down.\n\nIf you need anything else, let me know.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Thu, 11 May 2000 11:47:00 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Orphaned locks in 7.0?" }, { "msg_contents": "* Hiroshi Inoue <[email protected]> [000515 02:07] wrote:\n> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]On\n> > Behalf Of Alfred Perlstein\n> > \n> > Basically I was running two instances of psql, in one I issued:\n> > \n> > one two\n> > \n> > begin;\n> > lock data; -- some table\n> > lock data;^C -- cancel\n> > select * from data;^C -- cancel\n> > end;\n> > \n> > lock data;^C -- HUNG then aborted\n> > \n> > It's annoying that I can't seem to reproduce this, and I know LOCKs\n> > are only to be requested during a transaction, but it did happen.\n> >\n> \n> Could the following example explain your HUNG problem ?\n> \n> Session-1\n> \t# begin;\n> \tBEGIN\n> \t=# lock t;\n> \tLOCK TABLE\n> \n> Session-2\n> \t=# begin;\n> \tBEGIN\n> \t=# lock t; \n> \t[blocked] ^C\n> \tCancel request sent\n> \tERROR: Query cancel requested while waiting lock\n> \treindex=# select * from t;\n> \t[blocked]\n> \n> Session-1\n> \t=# commit;\n> \tCOMMIT\n> \n> Session-2\n> \tERROR: LockRelation: LockAcquire failed\n> \t=# abort;\n> \tROLLBACK\n> \t=# lock t;\n> \t[blocked]\n\nThat looks pretty much like the sequence of events that lead up to\nthe problem, the problem is that I was just manually testing out\nthe way locks work and didn't write down the exact steps I took.\n\nThis is probably exactly the right steps though.\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Mon, 15 May 2000 09:23:58 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Orphaned locks in 7.0?" } ]
[ { "msg_contents": "We're finding that Postgresql is definetly cool, however our current\nproject requires improvements and additional functionality in the\ncore Postgresql engine, it also requires that I learn more about\nthe Postgresql core.\n\nWe need things like optimizing EXCEPT and a fastpath for\nINSERT-if-not-present-else-UPDATE.\n\nSince I also need at least a basic understanding of how the feature\nwas implemented I think we could also get started on something like\na Postgresql-hacker-howto.\n\nAll work on the Postgresql engine would be immediately contributed\nback to the project.\n\nI already have authorization to fund the work, if anyone with the\ntime and ability to work on such features could privately send me\na quote of how much time they can afford and the amount of compensation\nthey would need I can get things started over here.\n\nOf course we'd like your undivided attention to our project, however\nwe've been able to work out successful part time telecommuting\ncontracts in the past which may be preferable.\n\nEven if a current project is taking up all your time, a quote of\nwhen you would have some time available for such work could get\nthe ball rolling over here for us to sponsor some Postgresql\ndevelopment.\n\nI also realize the new Landmark.com support stuff is coming soon,\ndoes anyone have a contact there that may be able to hook us up\nwith someone who can work on the Postgresql core as described above?\n\nthanks,\n--\n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Thu, 11 May 2000 12:22:13 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": true, "msg_subject": "Pgsql core contract?" } ]
[ { "msg_contents": "> > OK, this is making me rethink my suggestion in the book of using type()\n> > to do typecasts. Seems I should recommend CAST (val AS type), as wordy\n> > as it is, or maybe val::type?\n> \n> CAST(val AS type) is defined in SQL92. istm that the others are\n> available at the whim of our current implementation, since when push\n> comes to shove we might have to choose between having one of our\n> non-standard mechanisms or having some other new features.\n\nOK, I am going to use CAST everywhere, except in one place where I have\nnested casts, which is just too hard to read, so I will use :: and\nmention the CAST section.\n\n> \n> An example is SQL3 enumerated types, which use the double-colon\n> notation, but with value and type reversed from our syntax :(\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 09:52:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cast of numeric()" }, { "msg_contents": "> OK, this is making me rethink my suggestion in the book of using type()\n> to do typecasts. Seems I should recommend CAST (val AS type), as wordy\n> as it is, or maybe val::type?\n\nCAST(val AS type) is defined in SQL92. istm that the others are\navailable at the whim of our current implementation, since when push\ncomes to shove we might have to choose between having one of our\nnon-standard mechanisms or having some other new features.\n\nAn example is SQL3 enumerated types, which use the double-colon\nnotation, but with value and type reversed from our syntax :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 15 May 2000 13:56:00 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cast of numeric()" }, { "msg_contents": "Bruce Momjian writes:\n\n> OK, this is making me rethink my suggestion in the book of using type()\n> to do typecasts. Seems I should recommend CAST (val AS type), as wordy\n> as it is, or maybe val::type?\n\nOkay, while we're discussing the implicit type conversions, let's also\nlook at the explicit ones. There are currently four different ways to\nmake a cast:\n\n1. 'foo'::date\n2. CAST('foo' AS DATE)\n3. date('foo')\n4. DATE 'foo'\n\nIt has been observed before that 1. is in conflict with SQL3 so it should\nprobably be considered obsolescent.\n\nNr. 2 may be wordy but hey that's SQL. Arguably it's also the clearest.\n(Remember that most SQL queries are issued by programs that need to be\nmaintained, not lazy humans.)\n\nThe third is something that only C++ folks could ever have come up with.\n:) Seriously, as has been observed, it will not work for types that\nrequire type modifiers, or conversely you cannot provide type modifiers\nfor types that could use one. Furthermore, it does not take into account\ntype name aliases (integer vs int4, etc.). Also, couldn't a cast from type\nA to B be done via text so that in fact no function B(A) would have to\nexist for the cast to work?\n\nThe last notation is tricky to merge into the PostgreSQL type system\nbecause in standard SQL this isn't a cast at all. While 2. first makes\n'foo' character and then converts it to date, 4. 
will make it a date right\naway. This is perhaps like 10000L vs (long)10000 in C and might actually\nmake a difference in theory. In any case, that notation does not work in\ngeneral anyway (for a start: type modifiers, numbers, expressions in place\nof 'foo').\n\nSo what I would humbly propose unto you as the recommended syntax in\npractice and documentation is this:\n\n* To specify the type of a text-based _literal_ (such as lseg, date, inet)\nuse\n\n\tTYPE 'value'\n\n* To evaluate an expression and then convert it to a different data type\nuse\n\n\tCAST(expr AS type)\n\nEverything else might only impede the progress to world domination ...\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 15 May 2000 20:58:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cast of numeric()" } ]
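[Editor's note: a short illustration of the two recommended forms, using assumed example values:

    SELECT DATE '2000-05-15';             -- typed literal: TYPE 'value'
    SELECT CAST('2000-05-15' AS DATE);    -- converting an expression
    SELECT CAST(4.4 AS NUMERIC(10,2));    -- CAST accommodates type modifiers

The legacy spellings '2000-05-15'::date and date('2000-05-15') still parse, but the two forms above are the ones being recommended for practice and documentation.]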
[ { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> I think the topmost numeric-type needs to be numeric, since it is the \n> only type with arbitrary scale and precision.\n> Thus I think we would need:\n> int2,int4,int8,float4,float8,numeric\n\nNo, this is wrong because it contradicts SQL92: float + numeric must\nyield float, not numeric.\n\n> But the above is still not correct, in the sence that e.g. int8 cannot be\n> converted to float4\n> without loss. In that sense I don't think one upward promotion info is\n> sufficient.\n\nAn important component of the second proposal is that the actual data\nconversion is done in one step if possible. We will *consider* using\nfloat4 before we consider float8, but if we end up using float8 then\nwe try to do a direct whatever-to-float8 conversion. So as long as the\nright set of conversion operators are available, there's no unnecessary\nprecision loss.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 10:11:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: type conversion discussion " }, { "msg_contents": "I wrote:\n>> But the above is still not correct, in the sence that e.g. int8\n>> cannot be converted to float4 without loss. In that sense I don't\n>> think one upward promotion info is sufficient.\n\n> An important component of the second proposal is that the actual data\n> conversion is done in one step if possible. We will *consider* using\n> float4 before we consider float8, but if we end up using float8 then\n> we try to do a direct whatever-to-float8 conversion. So as long as the\n> right set of conversion operators are available, there's no unnecessary\n> precision loss.\n\nAfter further thought I see that there is still a risk here, which\ndepends on the presence or absence of specific functions. Suppose that\nwe offer cos(float4) and cos(float8), but not cos(numeric). With the\nproposal as given, the system would execute cos(numericVar) as\ncos(float4(numericVar)) which is probably not the most desirable\nchoice --- but that would be the \"least promoted\" alternative.\n\nConsidering this example, I think that the proposed numeric hierarchy\nneeds to be altered. Instead of\n\n\tint2 -> int4 -> int8 -> numeric -> float4 -> float8\n\nperhaps we want\n\n\tint2 -> int4 -> int8 -> numeric -> float8\n\tfloat4 -> float8\n\nThat is, float4 promotes to float8 but nothing else promotes to float4.\nThis still satisfies the SQL92 rule that mixed exact/inexact\ncomputations yield inexact results --- but those results will always be\ndone in float8 now, never in float4. The only way to get a float4\ncomputation is to start from float4 variables or use explicit casts.\n\nThat's still not entirely satisfactory because simple examples like\n\n\tWHERE float4var < 4.4;\n\nwon't be done the way we want: the constant will promote to float8\nand then you'll get float4var::float8 < 4.4::float8 which is not\nable to use a float4 index.\n\nA sneaky way around that is to make the hierarchy\n\n\tint2 -> int4 -> int8 -> numeric -> float8 -> float4\n\nwhich is nonintuitive as hell, but would make mixed exact/float8\ncalculations do the right thing. 
But a mixed float8/float4\ncomputation would be done in float4 which is not so desirable.\n\nMy inclination at this point is that we want the auto promotion\nhierarchy to look like\n\n\tint2 -> int4 -> int8 -> numeric -> float8\n\tfloat4 -> float8\n\nbut perhaps to use a different method for assigning types to numeric\nliterals, such that a literal can be coerced to float4 if there are\nother float4s present, even though we wouldn't do that for nonliterals.\n(This could maybe be done by initially assigning literals an\nUNKNOWNNUMERIC data type, which then gets resolved to a specific type,\nmuch like we do for string literals.) A tad ugly, but I'm beginning to\ndoubt we can get *all* the behaviors we want without any special cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 12:16:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: type conversion discussion " }, { "msg_contents": "Tom Lane writes:\n\n> perhaps we want\n> \n> \tint2 -> int4 -> int8 -> numeric -> float8\n> \tfloat4 -> float8\n\nIn a parallel email you mentioned that your promotion tree idea will give\nthe system well-understood (single) inheritance semantics, with which I\nagree 100%. But that is only true if the upward conversion always works,\nwhich it won't because not every numeric \"is-a\" float8, and strictly\nspeaking, neither is int8. This is unclean at best, but might even cause\ngenuine failures if some promotion metric decided on a float8 function\nover a numeric function because it would generate the \"least casting\" on\nthe other function attributes.\n\nSo it would have to be more like this\n\n\tint2 -> int4 -> int8 -> numeric\n\tfloat4 -> float8 -> numeric\n\nThis tree is \"correct\" in the above sense but has a number of obvious\nproblems.\n\nfloat[x] + numeric would now yield numeric. The solution is making an\nexplicit float8+numeric function. Okay, so at the end it's actually more\nlike 8 functions, but that's a price I'd be willing to pay. (Perhaps the\ncommutator mechanism could be extended to cover different types as well.)\n\nIncidentally, this would also enable some cases to work that wouldn't now,\ne.g. if N is a numeric outside the range of float8 and F is some float8,\nthen N - F would currently fail, but it need not, depending on how it's\nimplemented.\n\nThe other problem is that integers would never implicitly be promoted to\nfloats. This is sensible behaviour from a numerical analysis point of view\nbut probably not acceptable for many. However, given that there is\nnumeric, any int/float operations would be promoted to numeric/numeric,\nwhich is in any case the right thing to do. 
The only thing is to provide\nnumeric functions.\n\n\nThe alternative is to use a non-tree lattice for type promotion\n\n - float4 -- float8 -\n / / \\\nint2 --- int4 ---- int8 ----- numeric\n\nbut that would introduce a world of problems which we probably best avoid\n(as long as possible).\n\n\n> That's still not entirely satisfactory because simple examples like\n> \n> \tWHERE float4var < 4.4;\n> \n> won't be done the way we want: the constant will promote to float8\n> and then you'll get float4var::float8 < 4.4::float8 which is not\n> able to use a float4 index.\n\nCould this do it?\n\n\tunknownnumeric -> float4 -> float8 -> numeric\n\n(Assuming `unknownnumeric' only represents literals with decimal points.\nOthers should probably be the \"best fit\" integer type.)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 17 May 2000 19:19:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: type conversion discussion " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> So it would have to be more like this\n> \tint2 -> int4 -> int8 -> numeric\n> \tfloat4 -> float8 -> numeric\n> This tree is \"correct\" in the above sense but has a number of obvious\n> problems.\n\nThe most fundamental of which is that it violates SQL92: combining\nexact and approximate numerics is supposed to yield approximate\n(ie, float), not exact.\n\n> float[x] + numeric would now yield numeric. The solution is making an\n> explicit float8+numeric function. Okay, so at the end it's actually more\n> like 8 functions, but that's a price I'd be willing to pay. (Perhaps the\n> commutator mechanism could be extended to cover different types as well.)\n\nMake that 8 functions for each and every one of the numeric operators.\nI don't think that's reasonable... especially since those operators\ncannot cause the overflow problem to go away. (The SQL guys probably\ndid not foresee people implementing NUMERIC with wider range than FLOAT\n;-) ... but the fact that we did so doesn't give us license to ignore\nthat aspect of the spec ...)\n\n> numeric, any int/float operations would be promoted to numeric/numeric,\n> which is in any case the right thing to do.\n\nNo it isn't. See above.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 23:26:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: type conversion discussion " }, { "msg_contents": "Tom Lane writes:\n\n> (The SQL guys probably did not foresee people implementing NUMERIC\n> with wider range than FLOAT ;-) ... but the fact that we did so\n> doesn't give us license to ignore that aspect of the spec ...)\n\nI think that must have been it; why else would they (implicitly) rank\nfloats above numerics? If we submit to that notion, then I agree with the\npromotion tree you suggested.\n\nThe problem remains that upward casting will not be guaranteed to work all\nthe time, which is something that needs to be addressed; in potentially\nunpretty ways, because not every casting decision is necessarily a linear\nladder-climb; it might be affected by other casting decisions going on in\nparallel. 
(The workaround here could be to convert numerics that are wider\nthan floats to `infinity' :-)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 19 May 2000 05:43:28 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: type conversion discussion " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> (The workaround here could be to convert numerics that are wider\n> than floats to `infinity' :-)\n\nHmm, that's actually not a bad idea. Infinity is a pretty portable\nnotion these days, what with nearly everyone toeing the IEEE float\nline ... so we could have numeric->float generate a NaN if possible\nand only resort to an elog() on machines without NaN.\n\nOTOH, no mathematician will accept the notion that 1e1000 is the\nsame as infinity ;-). Mathematical purity would probably favor\nthe elog.\n\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 23:51:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AW: type conversion discussion " } ]
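[Editor's note: the overflow case being debated can be written out directly. A sketch, assuming the default NUMERIC precision limits:

    SELECT CAST(CAST('1e500' AS NUMERIC) AS FLOAT8);

The value 1e500 fits comfortably in NUMERIC but lies outside the IEEE float8 range, so the conversion must either raise an error (the elog choice) or hand back the float8 infinity; that is exactly the trade-off weighed above.]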
[ { "msg_contents": "I think we should...\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Monday, May 15, 2000 1:29 PM\nTo: PostgreSQL-development\nCc: PostgreSQL-interfaces\nSubject: [INTERFACES] lo_unlink\n\n\nIn looking through the documention programmers guide, I don't see any\nmention of lo_unlink(). Isn't that the only way to remove large objects\nfrom the database? Shouldn't we mention that?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Mon, 15 May 2000 15:16:37 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: lo_unlink" } ]
[ { "msg_contents": "> Here is a proposal for fixing these problems.\n\nSounds good. We would be looking up this info in a table, right? So we\ncan integrate this type hierarchy fully into our type extensibility\nsystem.\n\nAnother 7.1 project is to work on alternate languages and character\nsets, to decouple multibyte and locale from the default SQL_TEXT\ncharacter set. This will probably bring up issues similar to the\nnumeric problems, and since these character sets will be added as\nuser-defined types it will be important for the backend to understand\nhow to convert them for comparison operations, for example.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 15 May 2000 14:37:52 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues" }, { "msg_contents": "Thomas Lockhart writes:\n\n> Another 7.1 project is to work on alternate languages and character\n> sets, to decouple multibyte and locale from the default SQL_TEXT\n> character set. This will probably bring up issues similar to the\n> numeric problems, and since these character sets will be added as\n> user-defined types it will be important for the backend to understand\n> how to convert them for comparison operations, for example.\n\nReally? I always thought the character set would be some separate entity\nand perhaps an oid reference would be stored with every character string\nand attribute. That would get you around any type conversion as long as\nthe functions acting on character types take this \"header\" field into\naccount.\n\nIf you want to go the data type way then you'd need to have some sort of\nmost general character set to cast to. That could be Unicode but that\nwould require that every user-defined character set be a subset of\nUnicode, which is perhaps not a good assumption to make. Also, I wonder\nhow collations would fit in there. Collations definitely can't be ordered\nat all, so casting can't be done in a controlled fashion.\n\nJust wondering...\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 17 May 2000 19:18:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues" }, { "msg_contents": "> Thomas Lockhart writes:\n> \n> > Another 7.1 project is to work on alternate languages and character\n> > sets, to decouple multibyte and locale from the default SQL_TEXT\n> > character set. This will probably bring up issues similar to the\n> > numeric problems, and since these character sets will be added as\n> > user-defined types it will be important for the backend to understand\n> > how to convert them for comparison operations, for example.\n> \n> Really? I always thought the character set would be some separate entity\n> and perhaps an oid reference would be stored with every character string\n> and attribute. That would get you around any type conversion as long as\n> the functions acting on character types take this \"header\" field into\n> account.\n\nI think that way too. If what Thomas is suggesting is that to make a\nuser-defined charaset, one need to make everything such as operators,\ncharset, functions to work with index etc. 
(like defining a new data\ntype), that would be too painful.\n\n> If you want to go the data type way then you'd need to have some sort of\n> most general character set to cast to. That could be Unicode but that\n> would require that every user-defined character set be a subset of\n> Unicode, which is perhaps not a good assumption to make.\n\nRight. But the problem is SQL92 actually requires such a charset\ncalled \"SQL_TEXT.\" For me, the only candidate for SQL_TEXT at this\npoint seems to be \"mule internal code.\" Basically it is a variant of\nISO-2022 and has a capability to adapt to most of the charsets defined in\nISO-2022. I think we could expand it so that it could become a\nsuperset even for Unicode. Of course the problem is mule internal code\nis an \"internal code\" and is not widely spread in the world. Even if\nthat's true, we could use it for purely internal purposes (for the parse\ntree etc.).\n\n> Also, I wonder\n> how collations would fit in there. Collations definitely can't be ordered\n> at all, so casting can't be done in a controlled fashion.\n\nHmm... Collations seem to be a different issue. I think there's no\nsuch idea as \"collation casting\" in SQL92.\n--\nTatsuo Ishii\n\n", "msg_date": "Thu, 18 May 2000 09:09:54 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues" }, { "msg_contents": "All good ideas and thoughts. I have been thinking that essentially\nseparate types per character set is the right thing, but we'll have\nplenty of time to talk about it.\n\nOne point is that SQL92 assigns a specific character set and collation\nsequence to every character string and every column definition; if we\nembedded this \"type\" identification into every string then we would be\nreplicating the existing Postgres type system one layer down (at least\nfor argument's sake ;)\n\nThere also need to be well defined conversions between character\nsets/collations, and some or most combinations will be illegal (e.g.\nhow do you collate American English against Japanese?). The Postgres\ntype system can enforce this simply by not providing conversion or\ncomparison functions for the relevant mixture of types.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 18 May 2000 05:41:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues" }, { "msg_contents": "Again, anything to add to the TODO here?\n\n> We've got a collection of problems that are related to the parser's\n> inability to make good type-resolution choices for numeric constants.\n> In some cases you get a hard error; for example \"NumericVar + 4.4\"\n> yields\n> ERROR: Unable to identify an operator '+' for types 'numeric' and 'float8'\n> You will have to retype this query using an explicit cast\n> because \"4.4\" is initially typed as float8 and the system can't figure\n> out whether to use numeric or float8 addition. A more subtle problem\n> is that a query like \"... WHERE Int2Var < 42\" is unable to make use of\n> an index on the int2 column: 42 is resolved as int4, so the operator\n> is int24lt, which works but is not in the opclass of an int2 index.\n> \n> Here is a proposal for fixing these problems. 
I think we could get this\n> done for 7.1 if people like it.\n> \n> The basic problem is that there's not enough smarts in the type resolver\n> about the interrelationships of the numeric datatypes. All it has is\n> a concept of a most-preferred type within the category of numeric types.\n> (We are abusing the most-preferred-type mechanism, BTW, because both\n> FLOAT8 and NUMERIC claim to be the most-preferred type in the numeric\n> category! This is in fact why the resolver can't make a choice for\n> \"numeric+float8\".) We need more intelligence than that.\n> \n> I propose that we set up a strictly-ordered hierarchy of numeric\n> datatypes, running from least preferred to most preferred:\n> \tint2, int4, int8, numeric, float4, float8.\n> Rather than simply considering coercions to the most-preferred type,\n> the type resolver should use the following rules:\n> \n> 1. No value will be down-converted (eg int4 to int2) except by an\n> explicit conversion.\n> \n> 2. If there is not an exact matching operator, numeric values will be\n> up-converted to the highest numeric datatype present among the operator\n> or function's arguments. For example, given \"int2 + int8\" we'd up-\n> convert the int2 to int8 and apply int8 addition.\n> \n> The final piece of the puzzle is that the type initially assigned to\n> an undecorated numeric constant should be NUMERIC if it contains a\n> decimal point or exponent, and otherwise the smallest of int2, int4,\n> int8, NUMERIC that will represent it. This is a considerable change\n> from the current lexer behavior, where you get either int4 or float8.\n> \n> For example, given \"NumericVar + 4.4\", the constant 4.4 will initially\n> be assigned type NUMERIC, we will resolve the operator as numeric plus,\n> and everything's fine. Given \"Float8Var + 4.4\", the constant is still\n> initially numeric, but will be up-converted to float8 so that float8\n> addition can be used. The end result is the same as in traditional\n> Postgres: you get float8 addition. Given \"Int2Var < 42\", the constant\n> is initially typed as int2, since it fits, and we end up selecting\n> int2lt, thereby allowing use of an int2 index. (On the other hand,\n> given \"Int2Var < 100000\", we'd end up using int4lt, which is correct\n> to avoid overflow.)\n> \n> A couple of crucial subtleties here:\n> \n> 1. We are assuming that the parser or optimizer will constant-fold\n> any conversion functions that are introduced. Thus, in the\n> \"Float8Var + 4.4\" case, the 4.4 is represented as a float8 4.4 by the\n> time execution begins, so there's no performance loss.\n> \n> 2. We cannot lose precision by initially representing a constant as\n> numeric and later converting it to float. Nor can we exceed NUMERIC's\n> range (the default 1000-digit limit is more than the range of IEEE\n> float8 data). It would not work as well to start out by representing\n> a constant as float and then converting it to numeric.\n> \n> Presently, the pg_proc and pg_operator tables contain a pretty fair\n> collection of cross-datatype numeric operators, such as int24lt,\n> float48pl, etc. We could perhaps leave these in, but I believe that\n> it is better to remove them. For example, if int42lt is left in place,\n> then it would capture cases like \"Int4Var < 42\", whereas we need that\n> to be translated to int4lt so that an int4 index can be used. 
Removing\n> these operators will eliminate some code bloat and system-catalog bloat\n> to boot.\n> \n> As far as I can tell, this proposal is almost compatible with the rules\n> given in SQL92: in particular, SQL92 specifies that an operator having\n> both \"approximate numeric\" (float) and \"exact numeric\" (int or numeric)\n> inputs should deliver an approximate-numeric result. I propose\n> deviating from SQL92 in a single respect: SQL92 specifies that a\n> constant containing an exponent (eg 1.2E34) is approximate numeric,\n> which implies that the result of an operator using it is approximate\n> even if the other operand is exact. I believe it's better to treat\n> such a constant as exact (ie, type NUMERIC) and only convert it to\n> float if the other operand is float. Without doing that, an assignment\n> like\n> \tUPDATE tab SET NumericVar = 1.234567890123456789012345E34;\n> will not work as desired because the constant will be prematurely\n> coerced to float, causing precision loss.\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 03:41:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Again, anything to add to the TODO here?\n\nIIRC, there was some unhappiness with the proposal you quote, so I'm\nnot sure we've quite agreed what to do... but clearly something must\nbe done.\n\n\t\t\tregards, tom lane\n\n\n>> We've got a collection of problems that are related to the parser's\n>> inability to make good type-resolution choices for numeric constants.\n>> In some cases you get a hard error; for example \"NumericVar + 4.4\"\n>> yields\n>> ERROR: Unable to identify an operator '+' for types 'numeric' and 'float8'\n>> You will have to retype this query using an explicit cast\n>> because \"4.4\" is initially typed as float8 and the system can't figure\n>> out whether to use numeric or float8 addition. A more subtle problem\n>> is that a query like \"... WHERE Int2Var < 42\" is unable to make use of\n>> an index on the int2 column: 42 is resolved as int4, so the operator\n>> is int24lt, which works but is not in the opclass of an int2 index.\n>> \n>> Here is a proposal for fixing these problems. I think we could get this\n>> done for 7.1 if people like it.\n>> \n>> The basic problem is that there's not enough smarts in the type resolver\n>> about the interrelationships of the numeric datatypes. All it has is\n>> a concept of a most-preferred type within the category of numeric types.\n>> (We are abusing the most-preferred-type mechanism, BTW, because both\n>> FLOAT8 and NUMERIC claim to be the most-preferred type in the numeric\n>> category! This is in fact why the resolver can't make a choice for\n>> \"numeric+float8\".) We need more intelligence than that.\n>> \n>> I propose that we set up a strictly-ordered hierarchy of numeric\n>> datatypes, running from least preferred to most preferred:\n>> int2, int4, int8, numeric, float4, float8.\n>> Rather than simply considering coercions to the most-preferred type,\n>> the type resolver should use the following rules:\n>> \n>> 1. No value will be down-converted (eg int4 to int2) except by an\n>> explicit conversion.\n>> \n>> 2. 
If there is not an exact matching operator, numeric values will be\n>> up-converted to the highest numeric datatype present among the operator\n>> or function's arguments. For example, given \"int2 + int8\" we'd up-\n>> convert the int2 to int8 and apply int8 addition.\n>> \n>> The final piece of the puzzle is that the type initially assigned to\n>> an undecorated numeric constant should be NUMERIC if it contains a\n>> decimal point or exponent, and otherwise the smallest of int2, int4,\n>> int8, NUMERIC that will represent it. This is a considerable change\n>> from the current lexer behavior, where you get either int4 or float8.\n>> \n>> For example, given \"NumericVar + 4.4\", the constant 4.4 will initially\n>> be assigned type NUMERIC, we will resolve the operator as numeric plus,\n>> and everything's fine. Given \"Float8Var + 4.4\", the constant is still\n>> initially numeric, but will be up-converted to float8 so that float8\n>> addition can be used. The end result is the same as in traditional\n>> Postgres: you get float8 addition. Given \"Int2Var < 42\", the constant\n>> is initially typed as int2, since it fits, and we end up selecting\n>> int2lt, thereby allowing use of an int2 index. (On the other hand,\n>> given \"Int2Var < 100000\", we'd end up using int4lt, which is correct\n>> to avoid overflow.)\n>> \n>> A couple of crucial subtleties here:\n>> \n>> 1. We are assuming that the parser or optimizer will constant-fold\n>> any conversion functions that are introduced. Thus, in the\n>> \"Float8Var + 4.4\" case, the 4.4 is represented as a float8 4.4 by the\n>> time execution begins, so there's no performance loss.\n>> \n>> 2. We cannot lose precision by initially representing a constant as\n>> numeric and later converting it to float. Nor can we exceed NUMERIC's\n>> range (the default 1000-digit limit is more than the range of IEEE\n>> float8 data). It would not work as well to start out by representing\n>> a constant as float and then converting it to numeric.\n>> \n>> Presently, the pg_proc and pg_operator tables contain a pretty fair\n>> collection of cross-datatype numeric operators, such as int24lt,\n>> float48pl, etc. We could perhaps leave these in, but I believe that\n>> it is better to remove them. For example, if int42lt is left in place,\n>> then it would capture cases like \"Int4Var < 42\", whereas we need that\n>> to be translated to int4lt so that an int4 index can be used. Removing\n>> these operators will eliminate some code bloat and system-catalog bloat\n>> to boot.\n>> \n>> As far as I can tell, this proposal is almost compatible with the rules\n>> given in SQL92: in particular, SQL92 specifies that an operator having\n>> both \"approximate numeric\" (float) and \"exact numeric\" (int or numeric)\n>> inputs should deliver an approximate-numeric result. I propose\n>> deviating from SQL92 in a single respect: SQL92 specifies that a\n>> constant containing an exponent (eg 1.2E34) is approximate numeric,\n>> which implies that the result of an operator using it is approximate\n>> even if the other operand is exact. I believe it's better to treat\n>> such a constant as exact (ie, type NUMERIC) and only convert it to\n>> float if the other operand is float. 
Without doing that, an assignment\n>> like\n>> UPDATE tab SET NumericVar = 1.234567890123456789012345E34;\n>> will not work as desired because the constant will be prematurely\n>> coerced to float, causing precision loss.\n>> \n>> Comments?\n>> \n>> regards, tom lane\n>> \n\n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 03:58:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues " }, { "msg_contents": "On Tue, 13 Jun 2000, Tom Lane wrote:\n\n> IIRC, there was some unhappiness with the proposal you quote, so I'm\n> not sure we've quite agreed what to do... but clearly something must\n> be done.\n\nYou might want to look at SQL99, too. It contains a type system and has\nsomething to say on these issues. At least I would hate to see something\nflat-out incompatible drawn up.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 13 Jun 2000 15:00:44 +0200 (MET DST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues " }, { "msg_contents": "Sorry to be asking again, but any status on this?\n\n> Bruce Momjian <[email protected]> writes:\n> > Again, anything to add to the TODO here?\n> \n> IIRC, there was some unhappiness with the proposal you quote, so I'm\n> not sure we've quite agreed what to do... but clearly something must\n> be done.\n> \n> \t\t\tregards, tom lane\n> \n> \n> >> We've got a collection of problems that are related to the parser's\n> >> inability to make good type-resolution choices for numeric constants.\n> >> In some cases you get a hard error; for example \"NumericVar + 4.4\"\n> >> yields\n> >> ERROR: Unable to identify an operator '+' for types 'numeric' and 'float8'\n> >> You will have to retype this query using an explicit cast\n> >> because \"4.4\" is initially typed as float8 and the system can't figure\n> >> out whether to use numeric or float8 addition. A more subtle problem\n> >> is that a query like \"... WHERE Int2Var < 42\" is unable to make use of\n> >> an index on the int2 column: 42 is resolved as int4, so the operator\n> >> is int24lt, which works but is not in the opclass of an int2 index.\n> >> \n> >> Here is a proposal for fixing these problems. I think we could get this\n> >> done for 7.1 if people like it.\n> >> \n> >> The basic problem is that there's not enough smarts in the type resolver\n> >> about the interrelationships of the numeric datatypes. All it has is\n> >> a concept of a most-preferred type within the category of numeric types.\n> >> (We are abusing the most-preferred-type mechanism, BTW, because both\n> >> FLOAT8 and NUMERIC claim to be the most-preferred type in the numeric\n> >> category! This is in fact why the resolver can't make a choice for\n> >> \"numeric+float8\".) We need more intelligence than that.\n> >> \n> >> I propose that we set up a strictly-ordered hierarchy of numeric\n> >> datatypes, running from least preferred to most preferred:\n> >> int2, int4, int8, numeric, float4, float8.\n> >> Rather than simply considering coercions to the most-preferred type,\n> >> the type resolver should use the following rules:\n> >> \n> >> 1. 
No value will be down-converted (eg int4 to int2) except by an\n> >> explicit conversion.\n> >> \n> >> 2. If there is not an exact matching operator, numeric values will be\n> >> up-converted to the highest numeric datatype present among the operator\n> >> or function's arguments. For example, given \"int2 + int8\" we'd up-\n> >> convert the int2 to int8 and apply int8 addition.\n> >> \n> >> The final piece of the puzzle is that the type initially assigned to\n> >> an undecorated numeric constant should be NUMERIC if it contains a\n> >> decimal point or exponent, and otherwise the smallest of int2, int4,\n> >> int8, NUMERIC that will represent it. This is a considerable change\n> >> from the current lexer behavior, where you get either int4 or float8.\n> >> \n> >> For example, given \"NumericVar + 4.4\", the constant 4.4 will initially\n> >> be assigned type NUMERIC, we will resolve the operator as numeric plus,\n> >> and everything's fine. Given \"Float8Var + 4.4\", the constant is still\n> >> initially numeric, but will be up-converted to float8 so that float8\n> >> addition can be used. The end result is the same as in traditional\n> >> Postgres: you get float8 addition. Given \"Int2Var < 42\", the constant\n> >> is initially typed as int2, since it fits, and we end up selecting\n> >> int2lt, thereby allowing use of an int2 index. (On the other hand,\n> >> given \"Int2Var < 100000\", we'd end up using int4lt, which is correct\n> >> to avoid overflow.)\n> >> \n> >> A couple of crucial subtleties here:\n> >> \n> >> 1. We are assuming that the parser or optimizer will constant-fold\n> >> any conversion functions that are introduced. Thus, in the\n> >> \"Float8Var + 4.4\" case, the 4.4 is represented as a float8 4.4 by the\n> >> time execution begins, so there's no performance loss.\n> >> \n> >> 2. We cannot lose precision by initially representing a constant as\n> >> numeric and later converting it to float. Nor can we exceed NUMERIC's\n> >> range (the default 1000-digit limit is more than the range of IEEE\n> >> float8 data). It would not work as well to start out by representing\n> >> a constant as float and then converting it to numeric.\n> >> \n> >> Presently, the pg_proc and pg_operator tables contain a pretty fair\n> >> collection of cross-datatype numeric operators, such as int24lt,\n> >> float48pl, etc. We could perhaps leave these in, but I believe that\n> >> it is better to remove them. For example, if int42lt is left in place,\n> >> then it would capture cases like \"Int4Var < 42\", whereas we need that\n> >> to be translated to int4lt so that an int4 index can be used. Removing\n> >> these operators will eliminate some code bloat and system-catalog bloat\n> >> to boot.\n> >> \n> >> As far as I can tell, this proposal is almost compatible with the rules\n> >> given in SQL92: in particular, SQL92 specifies that an operator having\n> >> both \"approximate numeric\" (float) and \"exact numeric\" (int or numeric)\n> >> inputs should deliver an approximate-numeric result. I propose\n> >> deviating from SQL92 in a single respect: SQL92 specifies that a\n> >> constant containing an exponent (eg 1.2E34) is approximate numeric,\n> >> which implies that the result of an operator using it is approximate\n> >> even if the other operand is exact. I believe it's better to treat\n> >> such a constant as exact (ie, type NUMERIC) and only convert it to\n> >> float if the other operand is float. 
Without doing that, an assignment\n> >> like\n> >> UPDATE tab SET NumericVar = 1.234567890123456789012345E34;\n> >> will not work as desired because the constant will be prematurely\n> >> coerced to float, causing precision loss.\n> >> \n> >> Comments?\n> >> \n> >> regards, tom lane\n> >> \n> \n> \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 9 Oct 2000 15:57:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Sorry to be asking again, but any status on this?\n\nAt this point I think I can safely say nothing's going to be done for\n7.1. It is still an issue that needs to be addressed though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Oct 2000 16:18:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues " } ]
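[Editor's note: the int2-index problem that motivates the proposal is easy to demonstrate. A sketch, using a hypothetical table:

    CREATE TABLE t (i2 INT2);
    CREATE INDEX t_i2_idx ON t (i2);

    -- the constant resolves as int4, so the comparison becomes int24lt,
    -- which is not in the int2 index opclass:
    EXPLAIN SELECT * FROM t WHERE i2 < 42;

    -- an explicit cast forces int2lt and lets the index be used:
    EXPLAIN SELECT * FROM t WHERE i2 < 42::int2;

Under the proposal, the first query would pick int2lt automatically, because 42 fits in int2.]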
[ { "msg_contents": "> > > We've talked about examples like this before. I'm inclined to think\n> > > that when we are unable to resolve an operator involving unknown-type\n> > > inputs, we should try again assuming that the unknowns are of type\n> > > 'text'. Comments?\n> > Yes please. SQL and the rest of the world assumes the 'xxx' is a character\n> > constant, only PostgreSQL doesn't.\n\nAnd only Postgres is trying to have a properly built extensible type\nsystem with fewer legacy \"SQL80\" holdovers. So don't start throwing\nthings out without having a solid alternative that considers these\ncases.\n\nIn the case of length(): \n\npre-7.0 used length() to give a \"length\" of several data types,\nincluding strings and geometric types. This led to more instances of\nparser confusion when using untyped strings, since there were more\npossible matches of types and the function.\n\nFor 7.0, I changed the implementation to decouple string types and\nother types by natively supporting char_length() (and\ncharacter_length()), the SQL92-defined length function(s) for strings.\nI left length() for the other types.\n\nI believe that this is mentioned in the release notes.\n\nbtw, what were we hoping to accomplish with length(755)? Why isn't \"3\"\na good answer??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 15 May 2000 14:46:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Casting, again" }, { "msg_contents": "> For 7.0, I changed the implementation to decouple string types and\n> other types by natively supporting char_length() (and\n> character_length()), the SQL92-defined length function(s) for strings.\n> I left length() for the other types.\n> \n> I believe that this is mentioned in the release notes.\n> \n> btw, what were we hoping to accomplish with length(755)? Why isn't \"3\"\n> a good answer??\n\nThree is the right answer. I was just wondering why it used to fail,\nbut now it works.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 10:53:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Casting, again" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> btw, what were we hoping to accomplish with length(755)? Why isn't \"3\"\n> a good answer??\n\nIf you believe it should have an answer at all, then 3 is probably\nthe right answer. But it used to be rejected, and I tend to think\nthat that's the right behavior. I don't like the idea of silent\nconversions from numeric-looking things into text. It might be\nmerely amusing in this case but in other cases it could be very\nconfusing if not outright wrong. Why was this change put in?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 11:17:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Casting, again " }, { "msg_contents": "> > btw, what were we hoping to accomplish with length(755)? Why isn't \"3\"\n> > a good answer??\n> If you believe it should have an answer at all, then 3 is probably\n> the right answer. But it used to be rejected, and I tend to think\n> that that's the right behavior. I don't like the idea of silent\n> conversions from numeric-looking things into text. 
It might be\n> merely amusing in this case but in other cases it could be very\n> confusing if not outright wrong. Why was this change put in?\n\nActually, I'm not sure a change *was* put in! I haven't yet looked,\nbut it may be that this is a result of my adding a \"number to text\"\nconversion function. The type conversion code took that and ran!\n\nRemember that for v7.0, \"length\" for character strings should be\n\"char_length\". Maybe some of the trouble here is from leftover\nattempts to get strings and other \"length\" types to play together in\nan underspecified query.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 15 May 2000 15:40:07 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Casting, again" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Actually, I'm not sure a change *was* put in! I haven't yet looked,\n> but it may be that this is a result of my adding a \"number to text\"\n> conversion function. The type conversion code took that and ran!\n\nAh, I think you are right --- I was through the type-resolution code\nnot too long ago, and I don't recall seeing any special cases for\nnumeric->text either. It must be as you say, that the addition of\nthis apparently harmless conversion function caused an unexpected\nchange in the overall behavior.\n\nAfter reflecting on this example for a little bit, I like my proposal\nfor explicit \"promotability\" links between types even better. The\nexample illustrates that it's dangerous to have promotability decisions\nmade on the basis of whether there happens to be a conversion function\navailable or not. Offering a text(int4) function that users can call\nwhen they want to force a conversion is fine, but that should not\nautomatically mean that the system can *silently* call it to cause an\nimplicit conversion. Another example is that if we were to offer an\nint(bool) conversion function, as was suggested for the Nth time in\na nearby thread, the current system would not allow us to control\nwhether that conversion can happen implicitly --- it would, period.\nIf implicit conversions can only follow specific \"promotability\" links\nthen we don't have this risk.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 11:50:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Casting, again " }, { "msg_contents": "> If implicit conversions can only follow specific \"promotability\" links\n> then we don't have this risk.\n\nWe've had some evolution in how we do type coercion and conversion,\nand istm that we are about ready to take the next step. Before, there\nwere only a few folks thinking about it, and we implemented some steps\nto test out new ideas without making fundamental changes outside the\nparser. Now, we can make some more improvements based on experience\nwith the current system.\n\nI like the idea of having some table-driven rules for implicit type\ncoercion, since that technique can extend to user-defined types. 
The\nissues we have regarding string types and numeric types need to be\nthought through in the context of having *more* string types, which\nafaik is how we are going to integrate multibyte character sets into\nbasic Postgres.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 16 May 2000 01:21:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Casting, again" } ]
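[Editor's note: the v7.0 split described in this thread, condensed into one session; the geometric call is a sketch that assumes the usual lseg literal syntax:

    SELECT char_length('abcde');            -- SQL92 length function for strings: 5
    SELECT length(lseg '((0,0),(3,4))');    -- length() retained for geometric types: 5
    SELECT length(755);                     -- accepted via the new int-to-text
                                            -- conversion, giving 3

Whether that last call should be accepted implicitly at all is the open question of the thread.]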
[ { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I think your plan looks good for the numerical land. (I'll ponder the oid\n> issues in a second.) For other type categories, perhaps not. Should a line\n> be promoted to a polygon so you can check if it contains a point? Or a\n> polygon to a box? Higher dimensions? :-)\n\nLine->polygon, probably not. On the other hand I can certainly imagine\nthat box->polygon would be a reasonable promotion. The point of the\nproposal is to make these choices table-driven, in any event; so they'd\nbe fairly easy to change if someone didn't like them.\n\n> [ enumerates cases where casting is needed ]\n> e) The function is overloaded for many types, amongst which is text. Then\n> call the text version. I believe this would currently fail, which I'd\n> consider a deficiency.\n\nThis seems to be the only case that's really worth debating. Is it\nbetter to fail (drawing attention to the ambiguity) than to make a\ndefault assumption? I tend to agree that we want a default, but\nreasonable people might disagree.\n\n> The fact that an oid is also a number should be an implementation detail.\n\nCould be. A version or three ago you actually did have to write\n\n\t... where oid = 1234::oid\n\nif you wanted to refer to a specific row by OID. However, while it\nmight be logically purer to insist that OIDs are not numbers, it's just\ntoo damn handy to be laxer about the distinction. I've used both the\nold and new behavior and I much prefer the new. If you want an actual\nargument for it: I doubt that ordinary users touch OIDs at all, and\nthe ones who do probably know what they're doing. You might see some\ninconsistency between my position on OIDs and my position on booleans\n(where I *don't* want cast-free conversions), but I draw the distinction\nfrom experience about what sorts of operations are useful and how many\nreal-life errors can be caught by hard-line error checks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 10:52:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type conversion discussion " }, { "msg_contents": "Tom Lane writes:\n\n> > The fact that an oid is also a number should be an implementation detail.\n> \n> Could be. A version or three ago you actually did have to write\n> \n> \t... where oid = 1234::oid\n> \n> if you wanted to refer to a specific row by OID. However, while it\n> might be logically purer to insist that OIDs are not numbers, it's just\n> too damn handy to be laxer about the distinction.\n\nDefinitely. But wouldn't three (or six) extra `=' operators be the road of\nleast resistance or clearest separation? Not sure.\n\n> I doubt that ordinary users touch OIDs at all, and the ones who do\n> probably know what they're doing.\n\nCertain elements around these parts actively advocate using oids for keys\nor even unsigned numbers (*shudder*). I wouldn't be so sure about this\nstatement at all.\n\nOne thing to keep in mind in any case is that oids might not be int4-like\nforever, eventually we might want int8, or the unsigned version thereof.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 15 May 2000 21:09:01 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type conversion discussion " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> if you wanted to refer to a specific row by OID. 
However, while it\n>> might be logically purer to insist that OIDs are not numbers, it's just\n>> too damn handy to be laxer about the distinction.\n\n> Definitely. But wouldn't three (or six) extra `=' operators be the road of\n> least resistance or clearest separation? Not sure.\n\nActually, that's what we've got now: \"oid = 1234\" gets parsed into the\noideqint4 operator. What bugs me about that is the shenanigans the\noptimizer has to pull to use an index on the oid column. I'm hoping\nthat we can clean up this mess enough so that the operator delivered by\nthe parser is the same thing the column's index claims to use in the\nfirst place.\n\n> One thing to keep in mind in any case is that oids might not be int4-like\n> forever, eventually we might want int8, or the unsigned version thereof.\n\nAgreed, but with any luck that case will work transparently too: the\nconstant will just get promoted up to int8 before we apply the operator.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 15:50:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type conversion discussion " } ]
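[Editor's note: the two spellings under discussion, side by side:

    SELECT relname FROM pg_class WHERE oid = 1234;        -- parsed as oideqint4
    SELECT relname FROM pg_class WHERE oid = 1234::oid;   -- plain oid = oid,
                                                          -- matching the index opclass

Both forms return the same rows; the difference is which operator the parser delivers, and therefore how much work the optimizer must do to use an oid index.]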
[ { "msg_contents": "on 5/15/00 2:15 AM, Michael A. Olson at [email protected] wrote:\n\n> Berkeley DB is Open Source. It's free for use in other Open Source\n> projects, like PostgreSQL. If a developer wants to use it in a\n> proprietary application, then the developer needs to pay Sleepycat\n> a licensing fee -- that's how we make our living. But Open Source\n> projects don't have to pay us anything. You can download the full\n> package from our Web site at www.sleepycat.com.\n\nI have to add my 0.02 to this issue. I read the informal description of the\nSleepycat license (http://www.sleepycat.com/licensing.html). It looks like a\ncommercial twist on BSD with a GPL sense to it.\n\nIf this were a totally new product, I think it might be an acceptable\nlicense, a compromise between BSD's total freedom and the GPL's push to keep\nthings open-source. However, given that there are existing users of Postgres\nwho probably use the binary without distributing source, this license is\nsignificantly more restrictive than the previous one, and would force\ncurrent users to review their practices.\n\nIt seems that forcing this new license on Postgres would be\ncounter-productive at a time when the user base seems to be on the rise and\nPostgreSQL is starting to make a new, quality name for itself in the\nOpen-Source community.\n\nThis is by no means a judgement of Berkeley DB Data Store as a product, just\na point about current users of PostgreSQL and their expectations.\n\n-Ben\n\n\n", "msg_date": "Mon, 15 May 2000 11:12:48 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" }, { "msg_contents": "> on 5/15/00 2:15 AM, Michael A. Olson at [email protected] wrote:\n> \n> > Berkeley DB is Open Source. It's free for use in other Open Source\n> > projects, like PostgreSQL. If a developer wants to use it in a\n> > proprietary application, then the developer needs to pay Sleepycat\n> > a licensing fee -- that's how we make our living. But Open Source\n> > projects don't have to pay us anything. You can download the full\n> > package from our Web site at www.sleepycat.com.\n> \n> I have to add my 0.02 to this issue. I read the informal description of the\n> Sleepycat license (http://www.sleepycat.com/licensing.html). It looks like a\n> commercial twist on BSD with a GPL sense to it.\n> \n> If this were a totally new product, I think it might be an acceptable\n> license, a compromise between BSD's total freedom and the GPL's push to keep\n> things open-source. However, given that there are existing users of Postgres\n> who probably use the binary without distributing source, this license is\n> significantly more restrictive than the previous one, and would force\n> current users to review their practices.\n> \n> It seems that forcing this new license on Postgres would be\n> counter-productive at a time when the user base seems to be on the rise and\n> PostgreSQL is starting to make a new, quality name for itself in the\n> Open-Source community.\n> \n> This is by no means a judgement of Berkeley DB Data Store as a product, just\n> a point about current users of PostgreSQL and their expectations.\n\nYes, it seems we would need some special arrangement from them. We\ndon't plan on changing our license just to use Sleepycat DB. In fact,\nthis is just an exploration. We don't even know if it will be a win for\nus. 
\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 14:14:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" }, { "msg_contents": "Benjamin Adida <[email protected]> writes:\n> If this were a totally new product, I think it might be an acceptable\n> license, a compromise between BSD's total freedom and the GPL's push to keep\n> things open-source. However, given that there are existing users of Postgres\n> who probably use the binary without distributing source, this license is\n> significantly more restrictive than the previous one, and would force\n> current users to review their practices.\n\nIMHO it would not be acceptable to make Postgres distribution even a\nlittle less free than it is now. However, it could be that we can get\naround that. According to the FAQ on Sleepycat's website, they consider\nthat the open-source restriction applies to the software that directly\ncalls Berkeley DB --- which would be Postgres. Code that sits atop\nPostgres could still be proprietary without triggering licensing\nrequirements. So their existing policy is already close to what we'd\nneed.\n\nAlso, I note (forget if this is on their website or if it was in last\nnight's private email) that Sleepycat have cut some sort of special\nlicensing deal with Gnome to persuade the Gnome folks that it's OK for\nthem to depend on Berkeley DB. So I expect they'd be open to making\na similar deal with us. I think a written, signed agreement between\nSleepycat and us, guaranteeing that Berkeley DB + Postgres could be\ndistributed as freely as Postgres is now, is possible and would solve\neveryone's concerns on this issue.\n\nI'm more concerned about the technical issues, the biggest of which\nis how we can preserve MVCC semantics. Mike made a good point that\nusers don't care about implementation technology --- but they do care\nabout results, and MVCC's lack of locking (readers don't block writers\nnor vice versa) is an important result that I'm unwilling to give up.\nI'm also dissatisfied with the idea of going through a \"Recno\" access\nmethod to get at heap data; that sounds like a recipe for serious\nperformance degradation. However, maybe we could address issues like\nthat by working with the Sleepycat guys to develop a true heap access\nmethod within their framework. The MVCC issue is more serious because\nI'm not sure that could be added to their framework after-the-fact.\n\nIf we go into this at all, we'd certainly *not* want to take the\nattitude that Berkeley DB is a closed box that we don't get to mess\nwith. It's open source and we'd be contributing improvements to it,\nprobably some pretty major ones. In effect we'd become partners with\nthe Sleepycat guys --- and so another big issue is how comfortable we\nwould be working together. 
But it could be a win-win proposition if\nwe join forces to produce better software than either group could do\nalone.\n\nI'm not at all sold that this is a workable proposal --- but I think\nit has enough potential to be worth some close examination.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 15:36:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB " }, { "msg_contents": "> If we go into this at all, we'd certainly *not* want to take the\n> attitude that Berkeley DB is a closed box that we don't get to mess\n> with. It's open source and we'd be contributing improvements to it,\n> probably some pretty major ones. In effect we'd become partners with\n> the Sleepycat guys --- and so another big issue is how comfortable we\n> would be working together. But it could be a win-win proposition if\n> we join forces to produce better software than either group could do\n> alone.\n\nAnother option is to keep our heap table structure intact, and just\nSleepycat DB for our indexes. That may be a big win, with little\ndownside. Certainly something to think about. It may work better with\nMVCC, and allow fast sequential scans and fast heap access from the\nindexs, without having to go through the db structures to get to it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 15:55:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" }, { "msg_contents": "Several people have asked about the terms of the Berkeley DB license,\nand the conditions under which users need to pay Sleepycat for\nredistribution.\n\nTo clarify, you're permitted to redistribute binary copies of your\napplication, including Berkeley DB, as long as source code is freely\navailable *somewhere*. Anyone could compile and sell PostgreSQL on\na CD without paying Sleepycat, because the source code remains\navailable on PostgreSQL.org.\n\nLots of people ship binary copies of the OpenLDAP directory server,\nwhich uses Berkeley DB. They don't pay us. Only the companies that\nship proprietary directory servers do.\n\nLicense fees are only required if you make a proprietary version of\nthe Open Source product. For example, if a vendor took PostgreSQL,\nmade changes to the backend, and didn't contribute those changes\nback to PostgreSQL.org, then the vendor would have to pay Sleepycat\nfor the right to redistribute our software as a part of the package.\n\nFor the purposes of this proposal, we'd consider the PostgreSQL\nbackend to be the embedding app -- that is, anyone could develop\nnew proprietary clients, since those don't directly embed our code.\nAnd we stipulate that dynamically-loaded functions that implement\nuser-defined types and functions don't constitute changes to the\nbackend.\n\nSo the only case in which a license fee would be required would\nbe if someone forked the backend and kept their changes proprietary.\nI'm not aware of anyone distributing a forked version of the\nbackend now, but I've been outside the community for a while now.\n\nI understand the implications of the BSD and GPL licenses, and why\nthey're appropriate or inappropriate for particular cases. 
If the\nBerkeley DB license imposes conditions on PostgreSQL that aren't\nin keeping with the desires of the developers, then of course the\nproposed project won't work.\n\nIf you've got additional questions on the license, ask away.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Mon, 15 May 2000 17:44:35 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "> Several people have asked about the terms of the Berkeley DB license,\n> and the conditions under which users need to pay Sleepycat for\n> redistribution.\n> \n> To clarify, you're permitted to redistribute binary copies of your\n> application, including Berkeley DB, as long as source code is freely\n> available *somewhere*. Anyone could compile and sell PostgreSQL on\n> a CD without paying Sleepycat, because the source code remains\n> available on PostgreSQL.org.\n> \n> Lots of people ship binary copies of the OpenLDAP directory server,\n> which uses Berkeley DB. They don't pay us. Only the companies that\n> ship proprietary directory servers do.\n> \n> License fees are only required if you make a proprietary version of\n> the Open Source product. For example, if a vendor took PostgreSQL,\n> made changes to the backend, and didn't contribute those changes\n> back to PostgreSQL.org, then the vendor would have to pay Sleepycat\n> for the right to redistribute our software as a part of the package.\n\nSeems this changes our license more toward GPL. I don't think that is\ngoing to be supportable by the group. I doubt we are willing to modify\nour license in order to use the Sleepycat DB code.\n\nWe don't use GPL code in PostgreSQL for the same reason.\n\n> I understand the implications of the BSD and GPL licenses, and why\n> they're appropriate or inappropriate for particular cases. If the\n> Berkeley DB license imposes conditions on PostgreSQL that aren't\n> in keeping with the desires of the developers, then of course the\n> proposed project won't work.\n\nSorry, looks like a deal killer. Of course, others will voice their\nopinions. This is just my guess.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 21:15:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "At 11:08 AM 5/16/00 +1000, you wrote:\n\n> Essentially, this is the same as the GPL licence, right?\n\nYes, except that companies that want to distribute proprietary versions\ncan pay for the privilege.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Mon, 15 May 2000 18:29:53 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "At 09:15 PM 5/15/00 -0400, you wrote:\n\n> Sorry, looks like a deal killer. Of course, others will voice their\n> opinions. This is just my guess.\n\nGiven the preferences that I've heard expressed so far, I think you're\nprobably right. And it sounds as if Vadim's making progress on a WAL\nthat will preserve multi-versioning, so that's another reason for you\nto stick with what you've got now.\n\nNever hurts to ask, though :-). 
If the situation changes at some\npoint, and you'd like to resume the discussion, we'd be glad to do\nthat.\n\t\t\t\t\tmike\n\n", "msg_date": "Mon, 15 May 2000 18:33:19 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "On Mon, 15 May 2000, Bruce Momjian wrote:\n\n> > Several people have asked about the terms of the Berkeley DB license,\n> > and the conditions under which users need to pay Sleepycat for\n> > redistribution.\n> > \n> > To clarify, you're permitted to redistribute binary copies of your\n> > application, including Berkeley DB, as long as source code is freely\n> > available *somewhere*. Anyone could compile and sell PostgreSQL on\n> > a CD without paying Sleepycat, because the source code remains\n> > available on PostgreSQL.org.\n> > \n> > Lots of people ship binary copies of the OpenLDAP directory server,\n> > which uses Berkeley DB. They don't pay us. Only the companies that\n> > ship proprietary directory servers do.\n> > \n> > License fees are only required if you make a proprietary version of\n> > the Open Source product. For example, if a vendor took PostgreSQL,\n> > made changes to the backend, and didn't contribute those changes\n> > back to PostgreSQL.org, then the vendor would have to pay Sleepycat\n> > for the right to redistribute our software as a part of the package.\n> \n> Seems this changes our license more toward GPL. I don't think that is\n> going to be supportable by the group. I doubt we are willing to modify\n> our license in order to use the Sleepycat DB code.\n\nI don't know ... I read this as totally anti-GPL ... \"you are more than\nwelcome to distribute binary only, but then you have to pay us for use of\nour libraries\" ...\n\n... the only aspect that would worry me is if Sleepycat were to change\ntheir license and make it more restrictive ...\n\n", "msg_date": "Mon, 15 May 2000 22:39:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "> > Seems this changes our license more toward GPL. I don't think that is\n> > going to be supportable by the group. I doubt we are willing to modify\n> > our license in order to use the Sleepycat DB code.\n> \n> I don't know ... I read this as totally anti-GPL ... \"you are more than\n> welcome to distribute binary only, but then you have to pay us for use of\n> our libraries\" ...\n> \n> ... the only aspect that would worry me is if Sleepycat were to change\n> their license and make it more restrictive ...\n\nBut it ties the hands of binary-only distributors, or pay them. Not a\ngood choice.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 21:40:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "On Mon, 15 May 2000, Bruce Momjian wrote:\n\n> > > Seems this changes our license more toward GPL. I don't think that is\n> > > going to be supportable by the group. I doubt we are willing to modify\n> > > our license in order to use the Sleepycat DB code.\n> > \n> > I don't know ... I read this as totally anti-GPL ... 
\"you are more then\n> > welcome to distribute binary only, but then you have to pay us for use of\n> > our libraries\" ...\n> > \n> > ... the only aspect that would worry me is if SleepCat were to change\n> > their license and make it more restrictive ...\n> \n> But it ties the hands of binary-only distributors, or pay them. Not a\n> good choice.\n\nWoah here ... didn't Michael state that binary-only was okay, as long as\nthe source *was* available on the 'Net? ie. Enhydra can distribute their\nbinaries, as long as sources were still available on postgresql.org?\n\n\n\n", "msg_date": "Mon, 15 May 2000 22:48:19 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "At 11:52 AM 5/16/00 +1000, Chris Bitmead wrote:\n\n> That's no different to GPL. One is always free to negotiate with the\n> author a fee to not have to abide the standard licence. Out of\n> curiousity, any reason you didn't just use the GPL as-is since it is the\n> same idea?\n\nBerkeley DB already carried the University of California copyright,\nand we couldn't remove it or the conditions it imposed. We'd have\npreferred the GPL but we'd have had to craft a more complicated and\nnewer license the preserved the UC terms and the GNU terms.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Mon, 15 May 2000 19:02:21 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "> On Mon, 15 May 2000, Bruce Momjian wrote:\n> \n> > > > Seems this changes our license more toward GPL. I don't think that is\n> > > > going to be supportable by the group. I doubt we are willing to modify\n> > > > our license in order to use the Sleepycat DB code.\n> > > \n> > > I don't know ... I read this as totally anti-GPL ... \"you are more then\n> > > welcome to distribute binary only, but then you have to pay us for use of\n> > > our libraries\" ...\n> > > \n> > > ... the only aspect that would worry me is if SleepCat were to change\n> > > their license and make it more restrictive ...\n> > \n> > But it ties the hands of binary-only distributors, or pay them. Not a\n> > good choice.\n> \n> Woah here ... didn't Michael state that binary-only was okay, as long as\n> the source *was* available on the 'Net? ie. Enhydra can distribute their\n> binaries, as long as sources were still available on postgresql.org?\n\nBut that limits companies from distributing binary-only versions where\nthey don't want to give out the source.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 22:05:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Woah here ... didn't Michael state that binary-only was okay, as long as\n>> the source *was* available on the 'Net? ie. 
Enhydra can distribute their\n>> binaries, as long as sources were still available on postgresql.org?\n\n> But that limits companies from distributing binary-only versions where\n> they don't want to give out the source.\n\nThe way I read it was that as long as *we* are making Postgres source\navailable, people using Postgres as a component wouldn't have to, nor\nmake their own source available which'd probably be the real issue.\n\nOTOH, there'd still be a problem with distributing slightly-modified\nversions of Postgres --- that might require a Sleepycat license.\n\nOn the whole this seems like a can of worms better left unopened.\nWe don't want to create questions about whether Postgres is free\nor not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 22:46:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...) " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Woah here ... didn't Michael state that binary-only was okay, as long as\n> >> the source *was* available on the 'Net? ie. Enhydra can distribute their\n> >> binaries, as long as sources were still available on postgresql.org?\n> \n> > But that limits companies from distributing binary-only versions where\n> > they don't want to give out the source.\n> \n> The way I read it was that as long as *we* are making Postgres source\n> available, people using Postgres as a component wouldn't have to, nor\n> make their own source available which'd probably be the real issue.\n> \n> OTOH, there'd still be a problem with distributing slightly-modified\n> versions of Postgres --- that might require a Sleepycat license.\n> \n> On the whole this seems like a can of worms better left unopened.\n> We don't want to create questions about whether Postgres is free\n> or not.\n\nAgreed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 22:46:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" }, { "msg_contents": "Seems like something went wrong with the new Majordomo. I used to get ~50-60\nmessages per day from pgsql-hackers and interfaces lists. Now I get 1-2 per\nday. It started last week. Apparently something is wrong with the majordomo\nsetup.\n\nGene Sokolov.\n\n\n\n\n", "msg_date": "Tue, 16 May 2000 10:30:22 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Problems with the new Majordomo 2." }, { "msg_contents": "\nPRS ... PostReleaseSyndrome ... ppl are rebuilding up their energy,\nwatching for bugs while waiting for development to start up again.\n\nOn Tue, 16 May 2000, Gene Sokolov wrote:\n\n> Seems like something went wrong with the new Majordomo. I used to get ~50-60\n> messages per day from pgsql-hackers and interfaces lists. Now I get 1-2 per\n> day. It started last week. Apparently something is wrong with the majordomo\n> setup.\n> \n> Gene Sokolov.\n> \n> \n> \n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 16 May 2000 08:54:27 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with the new Majordomo 2." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> PRS ... PostReleaseSyndrome ... ppl are rebuilding up their energy,\n> watching for bugs while waiting for development to start up again.\n\nIf he's only getting one or two messages a day, then something's surely\nbroken! According to my maillog there were 92 messages on the pghackers\nlist yesterday, and another 20+ on interfaces.\n\nHowever, I'm inclined to think that the breakage is on his end, since\nno one else is complaining ...\n\nBTW, did I mention that I confirmed majordomo had lost my subscription\nto pgsql-patches? I did a who on the list and found no entry. Curious.\nDoesn't seem to have happened on any other list, but it explains why\nI've seen no patches for a good while.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 May 2000 10:29:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with the new Majordomo 2. " }, { "msg_contents": "On Tue, 16 May 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > PRS ... PostReleaseSyndrome ... ppl are rebuilding up their energy,\n> > watching for bugs while waiting for development to start up again.\n> \n> If he's only getting one or two messages a day, then something's surely\n> broken! According to my maillog there were 92 messages on the pghackers\n> list yesterday, and another 20+ on interfaces.\n> \n> However, I'm inclined to think that the breakage is on his end, since\n> no one else is complaining ...\n\nRegardless of which end is at fault, I am looking at this from our end\nanyway ... I'm wondering if maybe there was a connection problem to his\nsystem and the mail system itself just bounced it ...\n\n> BTW, did I mention that I confirmed majordomo had lost my subscription\n> to pgsql-patches? I did a who on the list and found no entry. \n> Curious. Doesn't seem to have happened on any other list, but it\n> explains why I've seen no patches for a good while.\n\nI'm embarrassed to say that the Mj2 guys took a look at our system logs\nand, ummmm, it doesn't look like some ppl were subscribed to the\npgsql-patches list when we first moved it over :( Looks like definitely\noperator error on this one *sigh*\n\n\n", "msg_date": "Tue, 16 May 2000 13:08:07 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with the new Majordomo 2. " }, { "msg_contents": "From: \"The Hermit Hacker\" <[email protected]>\n> On Tue, 16 May 2000, Tom Lane wrote:\n> > The Hermit Hacker <[email protected]> writes:\n> > > PRS ... PostReleaseSyndrome ... ppl are rebuilding up their energy,\n> > > watching for bugs while waiting for development to start up again.\n> >\n> > If he's only getting one or two messages a day, then something's surely\n> > broken! According to my maillog there were 92 messages on the pghackers\n> > list yesterday, and another 20+ on interfaces.\n> >\n> > However, I'm inclined to think that the breakage is on his end, since\n> > no one else is complaining ...\n>\n> Regardless of which end is at fault, I am looking at this from our end\n> anyway ... 
I'm wondering if maybe there was a connection problem to his\n> system and the mail system itself just bounced it ...\n\nThanks for helping me. Unfortunately your subscription reset did not help.\nThe traffic is still the same. I don't believe the problem is on my end\nbecause the traffic I get from the freebsd-stable as well as a couple of\nother lists is usual. I also get a large traffic from customers, which is\nnot affected in any way. Just the two postgres lists are affected.\n\n I vaguely remember getting a couple of test messages from one of the pg\nlists about ten days ago and I think the volume dropped right after that.\n\n We did have a connectivity outage some time ago when both our primary\nand secondary MXs were unavailable, but it did not last longer than a couple\nof hours. If Majordomo unsubscribed me, I wonder why I still get any traffic\nfrom pg and why the subscription reset did not help.\n\nGene Sokolov.\n\n\n\n\n", "msg_date": "Wed, 17 May 2000 10:39:38 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with the new Majordomo 2. " }, { "msg_contents": "On Wed, 17 May 2000, Gene Sokolov wrote:\n\n> From: \"The Hermit Hacker\" <[email protected]>\n> > On Tue, 16 May 2000, Tom Lane wrote:\n> > > The Hermit Hacker <[email protected]> writes:\n> > > > PRS ... PostReleaseSyndrome ... ppl are rebuilding up their energy,\n> > > > watching for bugs while waiting for development to start up again.\n> > >\n> > > If he's only getting one or two messages a day, then something's surely\n> > > broken! According to my maillog there were 92 messages on the pghackers\n> > > list yesterday, and another 20+ on interfaces.\n> > >\n> > > However, I'm inclined to think that the breakage is on his end, since\n> > > no one else is complaining ...\n> >\n> > Regardless of which end is at fault, I am looking at this from our end\n> > anyway ... I'm wondering if maybe there was a connection problem to his\n> > system and the mail system itself just bounced it ...\n> \n> Thanks for helping me. Unfortunately your subscription reset did not help.\n> The traffic is still the same. I don't believe the problem is on my end\n> because the traffic I get from the freebsd-stable as well as a couple of\n> other lists is usual. I also get a large traffic from customers, which is\n> not affected in any way. Just the two postgres lists are affected.\n> \n> I vaguely remember getting a couple of test messages from one of the pg\n> lists about ten days ago and I think the volume dropped right after that.\n> \n> We did have a connectivity outage some time ago when both our primary\n> and secondary MXs were unavailable, but it did not last longer than a couple\n> of hours. If Majordomo unsubscribed me, I wonder why I still get any traffic\n> from pg and why the subscription reset did not help.\n\nAre you running any kind of filtering software? Procmail or perhaps\nsomething built into OE? Maybe there's a header that you're keying off\nof that's dumping the mail to the trash. Around the time of the test\nmessages was the mail2news gateway stuff, I don't know if there were any\nmore headers put in tho. 
(haven't paid attention)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 17 May 2000 06:52:05 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with the new Majordomo 2. " }, { "msg_contents": "From: \"Vince Vielhaber\" <[email protected]>\n> Are you running any kind of filtering software? Procmail or perhaps\n> something built into OE? Maybe there's a header that you're keying off\n> of that's dumping the mail to the trash. Around the time of the test\n> messages was the mail2news gateway stuff, I don't know if there were any\n> more headers put in tho. (haven't paid attention)\n\nNo, I don't have any filters. But even if the mail was dumped to the trash,\nI should have seen it there.\n\n I just checked maillog on our primary MX (sendmail 8.9.3). Grepping for\nhub.org and postgresql.org produced just 2 entries. Grepping the same log\nfor freebsd.org produced a few dozen entries which is about the right number\nfor the traffic there. This evidently rules out any settings local to my\nworkstation. I tend to believe that majordomo is not even trying to deliver\nanything to me.\n\n Is it possible that majordomo2 can automatically consider some domains\n\"bad\" and not deliver? Something like compiling a list of domains where it\nonce failed to deliver mail?\n\nGene Sokolov.\n\n\n\n", "msg_date": "Wed, 17 May 2000 15:40:15 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with the new Majordomo 2. " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> I don't know ... I read this as totally anti-GPL ... \"you are more than\n> welcome to distribute binary only, but then you have to pay us for use of\n> our libraries\" ...\n\nGPL (as opposed to LGPL) has been used for that purpose - either GPL\nyour code, or if you want to keep it proprietary: Pay us, and we'll\ngive you the same code with a different license. \n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "17 May 2000 07:56:53 -0400", "msg_from": "[email protected] (Trond Eivind Glomsrød)", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license terms (was Re: Proposal...)" } ]
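As a footnote to the MVCC property defended earlier in this thread (readers never block writers, nor vice versa), the behavior at stake can be stated as a two-session sketch; the table and values here are hypothetical:

    -- session 1                        -- session 2
    BEGIN;
    UPDATE accounts SET balance = 0
     WHERE id = 1;
                                        SELECT balance FROM accounts
                                         WHERE id = 1;
                                        -- under MVCC this returns the old row
                                        -- version immediately; under plain
                                        -- two-phase locking it would block
                                        -- until session 1 commits
    COMMIT;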
[ { "msg_contents": "Tatsuo Ishii wrote:\n\n> >This is interesting, but I wonder how do you handle \"alignment issue\"?\n> In different architectures members in a structure might be aligned\n> differently. For some complex data types we need to solve the issue\n> to use the cross architecture binary cursors.\n> Same thing can be said for the floating format issue...\n> --\n> Tatsuo Ishii\n\nI'm not quite sure what \"alignment\" means. The data we've been working with\nhave been floats (usually float4), integers (usually int4), and variable\nlength or fixed length arrays of both. The only \"trick\" is to skip the first\n16 bytes in the binary cursor which we've always assumed was some sort of\nheader. (Of course, you also have to know the size in bytes of the datatype--\nbut this is always known a priori). Perhaps we've just been lucky with the\nLinux-->SGI transfers. It would be interesting to know if others have tried\nbinary cursors across architectures. Perhaps the pitfalls show up for other\ndatatypes or other architectures. (?)\n\n-Tony\n\n\n", "msg_date": "Mon, 15 May 2000 09:10:24 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Binary cursor across computers with differentarchitectures" } ]
[ { "msg_contents": "I've just uploaded to my site the current JDBC driver.\n\nFor JDBC2 (ie JDK1.2.x and JDK1.3.x):\n\thttp://www.retep.org.uk/postgres/jdbc7.0-1.2.jar\n\nFor JDBC2 (ie JDK1.1.x, 1.1.7 or later advised):\n\thttp://www.retep.org.uk/postgres/jdbc7.0-1.1.jar\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n", "msg_date": "Mon, 15 May 2000 17:25:42 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "JDBC 7.0 binaries" } ]
[ { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Mon, 15 May 2000, Peter Eisentraut wrote:\n> \n> > On Mon, 15 May 2000, The Hermit Hacker wrote:\n> >\n> > > Hrmmm, some sort of --with-berkeley-db configure switch, so by default, it\n> > > uses ours, but if someone wants to do the db code, it could plug-n-play?\n> >\n> > But wasn't the main reason Michael Olson gave that a lot of code could be\n> > removed because Berkeley DB does it for you? But with that switch we'd end\n> > up with more code, not less.\n> \n> right, and my point was that, up until now, we've worked at making sure\n> that the whole thing is self-contained ... as soon as we throw in a\n> third-party piece of software that is *efffectively* our guts, we now\n> throw in a new point of failure for the end users ... what happens if, a\n> year down the road, SleepyCat decides that v4.0 falls undera new license\n> that negates our ability to use it? we've just drop'd all our guts in\n> favor of theirs and now what?\n\nThere could be some ways to get a twisted license (like Medusa used in\nZope)\nwhere the Berkeley DB used in PostgreSQL is free but used without\npostgres \nis still under the original Sleepycat terms.\n\nThat arrangement seems to work quite nicely with Zope.\n\nI still don't see how we could replace some part of storage manager and \naccess methods guts with Berkeley DB and still keep the extended\nfeatures\nlike R-trees and MVCC (and sure there are others), and integrate two\ntypes \nof transaction management on top of them.\n\n> I'm not saying that using some of SleepyCat's stuff for backend is a bad\n> idea, but I'm saying that we shouldn't be relying on it ... add on, yes ...\n\nBut what would the idea of such add-on be ? \n\nDoes it offer real advantages over our current scheme ?\nIf so, is the integrating effort significantly less than fixing what we\nhave ?\n\nBTW, is there a general-purpose optimisation library available that we \ncould use instead of our current one ? ;)\n\n-----------------\nHannu\n", "msg_date": "Mon, 15 May 2000 20:03:30 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" }, { "msg_contents": "On Mon, 15 May 2000, Peter Eisentraut wrote:\n\n> On Mon, 15 May 2000, The Hermit Hacker wrote:\n> \n> > Hrmmm, some sort of --with-berkeley-db configure switch, so by default, it\n> > uses ours, but if someone wants to do the db code, it could plug-n-play?\n> \n> But wasn't the main reason Michael Olson gave that a lot of code could be\n> removed because Berkeley DB does it for you? But with that switch we'd end\n> up with more code, not less.\n\nright, and my point was that, up until now, we've worked at making sure\nthat the whole thing is self-contained ... as soon as we throw in a\nthird-party piece of software that is *efffectively* our guts, we now\nthrow in a new point of failure for the end users ... what happens if, a\nyear down the road, SleepyCat decides that v4.0 falls undera new license\nthat negates our ability to use it? we've just drop'd all our guts in\nfavor of theirs and now what?\n\nI'm not saying that using some of SleepyCat's stuff for backend is a bad\nidea, but I'm saying that we shouldn't be relying on it ... add on, yes\n... 
exclusive, no ...\n\n", "msg_date": "Mon, 15 May 2000 14:36:27 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" }, { "msg_contents": "> right, and my point was that, up until now, we've worked at making sure\n> that the whole thing is self-contained ... as soon as we throw in a\n> third-party piece of software that is *effectively* our guts, we now\n> throw in a new point of failure for the end users ... what happens if, a\n> year down the road, SleepyCat decides that v4.0 falls under a new license\n> that negates our ability to use it? we've just drop'd all our guts in\n> favor of theirs and now what?\n> \n> I'm not saying that using some of SleepyCat's stuff for backend is a bad\n> idea, but I'm saying that we shouldn't be relying on it ... add on, yes\n> ... exclusive, no ...\n\nWe could get perpetual rights to the code as integrated into our code. \nAlso, if they change something, we could always take it as our own and\nkeep it working for us. I think we would need something like that.\n\nIt sort of goes to how open we are. Someone can always take PostgreSQL\nand create a branch if we do a terrible job. We would need that\nassurance of the Sleepycat DB.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 14:09:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" }, { "msg_contents": ">\n>We could get perpetual rights to the code as integrated into our code. \n>Also, if they change something, we could always take it as our own and\n>keep it working for us. I think we would need something like that.\n>\n\nOne of the often-stated virtues of PGSQL is that it is easy for a company\nto take the source and go commercial. If you start integrating 'special\nlicense agreements' into the development, then that advantage is severely\nreduced. \n\nA commercial operator has to form an agreement with sleepycat or rewrite\nthe storage manager. Unless sleepycat grant a completely open license to\nPGSQL and all its commercial descendants in perpetuity, it seems you may\nbe removing one of the selling points of PGSQL.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 16 May 2000 10:02:19 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" }, { "msg_contents": "> >\n> >We could get perpetual rights to the code as integrated into our code. \n> >Also, if they change something, we could always take it as our own and\n> >keep it working for us. I think we would need something like that.\n> >\n> \n> One of the often-stated virtues of PGSQL is that it is easy for a company\n> to take the source and go commercial. If you start integrating 'special\n> license agreements' into the development, then that advantage is severely\n> reduced. 
\n> \n> A commercial operator has to form an agreement with sleepycat or rewrite\n> the storage manager. Unless sleepycat grant a completely open license to\n> PGSQL and all it's commercial descendants in perpetuity, it seems you may\n> be removing one of the seeling points of PGSQL.\n\nYes, something like this would be required.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 15 May 2000 20:27:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" } ]
[ { "msg_contents": "On Mon, 15 May 2000, Peter Eisentraut wrote:\n\n> On Mon, 15 May 2000, The Hermit Hacker wrote:\n> \n> > Everythingn up to here sounds great ... but this part here totally throws\n> > me off ... this would mean that, unlike now where we rely on *zero*\n> > external code,\n> \n> ... where `zero' is defined as regex package, GNU make, Autoconf, Flex,\n> Perl, multibyte code ...\n\nwhere zero is defined as \"I can build a binary, put it up on the ftp site,\nand nobody has any other requirements in order to use it\" ...\n\n> > Effectively, if at some point down the road, the SleepyCat license\n> > changes, the whole project just gets slam'd for a loop ...\n> \n> Hmm, didn't you recently dismiss the argument \"What if at some point down\n> the road PostgreSQL Inc./Great Bridge/Evil Empire changes the\n> license/abducts the source code of PostgreSQL\" with \"use the last free\n> version\"?\n\nOkay, then are we merging SleepyCat's code into ours, and distributing\ntheir code? Or are we relying on someone having a copy of the libraries\nalready installed on their machine? \n\n\n", "msg_date": "Mon, 15 May 2000 15:10:48 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" } ]
[ { "msg_contents": "Hi\n\nI just tried to download the latest snap. However as I do not recall the\nexact location I need to see the directory index. \n\nBut it is not visible. I tried ncftp/solaris and Netscape from Linux. The\nresult is the same. It can't find any files.\n\nls -l\ndir \n\nboth has no effect whatsoever.. Is this a problem for anyone else ?\n\n/Frank\n\n\n", "msg_date": "Mon, 15 May 2000 20:43:16 +0200 (MET DST)", "msg_from": "Frank G Hahn <[email protected]>", "msg_from_op": true, "msg_subject": "FTP-sever ftp.postgresql.org unable to get dir-list ?" }, { "msg_contents": "Frank G Hahn <[email protected]> writes:\n> But it is not visible. I tried ncftp/solaris and Netscape from Linux. The\n> result is the same. It can't find any files.\n\nhub.org suffered a disk failure this morning. I think Marc is still\npicking up the pieces. Try a mirror site if you're in a hurry.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 15:07:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTP-sever ftp.postgresql.org unable to get dir-list ? " }, { "msg_contents": "\ndisk crash. still restoring.\n\nVince.\n\n\n\nOn Mon, 15 May 2000, Frank G Hahn wrote:\n\n> Hi\n> \n> I just tried to download the latest snap. However as I do not recall the\n> exact location I need to see the directory index. \n> \n> But it is not visible. I tried ncftp/solaris and Netscape from Linux. The\n> result is the same. It can't find any files.\n> \n> ls -l\n> dir \n> \n> both has no effect whatsoever.. Is this a problem for anyone else ?\n> \n> /Frank\n> \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 15 May 2000 15:16:39 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTP-sever ftp.postgresql.org unable to get dir-list\n ?" }, { "msg_contents": "\nthanks, fixed now ...\n\nOn Mon, 15 May 2000, Frank G Hahn wrote:\n\n> Hi\n> \n> I just tried to download the latest snap. However as I do not recall the\n> exact location I need to see the directory index. \n> \n> But it is not visible. I tried ncftp/solaris and Netscape from Linux. The\n> result is the same. It can't find any files.\n> \n> ls -l\n> dir \n> \n> both has no effect whatsoever.. Is this a problem for anyone else ?\n> \n> /Frank\n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 15 May 2000 16:42:19 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTP-sever ftp.postgresql.org unable to get dir-list\n ?" } ]
[ { "msg_contents": "\nmost of the first set of documents are not found.\n\nNot Found\n\nThe requested URL /docs/user/index.html was not found on this server.\n\nAdditionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the\nrequest. \n\nApache/1.3.9 Ben-SSL/1.37 Server at www.postgresql.org Port 80\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]\n", "msg_date": "Mon, 15 May 2000 14:44:26 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": true, "msg_subject": "broken links on http://www.postgresql.org/doxlist.html" }, { "msg_contents": "\n\ndisk crash. still restoring.\n\nVince.\n\n\n\n\nOn Mon, 15 May 2000, Jim Mercer wrote:\n\n> \n> most of the first set of documents are not found.\n> \n> Not Found\n> \n> The requested URL /docs/user/index.html was not found on this server.\n> \n> Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the\n> request. \n> \n> Apache/1.3.9 Ben-SSL/1.37 Server at www.postgresql.org Port 80\n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 15 May 2000 15:17:05 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken links on http://www.postgresql.org/doxlist.html" }, { "msg_contents": "> disk crash. still restoring.\n\nDoes that make it easier or harder to roll out your new site\norganization?\n\nLet us know if we can help...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 16 May 2000 01:57:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken links on http://www.postgresql.org/doxlist.html" }, { "msg_contents": "On Tue, 16 May 2000, Thomas Lockhart wrote:\n\n> > disk crash. still restoring.\n> \n> Does that make it easier or harder to roll out your new site\n> organization?\n> \n> Let us know if we can help...\n\nHarder, I'm headed outa town Wed morning and am nowhere near ready to\ngo. Just now got one machine going so perhaps I'm begining to see\nlight at the end of the tunnel. I figure sometime in early June on\nthe new stuff - hopefully sooner, but it's hard to say right now.\n\nBTW, the documentation seems to be gone from the website, will that\nregenerate itself tonite? 
(of course it may have already, I haven't\nlooked in the last couple of hours)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 15 May 2000 22:05:19 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken links on http://www.postgresql.org/doxlist.html" } ]
[ { "msg_contents": "Tom Lane writes:\n\n> I expect what you are after is the ability to produce an 0-or-1\n> numeric value from a bool field, so that you could do things like\n> sum(boolfield::int) to count the number of true values in a column. \n> I agree that we need such an operator (and I'm surprised no one's\n> gotten round to contributing one).\n\nLet's contribute one now ...\n\nselect count(*) from test4 where a = true;\n\nselect sum( case when a then 1 else 0 end ) from test4;\n\n\n> But I don't agree that there\n> should be an implicit, automatic conversion from bool to int; that\n> defeats half of the error-checking value of having a separate type for\n> truth values in the first place.\n\nDefinitely.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 15 May 2000 20:56:51 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal for fixing numeric type-resolution issues " } ]
[ { "msg_contents": "Tom Lane writes:\n\n> I've noticed in other cases that it's willing to do implicit\n> conversion to text from practically anything. That strikes me as a\n> bug, or at least pretty darn unintuitive. Is this behavior\n> intentional, and if so what's the rationale?\n\nWell, I'm sure you know my stance on these things by now, but let me add\none last thing: What I'm missing from the type system is a governing\nstrategy. On the one hand there is strong typing, on the other hand\nimplicit casting across the board, but not when you would actually want\nit. This is usually where programming language paradigms clash, but note\nthat programming language design in the last two decades has clearly been\nmoving to one of two things: either strong and strict typing or no typing\nat all. I'm not saying I like strict like Java were you can't pass a\nliteral `5' as a `short int' argument without complaints but strict as in\na string is a string, a date is a date, a point is a point, a number is a\nnumber.\n\nI would go as far as saying that if you try to insert a 5 in a text field\nthen this should be an error, you must write '5'. Surely some might claim\nthat this is an inconvenience. Indeed, this is inconveniencing me because\npossible errors in string processing or even system logic are silently\ndropped under the table. Nobody ever got carpal tunnel syndrome because of\ntwo extra quotes, and if the SQL is machine-generated then fixing the\nprogram is the best thing in the long run anyway.\n\nThe previous paragraph agrees with SQL (see section 9.2).\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Mon, 15 May 2000 20:57:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Casting, again " }, { "msg_contents": "Your pg_excelencies,\n\nPeter Eisentraut wrote:\n\n> I would go as far as saying that if you try to insert a 5 in a text field\n> then this should be an error, you must write '5'. Surely some might claim\n> that this is an inconvenience. Indeed, this is inconveniencing me because\n> possible errors in string processing or even system logic are silently\n> dropped under the table. Nobody ever got carpal tunnel syndrome because of\n> two extra quotes, and if the SQL is machine-generated then fixing the\n> program is the best thing in the long run anyway.\n\nDid you think to a system with some abstract types (like numeric, char, date,\netc.) and fizical\ntypes within the abstract types (i.e. abstract numeric type holds physical\ninteger type, physical\nfloat type, phisical numeric type), and the only implicit cast that parser\ndoes to be\nbetween phisical types whithin the same abstract type ?\n\nRaul.\n\n\n\n\n", "msg_date": "Wed, 17 May 2000 03:02:17 +0300", "msg_from": "Raul Chirea <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Casting, again" } ]
[ { "msg_contents": "> > I've read this paper ~2 years ago. My plans so far were:\n> > \n> > 1. WAL in 7.1\n> > 2. New (overwriting) storage manager in 7.2\n> > \n> > Comments?\n> \n> Vadim,\n> \n> Perhaps best solution will be to keep both (or three) storage \n> managers - and specify which one to use at database creation time.\n> \n> After reading the Stonebraker's paper, I could think there \n> are situations that we want the no-overwrite storage manager and\n> other where overwrite storage manager may offer better performance.\n> Wasn't Postgres originally designed to allow different storage\n> managers?\n\nOverwriting and non-overwriting smgr-s have quite different nature.\nAccess methods would take care about what type of smgr is used for\nspecific table/index...\n\nVadim\n", "msg_date": "Mon, 15 May 2000 11:59:07 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL versus Postgres (or: what goes around, comes ar\n\t ound)" }, { "msg_contents": ">>>\"Mikheev, Vadim\" said:\n > > Perhaps best solution will be to keep both (or three) storage \n > > managers - and specify which one to use at database creation time.\n > > \n > > After reading the Stonebraker's paper, I could think there \n > > are situations that we want the no-overwrite storage manager and\n > > other where overwrite storage manager may offer better performance.\n > > Wasn't Postgres originally designed to allow different storage\n > > managers?\n > \n > Overwriting and non-overwriting smgr-s have quite different nature.\n > Access methods would take care about what type of smgr is used for\n > specific table/index...\n\nIn light of the discussion whether we can use Berkeley DB (or Sleepycat DB?) - \nperhaps it is indeed good idea to start working on the access methods layer - \nor perhaps just define more 'reasonable' SMGR layer at higher level than the \ncurrent Postgres code.\n\nThe idea is: (when) we have this storage manager layer, we could use different \nstorage managers (or heaps managers in current terms) to manage different \ntables/databases.\n\nMy idea to use different managers at the database level comes from the fact, \nthat we do not have transactions that span databases, and that transactions \nare probably the things that will be difficult to implement (in short time) \nfor heaps using different storage managers - such as one table no-overwrite, \nanother table WAL, third table Berkeley DB etc.\n\n From Vadim's response I imagine he considers this easier to implement...\n\nOn the license issue - it is unlikely PostgreSQL to rip off its storage \ninternals to replace everything with Berkeley DB. This may have worked three \nor five years ago, but the current storage manager is reasonable (especially \nits crash recovery - I have not seen any other DBMS that is even close to \nPostgreSQL in terms of 'cost of crash recovery' - this is anyway different \ntopic). But, if we have the storage manager layer, it may be possible to use \nBerkeley DB as an additional access method - for databases/applications that \nmay make benefit of it - performance wise and where license permits.\n\nDaniel\n\n", "msg_date": "Tue, 16 May 2000 10:03:58 +0300", "msg_from": "Daniel Kalchev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL versus Postgres (or: what goes around, comes ar\n ound)" }, { "msg_contents": "Yesterday I sent out a message explaining Sleepycat's standard\nlicensing policy with respect to binary redistribution. 
That\npolicy generally imposes GPL-like restrictions on the embedding\napplication, unless the distributor purchases a separate\nlicense.\n\nWe've talked it over at Sleepycat, and we're willing to write a\nspecial agreement for PostgreSQL's use of Berkeley DB. That\nagreement would permit redistribution of Berkeley DB with\nPostgreSQL at no charge, in binary or source code form, by any\nparty, with or without modifications to the engine.\n\nIn short, we can adopt the PostgreSQL license terms for PostgreSQL's\nuse of Berkeley DB.\n\nThe remaining issues are technical ones.\n\nRather than replacing just the storage manager, you'd be replacing\nthe access methods, buffer manager, transaction manager, and some\nof the shared memory plumbing with our stuff. I wasn't sufficiently\nclear in my earlier message, and talked about \"no-overwrite\" as if\nit were the only component.\n\nClearly, that's a lot of work. On the other hand, you'd have the\nbenefit of an extremely well-tested and widely deployed library to\nprovide those services. Lots of different groups have used the\nsoftware, so the abstractions that the API presents are well-thought\nout and work well in most cases.\n\nThe group is interested in multi-version concurrency control, so that\nreaders never block on writers. If that's genuinely critical, we'd\nbe willing to see some work done to add it to Berkeley DB, so that it\ncan do either conventional 2PL without versioning, or MV. Naturally,\nwe'd participate in any kind of design discussions you wanted, but\nwe'd like to see the PostgreSQL implementors build it, since you\nunderstand the feature you want.\n\nFinally, there's the question of whether a tree-based heap store with\nan artificial key will be as fast as the heap structure you're using\nnow. Benchmarking is the only way to know for sure. I don't believe\nthat this will be a major problem. The internal nodes of any btree\ngenerally wind up in the cache very quickly, and stay there because\nthey're hot. So you're not doing a lot of disk I/O to get a record\noff disk, you're chasing pointers in memory. We don't lose technical\nevaluations on performance, as a general thing; I think that you will\nbe satisfied with the speed.\n\n\n\t\t\t\t\tmike\n\n", "msg_date": "Tue, 16 May 2000 06:57:09 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Berkeley DB license" }, { "msg_contents": "At 06:57 16/05/00 -0700, Michael A. Olson wrote:\n>We've talked it over at Sleepycat, and we're willing to write a\n>special agreement for PostgreSQL's use of Berkeley DB. That\n>agreement would permit redistribution of Berkeley DB with\n>PostgreSQL at no charge, in binary or source code form, by any\n>party, with or without modifications to the engine.\n\nJust to clarify - if I take PostgreSQL, make a few minor changes to create\na commercial product called Boastgress, your proposed license would allow\nthe distribution of binaries for the new product without further\ninteraction, payments, or licensing from Sleepycat?\n\nSimilaryly, if changes were made to BDB, I would not have to send those\nchanges to you, nor would I have to make the source available?\n\nPlease don't misunderstand me - it seems to me that you are making a very\ngenerous offer, and I want to clarify that I have understood correctly.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 
008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 17 May 2000 00:28:18 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "At 12:28 AM 5/17/00 +1000, you wrote:\n\n> Just to clarify - if I take PostgreSQL, make a few minor changes to create\n> a commercial product called Boastgress, your proposed license would allow\n> the distribution of binaries for the new product without further\n> interaction, payments, or licensing from Sleepycat?\n\nCorrect.\n\n> Similaryly, if changes were made to BDB, I would not have to send those\n> changes to you, nor would I have to make the source available?\n\nAlso correct. However, the license would only permit redistribution of\nthe Berkeley DB software embedded in the PostgreSQL engine or the\nderivative product that the proprietary vendor distributes. The\nvendor would not be permitted to extract Berkeley DB from PostgreSQL\nand distribute it separately, as part of some other product offering\nor as a standalone embedded database engine.\n\nThe intent here is to clear the way for use of Berkeley DB in PostgreSQL,\nbut not to apply PostgreSQL's license to Berkeley DB for other uses.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Tue, 16 May 2000 07:44:45 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "> Also correct. However, the license would only permit redistribution of\n> the Berkeley DB software embedded in the PostgreSQL engine or the\n> derivative product that the proprietary vendor distributes. The\n> vendor would not be permitted to extract Berkeley DB from PostgreSQL\n> and distribute it separately, as part of some other product offering\n> or as a standalone embedded database engine.\n> \n> The intent here is to clear the way for use of Berkeley DB in PostgreSQL,\n> but not to apply PostgreSQL's license to Berkeley DB for other uses.\n\nTotally agree, and totally reasonable.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 May 2000 11:01:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "On Tue, 16 May 2000, Michael A. Olson wrote:\n\n> Rather than replacing just the storage manager, you'd be replacing\n> the access methods, buffer manager, transaction manager, and some\n> of the shared memory plumbing with our stuff. \n\nSo, basically, we rip out 3+ years or work on our backend and put an SQL\nfront-end over top of BerkleyDB? \n\n\n", "msg_date": "Tue, 16 May 2000 13:05:28 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "> On Tue, 16 May 2000, Michael A. Olson wrote:\n> \n> > Rather than replacing just the storage manager, you'd be replacing\n> > the access methods, buffer manager, transaction manager, and some\n> > of the shared memory plumbing with our stuff. \n> \n> So, basically, we rip out 3+ years or work on our backend and put an SQL\n> front-end over top of BerkleyDB? 
\n\nWell, if we look at our main componients,\nparser/rewrite/optimizer/executor, they stay pretty much the same. It\nis the lower level stuff that would change.\n\nNow, no one is suggesting we do this. The issue is exploring what gains\nwe could make in doing this.\n\nI would hate to throw out our code, but I would also hate to not make \nchange because we think our code is better without objectively judging\nours against someone else's.\n\nIn the end, we may find that the needs of a database for storage are\ndifferent enough that SDB would not be a win, but I think it is worth\nexploring to see if that is true.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 May 2000 12:19:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "At 01:05 PM 5/16/00 -0300, The Hermit Hacker wrote:\n\n> So, basically, we rip out 3+ years or work on our backend and put an SQL\n> front-end over top of BerkleyDB? \n\nI'd put this differently.\n\nGiven that you're considering rewriting the low-level storage code\nanyway, and given that Berkeley DB offers a number of interesting\nservices, you should consider using it.\n\nIt may make sense for you to leverage the 9+ years of work in\nBerkeley DB to save yourself a major reimplementation effort now.\n\nWe'd like you guys to make the decision on technical merit, so we\nagreed to the license terms you require for PostgreSQL.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Tue, 16 May 2000 09:27:11 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> In the end, we may find that the needs of a database for storage are\n> different enough that SDB would not be a win, but I think it is worth\n> exploring to see if that is true.\n\nActually, there are other possibilities, too. As a for-instance, it\nmight be interesting to see what a reiserfs-based storage manager\nlooks/performs like. \n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "Tue, 16 May 2000 12:33:17 -0400 (EDT)", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "On Tue, 16 May 2000, Bruce Momjian wrote:\n\n> > On Tue, 16 May 2000, Michael A. Olson wrote:\n> > \n> > > Rather than replacing just the storage manager, you'd be replacing\n> > > the access methods, buffer manager, transaction manager, and some\n> > > of the shared memory plumbing with our stuff. \n> > \n> > So, basically, we rip out 3+ years or work on our backend and put an SQL\n> > front-end over top of BerkleyDB? \n> \n> Now, no one is suggesting we do this. The issue is exploring what gains\n> we could make in doing this.\n\nDefinitely ... 
I'm just reducing it down to simpler terms, that's all :)\n\n\n", "msg_date": "Tue, 16 May 2000 14:48:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "> On Tue, 16 May 2000, Bruce Momjian wrote:\n> \n> > > On Tue, 16 May 2000, Michael A. Olson wrote:\n> > > \n> > > > Rather than replacing just the storage manager, you'd be replacing\n> > > > the access methods, buffer manager, transaction manager, and some\n> > > > of the shared memory plumbing with our stuff. \n> > > \n> > > So, basically, we rip out 3+ years or work on our backend and put an SQL\n> > > front-end over top of BerkleyDB? \n> > \n> > Now, no one is suggesting we do this. The issue is exploring what gains\n> > we could make in doing this.\n> \n> Definitely ... I'm just reducing it down to simpler terms, that's all :)\n\nI am glad you did. I like the fact we are open to re-evaluate our code\nand consider code from outside sources. Many open-source efforts have\nproblems with code-not-made-here.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 May 2000 13:52:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > On Tue, 16 May 2000, Bruce Momjian wrote:\n> >\n> > > > On Tue, 16 May 2000, Michael A. Olson wrote:\n> > > >\n> > > > > Rather than replacing just the storage manager, you'd be replacing\n> > > > > the access methods, buffer manager, transaction manager, and some\n> > > > > of the shared memory plumbing with our stuff.\n> > > >\n> > > > So, basically, we rip out 3+ years or work on our backend and put an SQL\n> > > > front-end over top of BerkleyDB?\n> > >\n> > > Now, no one is suggesting we do this. The issue is exploring what gains\n> > > we could make in doing this.\n> >\n> > Definitely ... I'm just reducing it down to simpler terms, that's all :)\n> \n> I am glad you did. I like the fact we are open to re-evaluate our code\n> and consider code from outside sources. Many open-source efforts have\n> problems with code-not-made-here.\n\nI have been planning to add a full-text index (a.k.a. inverted index) to\npostgres \ntext and array types for some time already. This is the only major index\ntype \nnot yet supported by postgres and currently implemented as an external\nindex in \nour products. I have had a good excuse (for me) for postponing that work\nin \nthe limited size of text datatype (actually the tuples) but AFAIK it is\ngoing \naway in 7.1 ;)\n\nI have done a full-text index for a major national newspaper that has\nworked \nok for several years using python and old (v1.86, pre-sleepycat) BSD DB\ncode - I \nstayed away from 2.x versions due to SC license terms. 
I'm happy to \nhear that BSD DB and postgreSQL storage schemes are designed to be\ncompatible.\n\nBut I still suspect that taking some existing postgres index (most\nlikely btree)\nas base would be less effort than integrating locking/transactions of\nPG/BDB.\n\n------------\nHannu\n", "msg_date": "Tue, 16 May 2000 23:26:13 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "At 09:46 AM 5/17/00 +1000, you wrote:\n\n> What if I am an evil software\n> empire who takes postgresql, renames it, cuts out most of the\n> non-sleepycat code and starts competing with sleepycat?\n\nWe make zero dollars in the SQL market today, so I'm not risking\nanything by promoting this project. Our embedded engine is a very\ndifferent product from your object/relational client/server engine.\nI'm not worried about competition from you guys or from derivatives.\nIt's just not my market.\n\nI'd really like to see PostgreSQL go head-to-head with the established\nproprietary vendors of relational systems. I believe Berkeley DB will\nhelp. If you use it and if you're successful, then I get to brag to\nall my customers in the embedded market about the number of installed\nsites I have worldwide.\n\n> Or what if I am\n> just some company who wants to make proprietry use of sleepycat but\n> don't want to pay the fee?\n\nWe've done these licenses pretty often for other groups. Gnome is an\nexample that was mentioned recently. We do two things:\n\n\t+ Define the embedding application with reasonable precision; and\n\n\t+ Explicitly state that the embedding application may not surface\n\t our interfaces directly to third parties.\n\nThe first bullet keeps bad guys from using the trick you identify.\nThe second keeps bad guys from helping their friends use the trick.\n\nIf we decide we decide that the integration is a good idea, we'll\ndraft a letter and you can get it reviewed by your attorney. I don't\nknow whether PostgreSQL.org has a lawyer, but if not, Great Bridge\nwill probably loan you one. We'll work with you to be sure that the\nlanguage is right.\n\nI don't want to minimize concern on this point. Certainly the\nagreement granting PostgreSQL these rights will require some care.\nBut that's what I do for a living, and I can assure you that you'll\nget a letter that's fair and that lives up to the promises we've\nmade to the list. I don't want to get hung up on the legal issues.\nThose are tractable. It's more important to me to know whether\nthere's a reasonable technical fit.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Tue, 16 May 2000 17:35:42 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "At 10:41 AM 5/17/00 +1000, Chris Bitmead wrote:\n\n> Can you explain in technical terms what \"surface our interfaces\" means?\n\nBasically, it would not be permitted to write trivial wrappers around\nBerkeley DB functions like db_open, simply to permit applications other\nthan PostgreSQL to call them to get around Sleepycat's license terms\nfor Berkeley DB.\n\nI realize that we can argue at length about what constitutes a \"trivial\nwrapper,\" and how much gray area there is around that. We'd write the\nagreement so that there was plenty of room for you to improve PostgreSQL\nwithout violating the terms. 
You'll be able to review the agreement and\nto get legal advice on it.\n\nLet's hold the legal discussion until we decide whether we need to have\nit at all. If there's just no technical fit, we can save the trouble\nand expense of drafting a letter agreement and haggling over terms. If\nthe technical fit is good, then the next hurdle will be the agreement,\nand we can focus on that with our full attention.\n\n\t\t\t\t\tmike\n\n\n\n", "msg_date": "Tue, 16 May 2000 18:03:18 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "At 11:13 AM 5/17/00 +1000, Chris Bitmead wrote:\n\n> Why do you even need wrappers? You just link with libpgbackend.so and\n> call whatever functions you want. Or would the agreement say something\n> like you can only call postgresql functions that are not part of\n> sleepycat?\n\nYes, the agreement would state that only the PostgreSQL application\ncould call the Berkeley DB interfaces directly.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Tue, 16 May 2000 18:33:48 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "> At 11:13 AM 5/17/00 +1000, Chris Bitmead wrote:\n> \n> > Why do you even need wrappers? You just link with libpgbackend.so and\n> > call whatever functions you want. Or would the agreement say something\n> > like you can only call postgresql functions that are not part of\n> > sleepycat?\n> \n> Yes, the agreement would state that only the PostgreSQL application\n> could call the Berkeley DB interfaces directly.\n\nWell then, let's start looking into the options. I know Vadim has a new\nstorage manager planned for 7.2, so he is going to look at the sleepycat\ncode and give us an opinion.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 May 2000 21:49:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > At 11:13 AM 5/17/00 +1000, Chris Bitmead wrote:\n> >\n> > > Why do you even need wrappers? You just link with libpgbackend.so and\n> > > call whatever functions you want. Or would the agreement say something\n> > > like you can only call postgresql functions that are not part of\n> > > sleepycat?\n> >\n> > Yes, the agreement would state that only the PostgreSQL application\n> > could call the Berkeley DB interfaces directly.\n> \n> Well then, let's start looking into the options. I know Vadim has a new\n> storage manager planned for 7.2, so he is going to look at the sleepycat\n> code and give us an opinion.\n\nJust curious, \n\nBut is he *allowed* to view the code? I realize its pseudo-Open\nSource, but to my understanding one cannot even look at GPL code\nwithout fear of \"infecting\" whatever project you may be working\non. If Vadim looks a the code, decides \"Nah, I can do this in two\nweeks time\" and then the overwrite system appears in 7.2 aren't\nthere issues there?\n\nMike Mascaru\n", "msg_date": "Tue, 16 May 2000 23:11:13 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "> Just curious, \n> \n> But is he *allowed* to view the code? 
I realize its pseudo-Open\n> Source, but to my understanding one cannot even look at GPL code\n> without fear of \"infecting\" whatever project you may be working\n> on. If Vadim looks a the code, decides \"Nah, I can do this in two\n> weeks time\" and then the overwrite system appears in 7.2 aren't\n> there issues there?\n\nI have never heard that of GNU code, and I assume it is not true.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 May 2000 23:13:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "At 23:11 16/05/00 -0400, Mike Mascari wrote:\n>\n>But is he *allowed* to view the code? I realize its pseudo-Open\n>Source, but to my understanding one cannot even look at GPL code\n>without fear of \"infecting\" whatever project you may be working\n>on. If Vadim looks a the code, decides \"Nah, I can do this in two\n>weeks time\" and then the overwrite system appears in 7.2 aren't\n>there issues there?\n>\n\nThere used to be, but I'm not sure if the laws have changed. \n\nI *think* the problem was differentiating reverse engineering from copying\n- in the most extreme cases I have heard of companies paying someone to\ndocument the thing to be reverse engineered, then hunting for someone who\nhas never seen the thing in question to actually do the work from the\ndocumentation.\n\nIn this case, Vadim is not planning to reverse engineer SDB, but it *might*\nbe worth getting an opinion or, better, a waiver from Sleepycat...or\ngetting someone else to look into it, depending on how busy Vadim is.\n\nP.S. Isn't it amazing (depressing) how a very generous offer to help from\npeople who want the same outcomes as we do, seems to turn rapidly into\ndiscussions of litigation. This can not be a healthy legal system.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 17 May 2000 13:31:39 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "\"Michael A. Olson\" <[email protected]> writes:\n> Let's hold the legal discussion until we decide whether we need to have\n> it at all.\n\nSeems to me that Mike has stated that Sleepycat is willing to do what\nthey have to do to meet our requirements that Postgres remain completely\nfree. Let's take that as read for the moment and pay more attention to\nthe technical issues. As he says, if there are technical showstoppers\nthen there's no point in haggling over license wording. We can come\nback to the legal details when and if we decide that the idea looks\nlike a good one technically.\n\nI'm going to be extremely presumptive here and try to push the technical\ndiscussion into several specific channels. It looks to me like we've\ngot four major areas of concern technically:\n\n1. MVCC semantics. If we can't keep MVCC then the deal's dead in the\nwater, no question. 
Vadim's by far the best man to look at this issue;\nVadim, do you have time to think about it soon?\n\n2. Where to draw the API line. Berkeley DB's API doesn't seem to fit\nvery neatly into the existing modularization of Postgres. How should\nwe approach that, and how much work would it cost us? Are there parts\nof BDB that we just don't want to use at all?\n\n3. Additional access methods. Mike thinks we could live with using\nBDB's Recno access method for primary heap storage. I'm dubious\n(OK, call me stubborn...). We also have index methods that BDB hasn't\ngot. I'm not sure that our GIST code is being used or is even\nfunctional, but the rtree code has certainly got users. So, how hard\nmight it be to add raw-heap and rtree access methods to BDB?\n\n4. What are we buying for the above work? With all due respect to\nMike, he's said that he hasn't looked at the Postgres code since it\nwas in Berkeley's hands. We've made some considerable strides since\nthen, and maybe moving to BDB wouldn't be as big a win as he thinks.\nOn the other hand, Vadim is about to invest a great deal of work\nin WAL+new smgr; maybe we'd get more bang for the buck by putting\nthe same amount of effort into interfacing to BDB. We've got to\nlook hard at this.\n\nAnyone see any major points that I've missed here?\n\nHow can we move forward on looking at these questions? Seems like\nwe ought to try for the \"quick kill\": if anyone can identify any\nclear showstoppers in any of these areas, nailing down any one will\nend the discussion. As long as we don't find a showstopper it seems\nthat we ought to keep talking about it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 00:16:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license " }, { "msg_contents": "* Michael A. Olson <[email protected]> [000516 18:42] wrote:\n> At 10:41 AM 5/17/00 +1000, Chris Bitmead wrote:\n> \n> > Can you explain in technical terms what \"surface our interfaces\" means?\n> \n> Basically, it would not be permitted to write trivial wrappers around\n> Berkeley DB functions like db_open, simply to permit applications other\n> than PostgreSQL to call them to get around Sleepycat's license terms\n> for Berkeley DB.\n> \n> I realize that we can argue at length about what constitutes a \"trivial\n> wrapper,\" and how much gray area there is around that. We'd write the\n> agreement so that there was plenty of room for you to improve PostgreSQL\n> without violating the terms. You'll be able to review the agreement and\n> to get legal advice on it.\n> \n> Let's hold the legal discussion until we decide whether we need to have\n> it at all. If there's just no technical fit, we can save the trouble\n> and expense of drafting a letter agreement and haggling over terms. 
If\n> the technical fit is good, then the next hurdle will be the agreement,\n> and we can focus on that with our full attention.\n\nNot that I really have any say about it but...\n\nI'm sorry, this proposal will probably lead to hurt on both sides,\none for SleepyCat possibly loosing intellectual rights by signing\ninto a BSDL'd program and another for Postgresql who might feel\nthe aftermath.\n\nMaking conditions on what constitutes a legal derivative of the\nPostgresql engine versus a simple wrapper is so arbitrary that it\nreally scares me.\n\nNow if SleepyCat could/would release under a full BSDL license, or\nif Postgresql didn't have a problem with going closed source there\nwouldn't be a problem, but I don't see either happening.\n\nFinally, all this talk about changing licenses (the whole GPL mess),\nincorperating _encumbered_ code (SleepyCat DB) is concerning to\nsay the least.\n\nAre you guys serious about compromising the codebase and tainting\nit in such a way that it becomes impossible for the people that\nare actively working on it to eventually profit from it?\n\n(with the exception of over-hyped IPO, but even that well has dried up)\n\n-Alfred\n", "msg_date": "Tue, 16 May 2000 21:36:01 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "Tom Lane wrote:\n> \n> \n> 3. Additional access methods. Mike thinks we could live with using\n> BDB's Recno access method for primary heap storage. I'm dubious\n> (OK, call me stubborn...). We also have index methods that BDB hasn't\n> got. I'm not sure that our GIST code is being used or is even\n> functional,\n\nThere was some discussion about GIST in 6.x and at least some people \nseemed to use it for their specific needs.\n\n> but the rtree code has certainly got users. So, how hard\n> might it be to add raw-heap and rtree access methods to BDB?\n>\n\nWhat if we go ahead an add R-tree and Gist and whatnot to DBD, \nwon't they become property of Sleepycat licensable for business \nuse by them only ? \n\nDitto for other things we may need to push inside BDB to keep good \nstructure.\n\n> \n> Anyone see any major points that I've missed here?\n> \n\nI still feel kind of eery about making some parts of code \nproprietary/GPL/BDB-PL and using PostgreSQL only for SQL layer \nand not storage.\n\nWe should probably also look at the pre-Sleepycat (v 1.x.x) \nBDB code that had the original Berkeley license and see what we \ncould make of that. \n\nIt does not have transaction support, but we already have MVCC.\n\nIt does not have page size restrictions either ;)\n\nAnd using even it still gives some bragging rights to Sleepycat ;)\n\n---------\nHannu\n", "msg_date": "Wed, 17 May 2000 11:23:07 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "on 5/16/00 9:57 AM, Michael A. Olson at [email protected] wrote:\n\n> The group is interested in multi-version concurrency control, so that\n> readers never block on writers. If that's genuinely critical, we'd\n> be willing to see some work done to add it to Berkeley DB, so that it\n> can do either conventional 2PL without versioning, or MV. 
Naturally,\n> we'd participate in any kind of design discussions you wanted, but\n> we'd like to see the PostgreSQL implementors build it, since you\n> understand the feature you want.\n\nI don't think this point has been made strongly enough yet: readers never\nblocking writers AND writers never blocking readers are *critical* to any\nserious web application. It is one of the main reasons (besides marketing)\nwhy Oracle crushed Informix and Sybase in the web era.\n\nOracle solves this problem using its rollback segments to pull out old data\n(sometimes resulting in the nasty \"snapshot too old\" error if a transaction\nis trying to pull out data so old that it has been removed from the rollback\nsegment!), and Postgres uses MVCC (which is much cleaner IMHO).\n\nAs a user of databases for web applications only, I (and many others looking\nto Postgres for a serious Open-Source Oracle replacement) would be forced to\ncompletely drop Postgres if this (backwards) step were taken.\n\nIf you love gory details about this web & locking stuff, check out\nhttp://photo.net/wtr/aolserver/introduction-2.html\n(search for Postgres and read on from there).\n\n-Ben\n\n", "msg_date": "Wed, 17 May 2000 10:29:13 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "I know people are still reviewing the SDB implementation for PostgreSQL,\nbut I was thinking about it today.\n\nThis is the first time I realized how efficient our current system is. \nWe have shared buffers that are mapped into the address space of each\nbackend. When a table is sequentially scanned, buffers are loaded into\nthat area and the backend accesses that 8k straight out of memory. If I\nremember the optimizations I added, much of that access uses inlined\nfunctions (macros) meaning the buffers are scanned at amazing speeds. I\nknow inlining a few of those functions gained a 10% speedup.\n\nI wonder how SDB performs such file scans. Of course, the real trick is\ngetting those buffers loaded faster. For sequential scans, the kernel\nprefetch does a good job, but index scans that hit many tuples have\nproblems, I am sure. ISAM helps in this regard, but I don't see that\nSDB has it.\n\nThere is also the Linux problem of preventing read-ahead after an\nseek(), while the BSD/HP kernels prevent prefetch only when prefetch\nblocks remain unused.\n\nAnd there is the problem of cache wiping, where a large sequential scan\nremoves all other cached blocks from the buffer. I don't know a way to\nprevent that one, though we could have large sequential scans reuse\ntheir own buffer, rather than grabbing the oldest buffer.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 May 2000 00:14:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "On Sat, 20 May 2000, Bruce Momjian wrote:\n\n> And there is the problem of cache wiping, where a large sequential scan\n> removes all other cached blocks from the buffer. 
I don't know a way to\n> prevent that one, though we could have large sequential scans reuse\n> their own buffer, rather than grabbing the oldest buffer.\n\nOn some systems, you can specify (or hint) to the kernel that the file you\nare reading should not be buffered.\n\nThe only (completely) real solution for this is to use raw devices,\nuncached by the kernel, without any filesystem overhead... \n\nAre there any plans to support that?\n\n", "msg_date": "Sat, 20 May 2000 00:31:02 -0400 (EDT)", "msg_from": "Alex Pilosov <[email protected]>", "msg_from_op": false, "msg_subject": "Raw devices (was Re: Berkeley DB license)" }, { "msg_contents": "> On Sat, 20 May 2000, Bruce Momjian wrote:\n> \n> > And there is the problem of cache wiping, where a large sequential scan\n> > removes all other cached blocks from the buffer. I don't know a way to\n> > prevent that one, though we could have large sequential scans reuse\n> > their own buffer, rather than grabbing the oldest buffer.\n> \n> On some systems, you can specify (or hint) to the kernel that the file you\n> are reading should not be buffered.\n\nWell, I was actually thinking of the cache wiping that happens to our\nown PostgreSQL shared buffers, which we certainly do control.\n\n> The only (completely) real solution for this is to use raw devices,\n> uncached by the kernel, without any filesystem overhead... \n\nWe are not sure if we want to go in that direction. Commercial vendors\nhave implemented it, but the gain seems to be minimal, especially with\nmodern file systems. Trying to duplicate all the disk buffer management\nin our code seems to be of marginal benefit. We have bigger fish to\nfry, as the saying goes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 May 2000 00:42:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raw devices (was Re: Berkeley DB license)u" }, { "msg_contents": "> The only (completely) real solution for this is to use raw devices,\n> uncached by the kernel, without any filesystem overhead...\n> Are there any plans to support that?\n\nNo specific plans. afaik no \"anti-plans\" either, but the reason that\nwe don't do this yet is that it isn't clear to all of us that this\nwould be a real performance win. If someone wanted to do it as a\nproject that would result in a benchmark, that would help move things\nalong...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 20 May 2000 04:51:41 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raw devices (was Re: Berkeley DB license)" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> The only (completely) real solution for this is to use raw devices,\n>> uncached by the kernel, without any filesystem overhead...\n>> Are there any plans to support that?\n\n> No specific plans. afaik no \"anti-plans\" either, but the reason that\n> we don't do this yet is that it isn't clear to all of us that this\n> would be a real performance win.\n\n... 
whereas it *is* clear that it would be a portability loss ...\n\n> If someone wanted to do it as a project that would result in a\n> benchmark, that would help move things along...\n\nI think we'd want to see some indisputable evidence that there'd be\na substantial gain in the Postgres context. We could be talked into\nliving with the portability issues if the prize is worthy enough;\nbut that is unproven as far as I've seen.\n\nAt the moment, we have a long list of known performance gains that we\ncan get without any portability compromise (for example, the lack of\npg_index caching that we were getting our noses rubbed in just this\nmorning). So I think none of the key developers feel particularly\nexcited about raw I/O. There's lots of lower-hanging fruit.\n\nStill, if you want to pursue it, be our guest. The great thing\nabout open-source software is there's room for everyone to play.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 May 2000 01:20:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raw devices (was Re: Berkeley DB license) " }, { "msg_contents": "Hi,\n\nAlex Pilosov:\n> The only (completely) real solution for this is to use raw devices,\n> uncached by the kernel, without any filesystem overhead... \n> \n...and with no OS caching _at_all_.\n\n> Are there any plans to support that?\n> \nIMHO it's interesting to note that even Oracle, which used to be one of\nthe \"you gotta use a raw partition if you want any speed at all\" guys,\nhas moved into the \"use a normal partition or a regular file unless you\ndo things like sharing a RAID between two hosts\" camp.\n\nOr so I've been told a year or so ago.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nLawsuit (noun) --\n A machine which you go into as a pig and come out as a sausage.\n --Ambrose Bierce\n", "msg_date": "Sat, 20 May 2000 09:57:38 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raw devices (was Re: Berkeley DB license)" }, { "msg_contents": "> Hi,\n> \n> Alex Pilosov:\n> > The only (completely) real solution for this is to use raw devices,\n> > uncached by the kernel, without any filesystem overhead... \n> > \n> ...and with no OS caching _at_all_.\n> \n> > Are there any plans to support that?\n> > \n> IMHO it's interesting to note that even Oracle, which used to be one of\n> the \"you gotta use a raw partition if you want any speed at all\" guys,\n> has moved into the \"use a normal partition or a regular file unless you\n> do things like sharing a RAID between two hosts\" camp.\n\nYes, we noticed that. We are glad we didn't waste time going in that\ndirection.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 May 2000 07:29:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raw devices (was Re: Berkeley DB license)" }, { "msg_contents": "At 12:14 AM 5/20/00 -0400, Bruce Momjian wrote:\n\n> We have shared buffers that are mapped into the address space of each\n> backend. [ ... ] \n> I wonder how SDB performs such file scans.\n\nBerkeley DB is an embedded toolkit, and works hard to provide mechanism\nin place of policy. 
That is to say, you can request memory-mapped\naccess to database files, or you can use a shared memory buffer cache.\nWe give you the tools to do what you want. We try not to force you to\ndo what we want.\n\nWe don't do query processing, we do fast storage and retrieval. Query\nplanning and optimization, including access path selection and buffer\nmanagement policy selection, get built on top of Berkeley DB. We offer\na variety of ways, via the API, to control the behavior of the lock,\nlog, shmem, and transaction subsystems.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Sat, 20 May 2000 09:56:59 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" } ]
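To make the division of labor discussed in this thread concrete, here is a minimal sketch of how a backend might bring up the Berkeley DB subsystems Olson lists (shared buffer cache, locking, logging, transactions) and store one record in a Recno database, the access method suggested as a stand-in for heap storage. It assumes the circa-2000 Berkeley DB 3.x C API, including the function-style txn_begin/txn_commit of that era (later releases moved these onto the environment handle); the environment path, file name, and record payload are invented for illustration, and error handling is reduced to returning the Berkeley DB error code.

    #include <string.h>
    #include <db.h>             /* Berkeley DB; 3.x-era interface assumed */

    int
    bdb_demo(void)
    {
        DB_ENV     *env;
        DB         *db;
        DB_TXN     *txn;
        DBT         key, data;
        db_recno_t  recno;
        int         ret;

        /* Environment with mpool, lock, log, and txn subsystems. */
        if ((ret = db_env_create(&env, 0)) != 0)
            return ret;
        if ((ret = env->open(env, "/usr/local/pgsql/bdb",
                             DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
                             DB_INIT_LOG | DB_INIT_TXN, 0600)) != 0)
            return ret;

        /* A record-number ("recno") database: one row per record number. */
        if ((ret = db_create(&db, env, 0)) != 0)
            return ret;
        if ((ret = db->open(db, "heap.db", NULL, DB_RECNO,
                            DB_CREATE, 0600)) != 0)
            return ret;

        /* Insert one "tuple" under write-ahead logging. */
        if ((ret = txn_begin(env, NULL, &txn, 0)) != 0)
            return ret;
        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        recno = 1;
        key.data = &recno;
        key.size = sizeof(recno);
        data.data = "tuple image goes here";
        data.size = sizeof("tuple image goes here");
        if ((ret = db->put(db, txn, &key, &data, 0)) != 0)
            return ret;
        return txn_commit(txn, 0);
    }
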
[ { "msg_contents": "> > I've read this paper ~2 years ago. My plans so far were:\n> > \n> > 1. WAL in 7.1\n> > 2. New (overwriting) storage manager in 7.2\n> > \n> \n> Oh, so Vadim has overwriting storage manager concept for 7.2.\n> Vadim, how will you keep old rows around for MVCC?\n\nJust like you told about it - some outstanding files for old\nversions. Something like Oracle' rollback segments.\nAnd, for sure, this will be the most complex part of smgr and\nthat's why I think that we can't use their smgr if we're\ngoing to keep MVCC.\n\nAs for WAL, WAL itself (as collection of routines to log changes,\ncreate checkpoints etc) is 90% done. Now it has to be integrated\ninto system and the most hard part of this work are access methods\nspecific redo/undo functions. If we're going to use our access\nmethods then we'll have to write these functions for no matter\nwhat WAL implementation will be used.\n\nVadim\n \n", "msg_date": "Mon, 15 May 2000 13:23:29 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL versus Postgres (or: what goes around, comes ar\n\t ound)" } ]
[ { "msg_contents": "> Perhaps what you are talking about is at so low a level that it\n> has no influence on these features...but if not then it might be \n> that the writer of a WAL will want to write an implementation of\n> the storage manager that is well integrated with the WAL.\n\nYes, I would like to do this, if everyone agreed to wait for\n7.2. Actually, I'm not sure if we're able to make both smgr\nand WAL in 7.1\n\nVadim\n", "msg_date": "Mon, 15 May 2000 13:27:07 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal: replace no-overwrite with Berkeley DB" }, { "msg_contents": "> > Perhaps what you are talking about is at so low a level that it\n> > has no influence on these features...but if not then it might be\n> > that the writer of a WAL will want to write an implementation of\n> > the storage manager that is well integrated with the WAL.\n> Yes, I would like to do this, if everyone agreed to wait for\n> 7.2. Actually, I'm not sure if we're able to make both smgr\n> and WAL in 7.1\n\nistm that future work on distributed databases would require some\ngeneric API layer, perhaps identical to the current smgr layer or\nperhaps something higher up. Maybe an alternate local storage scheme\ncould plug into that same interface, much as storage managers used to\ndo.\n\nIf this is accurate, then someone could demonstrate the sleepycat code\nwithout having to impact other parts of the code?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 16 May 2000 01:35:48 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> istm that future work on distributed databases would require some\n> generic API layer, perhaps identical to the current smgr layer or\n> perhaps something higher up. Maybe an alternate local storage scheme\n> could plug into that same interface, much as storage managers used to\n> do.\n> If this is accurate, then someone could demonstrate the sleepycat code\n> without having to impact other parts of the code?\n\nAs far as I can tell, the current smgr switch is at a much lower level\nthan the Berkeley DB API is --- smgr's API involves reading and writing\ndisk blocks, and the contents of those blocks is the concern of higher\nlevels like bufmgr and the various access methods. BDB would want to\nreplace most of the access-method layer, not to mention bufmgr, lockmgr,\na lot of the shmem code, and maybe parts of transam. We don't have a\nsingle API that covers that territory, and I'm not sure it'd be\nreasonable to try to make one.\n\nAnother problem is that we've been kinda sloppy about preserving purity\nof the APIs that we have --- for example, relation rename ought to be\ndone via an smgr call, but instead it's been hacked into higher-level\ncode. Cleaning up that sort of thing would be a good idea in any case,\nbut it's just part of the work you'd have to do before you could think\nabout plugging in BDB.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 22:03:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB " } ]
[ { "msg_contents": "> > I've read this paper ~2 years ago. My plans so far were:\n> > \n> > 1. WAL in 7.1\n> > 2. New (overwriting) storage manager in 7.2\n> \n> Will we still have MVCC ?\n\nI'm personally surely not one who would like to remove it -:)\n\nVadim\n", "msg_date": "Mon, 15 May 2000 13:36:57 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: WAL versus Postgres (or: what goes around, comes ar\n\tound)" } ]
[ { "msg_contents": "> Another option is to keep our heap table structure intact, and just\n> Sleepycat DB for our indexes. That may be a big win, with little\n> downside. Certainly something to think about. It may work \n> better with MVCC, and allow fast sequential scans and fast heap\n> access from the indexs, without having to go through the db structures\n> to get to it.\n\nBut... we will still have to implement new smgr for tables and\nredo/undo functions for heap access method.\n\nVadim\n", "msg_date": "Mon, 15 May 2000 13:39:41 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal: replace no-overwrite with Berkeley DB" } ]
[ { "msg_contents": "I'm just upgrading a database to v7.0, but I am rather unlucky in that in\neach of 2 large (ca. 20000 rows) tables, 1 row is too big eg:\n\nERROR: copy: line 57552, Tuple is too big: size 8152, max size 8140\n\nNow, it would help me if I could actually get a clue as to which row this\nis. The error message comes from hio.c:118 at which point one has a\nHeapTuple. Is there a way of extracting a piece of the row in the tuple to\nbe able to identify it?\n\nIt seems that a HeapTuple starts with a HeapTupleHeader, so the data I\nsuppose starts at tuple->t_data[sizeof(HeapTupleHeaderData)] and goes on for\ntuple->t_len-sizeof(HeapTupleHeaderData) bytes, but how is it represented?\n\nAny pointers appreciated! (These are huge COPY statements, so after the\nerror I'm left with an empty table - I'd just like to chop the 12 bytes\noff!)\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 15 May 2000 23:29:57 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": true, "msg_subject": "reading row in backend" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n> ERROR: copy: line 57552, Tuple is too big: size 8152, max size 8140\n\n> Now, it would help me if I could actually get a clue as to which row this\n> is.\n\nUm ... doesn't that line number in the COPY source data help any?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 18:55:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reading row in backend " }, { "msg_contents": "On Mon, May 15, 2000 at 06:55:50PM -0400, Tom Lane wrote:\n> Patrick Welche <[email protected]> writes:\n> > ERROR: copy: line 57552, Tuple is too big: size 8152, max size 8140\n> \n> > Now, it would help me if I could actually get a clue as to which row this\n> > is.\n> \n> Um ... doesn't that line number in the COPY source data help any?\n\nUnfortunately not - in fact the 2nd large tuple allegedly has a smaller\nline number than the first :(\n\nAm I barking up the wrong tree if I think that the HeapTuple contains the\nactual data? The first number in it would the primary key...\n\nCheers,\n\nPatrick\n", "msg_date": "Tue, 16 May 2000 00:09:00 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reading row in backend" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n>> Um ... doesn't that line number in the COPY source data help any?\n\n> Unfortunately not - in fact the 2nd large tuple allegedly has a smaller\n> line number than the first :(\n\nHmm, are you claiming that COPY is reporting a bogus line number?\nIf so, that needs to be looked into.\n\n> Am I barking up the wrong tree if I think that the HeapTuple contains the\n> actual data? The first number in it would the primary key...\n\nType HeapTuple is a pointer to a HeapTupleData, which is just\nadministrative overhead. The t_data field of the HeapTupleData\npoints at the actual tuple (a HeapTupleHeaderData followed by\nnull-values bitmap and then the user data). See include/access/htup.h\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 19:22:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reading row in backend " }, { "msg_contents": "On Mon, May 15, 2000 at 07:22:30PM -0400, Tom Lane wrote:\n> Patrick Welche <[email protected]> writes:\n> >> Um ... 
doesn't that line number in the COPY source data help any?\n> \n> > Unfortunately not - in fact the 2nd large tuple allegedly has a smaller\n> > line number than the first :(\n> \n> Hmm, are you claiming that COPY is reporting a bogus line number?\n> If so, that needs to be looked into.\n\nYup:\n\nERROR: copy: line 57552, Tuple is too big: size 8152, max size 8140\nPQendcopy: resetting connection\nERROR: copy: line 26714, Tuple is too big: size 8164, max size 8140\nPQendcopy: resetting connection\n\n% wc /home/prlw1/db.out\n 3631449 19833180 186847533 /home/prlw1/db.out\n\nand the second line is in a table that is over half way through the file.\n\nI'll take a look after some sleep..\n\n> > Am I barking up the wrong tree if I think that the HeapTuple contains the\n> > actual data? The first number in it would the primary key...\n> \n> Type HeapTuple is a pointer to a HeapTupleData, which is just\n> administrative overhead. The t_data field of the HeapTupleData\n> points at the actual tuple (a HeapTupleHeaderData followed by\n> null-values bitmap and then the user data). See include/access/htup.h\n\nI think my question really is, is the user data meant to be just text? When\nI tried printing a block, I just got jibberish - it could well be the way I\ntried to print it, but I don't know what to expect..\n\nCheers,\n\nPatrick\n", "msg_date": "Tue, 16 May 2000 00:37:02 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reading row in backend" }, { "msg_contents": "Patrick Welche <[email protected]> writes:\n>> Hmm, are you claiming that COPY is reporting a bogus line number?\n>> If so, that needs to be looked into.\n\n> Yup:\n\n> ERROR: copy: line 57552, Tuple is too big: size 8152, max size 8140\n> PQendcopy: resetting connection\n> ERROR: copy: line 26714, Tuple is too big: size 8164, max size 8140\n> PQendcopy: resetting connection\n\n> % wc /home/prlw1/db.out\n> 3631449 19833180 186847533 /home/prlw1/db.out\n\n> and the second line is in a table that is over half way through the file.\n\nLooks reasonable enough to me. The line numbers are within the data\nbeing fed to that copy command --- copy has no way to know that it's\npart of some larger script...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 20:44:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reading row in backend " }, { "msg_contents": "On Mon, May 15, 2000 at 08:44:03PM -0400, Tom Lane wrote:\n> Patrick Welche <[email protected]> writes:\n> >> Hmm, are you claiming that COPY is reporting a bogus line number?\n> >> If so, that needs to be looked into.\n> \n> > Yup:\n> \n> > ERROR: copy: line 57552, Tuple is too big: size 8152, max size 8140\n> > PQendcopy: resetting connection\n> > ERROR: copy: line 26714, Tuple is too big: size 8164, max size 8140\n> > PQendcopy: resetting connection\n> \n> > % wc /home/prlw1/db.out\n> > 3631449 19833180 186847533 /home/prlw1/db.out\n> \n> > and the second line is in a table that is over half way through the file.\n> \n> Looks reasonable enough to me. The line numbers are within the data\n> being fed to that copy command --- copy has no way to know that it's\n> part of some larger script...\n\nAh! Thank you! So it's 57552 rows down from \"COPY\" !\n\nOK\n\nCheers,\n\nPatrick\n", "msg_date": "Tue, 16 May 2000 10:42:05 +0100", "msg_from": "Patrick Welche <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reading row in backend" } ]
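For anyone chasing the same problem: the offending row can also be decoded right at the "Tuple is too big" elog in hio.c using heap_getattr(), which takes care of the null bitmap and attribute alignment. A hypothetical debugging patch along these lines, assuming the Relation is in scope at that point and that attribute 1 is an int4 primary key (substitute the appropriate DatumGet macro, or the type's output function, otherwise):

    {
        TupleDesc tupdesc = RelationGetDescr(relation);
        bool      isnull;
        Datum     d = heap_getattr(tuple, 1, tupdesc, &isnull);

        if (!isnull)
            elog(NOTICE, "oversize tuple: first attribute = %d",
                 DatumGetInt32(d));
    }
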
[ { "msg_contents": "Hi all,\n\nUnder current non-overwrite storage manager,PostgreSQL always has\nto insert index tuples corresponding to a new updated heap tuple even\nwhen the key values are invariant. This is a big pain for frequently\nupdated tables. Can we omit the insertion when the key values are\ninvariant ? We could follow update chain of heap tuples since MVCC\nwas introduced.\nIn addtion we may be able to update heap-TID member of an index\ntuple to the latest TID in the middle of index access without vacuuming.\n\nComments anyone ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 16 May 2000 09:12:20 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Insertion of index tuples are necessary ?" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Under current non-overwrite storage manager,PostgreSQL always has\n> to insert index tuples corresponding to a new updated heap tuple even\n> when the key values are invariant. This is a big pain for frequently\n> updated tables. Can we omit the insertion when the key values are\n> invariant ? We could follow update chain of heap tuples since MVCC\n> was introduced.\n\nHmm ... I'm worried about whether the performance gain in that case\nwon't be lost in the case where the key values *do* change. When\nyou look up a tuple using the index entry, and find it's been updated,\nyou would not know whether the updated version has the same key or not.\nYou'd have to chase the update pointer to find the current version,\nand then check to see if its key fields are the same or not. If they're\nnot, you just wasted a fetch cycle. Whether you win overall depends\nstrongly on how many updates preserve the key fields vs change them.\n\n> In addtion we may be able to update heap-TID member of an index\n> tuple to the latest TID in the middle of index access without vacuuming.\n\nYou might be able to buy back the performance by also modifying index\ntuples in-place to point to the latest version of their heap tuple,\nso that the excess fetch only happens once. But that introduces a\nwhole bunch of issues like locking of the index. (I suspect that if\nit were easy to do this, we'd already be marking dead index tuples\nas invalid --- but we're not...)\n\n> Comments anyone ?\n\nWAL and the new storage manager will certainly change these\nconsiderations quite a bit. I'm inclined not to put much work into\noptimizations like this that may be obsolete very soon. Let's wait\nand see how things look once the WAL stuff is done...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 May 2000 20:58:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insertion of index tuples are necessary ? " } ]
[ { "msg_contents": "Announcing RedHat RPMs for PostgreSQL 7.0\n\nNEW LOCATION!\nftp://ftp.postgresql.org/pub/binary/v7.0/redhat-RPM\n\nPlease read the README in that directory before downloading. That same\nfile is installed as part of the postgresql-7.0-1.i386.rpm as\n/usr/doc/postgresql-7.0/README.rpm.\n\nOr, you may go to http://www.ramifordistat.net/postgres as usual.\n\n**********NOTE***********\nAn initdb is REQUIRED for an upgrade from prior to Postgresql-7.0RC5.\n--oldpackage may be required to upgrade from 7.0 beta releases or\nrelease candidate RPMS.\n\nPlease let me know about any problems you may have.\n\nMy apologies for the delay in getting this RPM out -- my 'Real World'\nwork schedule drastically interferred with my PostgreSQL schedule, and\nwill continue to do so until at least May 24th. For the curious, I am a\nbroadcast engineer, and am in the middle of a transmitter site move.\n\nNOTE: There are no Linux/alpha patches in these RPMs. Hopefully, that\nsituation will soon be rectified in a -2 RPMset. However, I _did_ get\nthe 7.0 JDBC jar's in....\n\nHope you enjoy the RPMs!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 15 May 2000 22:52:42 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "RPMS for 7.0 final." }, { "msg_contents": "> Announcing RedHat RPMs for PostgreSQL 7.0\n> NEW LOCATION!\n> ftp://ftp.postgresql.org/pub/binary/v7.0/redhat-RPM\n\nI've posted RPMs built for RH-5.2 at the same site. These have *not*\nbeen tested, since I don't have an independent machine to try an\ninstallation. However, they use a verbatim copy of Lamar's spec file,\nso should be just fine.\n\nPlease let us know if you find any problems.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 16 May 2000 14:23:32 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RPMS for 7.0 final." } ]
[ { "msg_contents": "hackers:\n\nI got following mail:\n\n> \tI was just looking over the PostgreSQL 7.0 docs and noticed that there\n> doesn't seem to be any new features for Unicode support. I wanted to verify\n> if this is true?\n> \n> \tRight now we have a database that must support many different languages.\n> This works ok when we use UTF8 but the problem is that we do not know how\n> many characters the text will be. I was hoping that PG7.0 would support true\n> Unicode (2 byte) instead of just UTF8. Do you know if there is any plan to\n> support plain Unicode?\n\nI think supporting \"true Unicode (2 byte)\" (probably that means UCS-2)\nis not that easy since it includes '\\0'. We need to fix at least:\n\n\tthe parser\n\tlibpq\n\tpsql\n\tall client programs ...\n\nAnother idea might be doing a conversion between UTF-8 and UCS-2\nsomewhere between frontend and backend. However we still need to fix:\n\n\tlibpq\n\tpsql\n\tall client programs ...\n\nin this case. Any idea?\n\nBy the way, does anobody know what's wrong with UTF-8? In my\nunderstanding UTF-8 and UCS-2 are logically identical.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 16 May 2000 13:53:47 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and Unicode" } ]
[ { "msg_contents": "On Mon, May 15, 2000 at 09:53:34AM -0300, The Hermit Hacker wrote:\n> On Mon, 15 May 2000, Hannu Krosing wrote:\n> > > Everythingn up to here sounds great ... but this part here totally throws\n> > > me off ... this would mean that, unlike now where we rely on *zero*\n> > > external code, we essentially gut our internals and create API-links to\n> > > someone else's code ...\n> > \n> > And what will happen to R-trees and Gist access methods ? \n> > \n> > Will we implement them on top of Berkeley DB :) ?\n> > \n> > OTOH, it may be good to have a clearly defined API to our own storage \n> > manager that could possibly be used separately from the rest of PostgreSQL.\n> > \n> > So people who can't accept/afford BSDDB licence could use ours instead.\n> \n> Hrmmm, some sort of --with-berkeley-db configure switch, so by default, it\n> uses ours, but if someone wants to do the db code, it could plug-n-play?\n> \n> That one has possibilities ...\n\n\tI have not looked at what kind of API would be required, but this\nwould be my favorite. It sould be nice to be able to choose different\nstorage methods, optimized for different data patterns or even for\ndifferent media.\n\n-- \nAdam Haberlach |\"You have to understand that the\[email protected] | entire 'Net is based on people with\nhttp://www.newsnipple.com/ | too much free time on their hands.\"\n", "msg_date": "Mon, 15 May 2000 22:35:37 -0700", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: replace no-overwrite with Berkeley DB" } ]
[ { "msg_contents": "> \tMy understanding of the problem is UTF8 is this. Functionally, it is\n> equivalent to UCS-2, that is you can encode any Unicode character in UTF-8\n> that you could encode in UCS-2.\n> \tThe problem we've run into is only related to Postgres. For example we had\n> a field that was fixed at 20 characters. If we put in ASCII then we could\n> put in all 20 characters. If we put in UTF8 encoded Japanese then (depending\n> on which characters were used) we got about 3 UTF8 characters for each\n> Japanese character. Aside from going from 20 characters to 7 (*problem #1*)\n> we also now have unpredictable behavior. Some characters, like Japanese,\n> were 3:1 ratio when encoding. UTF8 can go as high as 6:1 encoding ratio for\n> some language (I don't know which off hand) this is *problem #2*. Finally,\n> as a side affect of this, the string was just truncated so we sometimes got\n> only a partial UTF8 character in the database. This made the unencoding\n> either fail or produce weird results (*problem #3*).\n\nYes, I have noticed this problem too. But don't we have same problem\nwith UCS-2, with 2:1 ratio, then? I think we should fix this in the\nway:\n\tchar(10) should means 10 letters, not 10 bytes no matter what\n\tencoding we use\n\nI will tackle this problem for 7.1.\n\nHow do you think, Rainer? Are you still unhappy with the solution\nabove?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 16 May 2000 16:08:55 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "RE: PostgreSQL and Unicode" } ]
[ { "msg_contents": "> \tThis sounds good. I agree that char(x) should mean x letters, not x bytes.\n> \n> \tIf this could be done in 7.1 that would be great! That means about 2 weeks,\n> right?! ;-)\n\nNo no:-) you must be talking about 7.0.1. I think that fix would\nintroduce some data format imcompatibility that is not allowed in the\nminor version ups.\n\n> P.S. Can anyone point me to the right person to ask regarding a problem\n> we've been having with postmaster processes not going away. It seems to be\n> related to JDBC although I've heard of PHP people having similar problems.\n\nCan you tell us more about \"postmaster processes not going away\"\nproblem?\n--\nTatsuo Ishii\n\n", "msg_date": "Tue, 16 May 2000 17:01:05 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "RE: PostgreSQL and Unicode" } ]
[ { "msg_contents": "Hi,\n\nI originally posted this problem to the interfaces list but have not had any\nresponses. I would like to resolve this as pgAdmin cannot manage users or\ndatabases whilst this problem exists:\n\nI have a problem with the use of CREATE/ALTER/DROP USER/DATABASE via ODBC\nwhich was not there in v6.x.x. Any code that executes any of the SQL listed\nresults in an error along the lines of:\n\nERROR: DROP DATABASE: May not be called in a transaction block\n\nThe ODBC log (and knowledge that it isn't pgAdmin or M$ ADO) shows that the\nODBC driver is automatically wrapping the query in a transaction. \n\nconn=47987408, query='BEGIN'\nconn=47987408, query='DROP DATABASE \"matt\"'\nERROR from backend during send_query: 'ERROR: DROP DATABASE: May not be\ncalled in a transaction block'\nconn=47987408, query='COMMIT'\nSTATEMENT ERROR: func=SC_execute, desc='', errnum=1, errmsg='Error while\nexecuting the query'\n \n------------------------------------------------------------\n hdbc=47987408, stmt=49221232, result=0\n manual_result=0, prepare=0, internal=0\n bindings=0, bindings_allocated=0\n parameters=0, parameters_allocated=0\n statement_type=6, statement='DROP DATABASE \"matt\"'\n stmt_with_params='DROP DATABASE \"matt\"'\n data_at_exec=-1, current_exec_param=-1, put_data=0\n currTuple=-1, current_col=-1, lobj_fd=-1\n maxRows=0, rowset_size=1, keyset_size=0, cursor_type=0,\nscroll_concurrency=1\n cursor_name='SQL_CUR02EF0E70'\n ----------------QResult Info\n-------------------------------\nCONN ERROR: func=SC_execute, desc='', errnum=110, errmsg='ERROR: DROP\nDATABASE: May not be called in a transaction block'\n ------------------------------------------------------------\n henv=47987392, conn=47987408, status=1, num_stmts=16\n sock=47980304, stmts=47980352, lobj_type=27904\n ---------------- Socket Info -------------------------------\n socket=488, reverse=0, errornumber=0, errormsg='(NULL)'\n buffer_in=47993744, buffer_out=47997848\n buffer_filled_in=3, buffer_filled_out=0, buffer_read_in=2\nconn=47987408, SQLDisconnect\n\nAny thoughts/suggestions would be welcomed!!\n\nRegards, \n \nDave. \n \n-- \n\"If you stand still, sooner or later something will eat you.\"\n - James Burke\nhttp://www.vale-housing.co.uk/ (Work)\nhttp://www.pgadmin.freeserve.co.uk/ (Home of pgAdmin) \n", "msg_date": "Tue, 16 May 2000 09:17:21 -0000", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC & v7.0(Rel) Errors with Users and Databases" }, { "msg_contents": "Dave Page writes:\n\n> ERROR: DROP DATABASE: May not be called in a transaction block\n\nThis command can't be rolled back so you aren't allowed to try. This was\nthought as an improvement. In general, a database isn't a database object\nso one shouldn't be transacting around with them. (Same goes for users.)\n\n> The ODBC log (and knowledge that it isn't pgAdmin or M$ ADO) shows that the\n> ODBC driver is automatically wrapping the query in a transaction. \n\nI don't know anything about ODBC but it certainly should provide a means\nto execute a command without that wrapping block. Is this a special\nfunction or do you just execute some exec(\"DROP DATABASE\") style call?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 17 May 2000 19:18:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC & v7.0(Rel) Errors with Users and Databases" } ]
[ { "msg_contents": "> Regarding the \"postmaster processes not going away\" problem...\n> \n> We're developing a Java application that connects to a PostgreSQL db. During\n> our development process we debug the code and sometimes kill the program in\n> the middle of the run. Sometimes this means that an open Connection to the\n> database is not properly closed. Now, I realize that this is an unfriendly\n> thing to do to PG but I would think it would eventually recover. What\n> happens instead is that the postmaster/postgres process that was handling\n> that connection never terminates. I have seen processes that are more than 2\n> weeks old before we noticed and restarted postmaster manually.\n> \n> The problem is that eventually PG runs out of connections and stops allowing\n> new ones.\n> \n> So, is there a way to tell PG to timeout unused connections after some\n> specified time? I've looked through all the docs and could not find anything\n> like this. I realize that this is a difficult issue because if there is an\n> unresolved transaction what do you do with it. I guess all you could do is\n> roll it back.\n> \n> Any other suggestions? If not, can I request this as a future feature?\n> Although our problems are happening during debugging, they could happen\n> during deployment given a hardware problem or, *gasp*, a bug in our code.\n\nWhat about adding KEEPALIVE option to the socket? This would take a\nwhile to detect orphaned socket, though.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 16 May 2000 18:30:48 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "RE: PostgreSQL and Unicode" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Regarding the \"postmaster processes not going away\" problem...\n\n> What about adding KEEPALIVE option to the socket?\n\nOf course, since whatever OS he's using on the client side is too broken\nto notice that the socket is orphaned and close it, it might be so\nbroken as to respond to the keepalive pings :-(. Still, it'd be an easy\nthing to try...\n\nEven though the stated case sounds more like an OS bug than anything\nelse, setting KEEPALIVE on our TCP connections is probably still a good\nidea. If the client machine were to crash completely then it wouldn't\nbe reasonable to expect it to close the connection, and we'd want to\nhave some method of ensuring that the connected backend shuts down\neventually. KEEPALIVE seems sufficiently low-overhead (and easy to\nimplement) to be the right answer for this scenario.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 May 2000 10:19:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: PostgreSQL and Unicode " }, { "msg_contents": "> > What about adding KEEPALIVE option to the socket?\n> \n> Of course, since whatever OS he's using on the client side is too broken\n> to notice that the socket is orphaned and close it, it might be so\n> broken as to respond to the keepalive pings :-(. Still, it'd be an easy\n> thing to try...\n> \n> Even though the stated case sounds more like an OS bug than anything\n> else, setting KEEPALIVE on our TCP connections is probably still a good\n> idea. If the client machine were to crash completely then it wouldn't\n> be reasonable to expect it to close the connection, and we'd want to\n> have some method of ensuring that the connected backend shuts down\n> eventually. 
KEEPALIVE seems sufficiently low-overhead (and easy to\n> implement) to be the right answer for this scenario.\n\nOk. Here are patches against 7.0. BTW, does this break some platforms\nsuch as Windows NT or QNX4?\n\n*** postgresql-7.0/src/backend/libpq/pqcomm.c.orig\tTue May 16 18:06:42 2000\n--- postgresql-7.0/src/backend/libpq/pqcomm.c\tWed May 17 08:23:09 2000\n***************\n*** 375,381 ****\n \t\tif (setsockopt(port->sock, pe->p_proto, TCP_NODELAY,\n \t\t\t\t\t &on, sizeof(on)) < 0)\n \t\t{\n! \t\t\tperror(\"postmaster: StreamConnection: setsockopt\");\n \t\t\treturn STATUS_ERROR;\n \t\t}\n \t}\n--- 375,387 ----\n \t\tif (setsockopt(port->sock, pe->p_proto, TCP_NODELAY,\n \t\t\t\t\t &on, sizeof(on)) < 0)\n \t\t{\n! \t\t\tperror(\"postmaster: StreamConnection: setsockopt(TCP_NODELAY)\");\n! \t\t\treturn STATUS_ERROR;\n! \t\t}\n! \t\tif (setsockopt(port->sock, SOL_SOCKET, SO_KEEPALIVE,\n! \t\t\t\t\t &on, sizeof(on)) < 0)\n! \t\t{\n! \t\t\tperror(\"postmaster: StreamConnection: setsockopt(SO_KEEPALIVE)\");\n \t\t\treturn STATUS_ERROR;\n \t\t}\n \t}\n", "msg_date": "Wed, 17 May 2000 10:07:01 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RE: PostgreSQL and Unicode " }, { "msg_contents": "This should be safe. Both NT and QNX4 support SO_KEEPALIVE.\n\n-----Ursprüngliche Nachricht-----\nVon: Tatsuo Ishii <[email protected]>\nAn: <[email protected]>\nCc: <[email protected]>; <[email protected]>; <[email protected]>;\n<[email protected]>\nGesendet: Mittwoch, 17. Mai 2000 03:07\nBetreff: Re: [HACKERS] RE: PostgreSQL and Unicode\n\n\n> > > What about adding KEEPALIVE option to the socket?\n>\n> Ok. Here are patches against 7.0. BTW, does this break some platforms\n> such as Windows NT or QNX4?\n\n\n", "msg_date": "Wed, 17 May 2000 15:38:51 +0200", "msg_from": "\"Kardos, Dr. Andreas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: PostgreSQL and Unicode " }, { "msg_contents": "> This should be safe. Both NT and QNX4 support SO_KEEPALIVE.\n\nThanks for the info. The fix will appear in 7.0.1.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 17 May 2000 22:46:51 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RE: PostgreSQL and Unicode " }, { "msg_contents": "Seems this is applied.\n\n\n> > > What about adding KEEPALIVE option to the socket?\n> > \n> > Of course, since whatever OS he's using on the client side is too broken\n> > to notice that the socket is orphaned and close it, it might be so\n> > broken as to respond to the keepalive pings :-(. Still, it'd be an easy\n> > thing to try...\n> > \n> > Even though the stated case sounds more like an OS bug than anything\n> > else, setting KEEPALIVE on our TCP connections is probably still a good\n> > idea. If the client machine were to crash completely then it wouldn't\n> > be reasonable to expect it to close the connection, and we'd want to\n> > have some method of ensuring that the connected backend shuts down\n> > eventually. KEEPALIVE seems sufficiently low-overhead (and easy to\n> > implement) to be the right answer for this scenario.\n> \n> Ok. Here are patches against 7.0. 
BTW, does this break some platforms\n> such as Windows NT or QNX4?\n> \n> *** postgresql-7.0/src/backend/libpq/pqcomm.c.orig\tTue May 16 18:06:42 2000\n> --- postgresql-7.0/src/backend/libpq/pqcomm.c\tWed May 17 08:23:09 2000\n> ***************\n> *** 375,381 ****\n> \t\tif (setsockopt(port->sock, pe->p_proto, TCP_NODELAY,\n> \t\t\t\t\t &on, sizeof(on)) < 0)\n> \t\t{\n> ! \t\t\tperror(\"postmaster: StreamConnection: setsockopt\");\n> \t\t\treturn STATUS_ERROR;\n> \t\t}\n> \t}\n> --- 375,387 ----\n> \t\tif (setsockopt(port->sock, pe->p_proto, TCP_NODELAY,\n> \t\t\t\t\t &on, sizeof(on)) < 0)\n> \t\t{\n> ! \t\t\tperror(\"postmaster: StreamConnection: setsockopt(TCP_NODELAY)\");\n> ! \t\t\treturn STATUS_ERROR;\n> ! \t\t}\n> ! \t\tif (setsockopt(port->sock, SOL_SOCKET, SO_KEEPALIVE,\n> ! \t\t\t\t\t &on, sizeof(on)) < 0)\n> ! \t\t{\n> ! \t\t\tperror(\"postmaster: StreamConnection: setsockopt(SO_KEEPALIVE)\");\n> \t\t\treturn STATUS_ERROR;\n> \t\t}\n> \t}\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jun 2000 03:47:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RE: PostgreSQL and Unicode" } ]
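Stripped of the pqcomm.c context, what the committed patch enables amounts to the following standalone sketch. Note that Tatsuo's caveat still applies: with typical kernel defaults the first keepalive probe goes out only after roughly two hours of idle time, so detection of an orphaned client is slow unless the system-wide keepalive timers are tuned down:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /*
     * Ask the kernel to probe an idle TCP connection periodically.  If
     * the peer has vanished (client crash, dead network), the probes
     * fail, the socket is reported closed, and the backend can exit.
     */
    int
    enable_keepalive(int sock)
    {
        int on = 1;

        if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        {
            perror("setsockopt(SO_KEEPALIVE)");
            return -1;
        }
        return 0;
    }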
[ { "msg_contents": "As usual when replying from here, replies prefixed with PM:\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Lamar Owen [mailto:[email protected]]\nSent: Tuesday, May 16, 2000 3:53 AM\nTo: [email protected]; [email protected];\[email protected]\nSubject: [HACKERS] RPMS for 7.0 final.\n\n\nAnnouncing RedHat RPMs for PostgreSQL 7.0\n\n[snip]\n\nNOTE: There are no Linux/alpha patches in these RPMs. Hopefully, that\nsituation will soon be rectified in a -2 RPMset. However, I _did_ get\nthe 7.0 JDBC jar's in....\n\nPM: Good job I got them done in time wasn't it ;-)\n\n", "msg_date": "Tue, 16 May 2000 13:10:56 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] RPMS for 7.0 final." }, { "msg_contents": "Peter Mount wrote:\n>Lamar Owen Wrote:\n>> NOTE: There are no Linux/alpha patches in these RPMs. Hopefully, that\n>> situation will soon be rectified in a -2 RPMset. However, I _did_ get\n>> the 7.0 JDBC jar's in....\n \n> PM: Good job I got them done in time wasn't it ;-)\n\nThe interesting side of it was that I had already built the RPM's,\nrealized that I had forgotten a changelog note in the spec file, and\nhappened to check mail -- and you had just announced the jars. I was\nquite happy to see _that_ announcement! I was not a happy camper that I\nwasn't going to have 7.0 jars, and didn't have the time nor the presence\nof mind to install the jdk and build them myself. I triple-checked the\nRPM stuff, so, hopefully, there won't be any stupid errors there -- as I\npretty well know that part of things -- but, java I know not. But, happy\ncamper I now am!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 16 May 2000 08:17:59 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RPMS for 7.0 final." } ]
[ { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n\n[snip]\n\n> \n> regression=# vacuum foo;\n> NOTICE: FlushRelationBuffers(foo, 0): block 0 is dirty (private \n> 1, global 1)\n> VACUUM\n> \n> This is being caused by the buffer manager changes I recently made\n> to avoid doing unnecessary writes at commit. After the DELETE, foo's\n> buffer is written to disk, but at that point its (single) tuple is\n> marked with an uncommitted xmax transaction. The SELECT verifies the\n> tuple is dead and changes its on-row status to xmax-committed, but\n> *that change is not forced to disk* (the rationale being that even if\n> we crashed without writing the change, it could be done again later).\n> Now VACUUM comes along, finds no live tuples, and decides to truncate\n> the relation to zero blocks. During the truncation,\n> FlushRelationBuffers sees that the buffer it's flushing is still marked\n> dirty, and hence emits the above notice.\n>\n\nThis means vacuum doesn't necessarily flush all dirty buffers of\nthe target table. Doesn't this break the assumption of pg_upgrade ?\n \n> \n> Comments anyone?\n>\n\nSome changes around bufmgr seems to be needed.\n1) Provide a function which could flush all dirty buffers\n and vacuum calls the function for example.\nor\n2) SetBufferCommitInfoNeedsSave() sets SharedBuffer\n Changed and BufferDirtiedByMe. \nand/or\n3) Flush dirty buffers even in case of abort.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 16 May 2000 22:33:44 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Actually it's a bufmgr issue (was Re: Another pg_listener issue)" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Now VACUUM comes along, finds no live tuples, and decides to truncate\n>> the relation to zero blocks. During the truncation,\n>> FlushRelationBuffers sees that the buffer it's flushing is still marked\n>> dirty, and hence emits the above notice.\n\n> This means vacuum doesn't necessarily flush all dirty buffers of\n> the target table. Doesn't this break the assumption of pg_upgrade ?\n\nNo, because it does still flush the buffer. It's only emitting a\nwarning, because it thinks this condition suggests a bug in VACUUM.\nBut with the way bufmgr behaves now, the condition is actually fairly\nnormal, and so the warning is no longer of any value.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 May 2000 10:32:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Actually it's a bufmgr issue (was Re: Another pg_listener issue) " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n>\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> Now VACUUM comes along, finds no live tuples, and decides to truncate\n> >> the relation to zero blocks. During the truncation,\n> >> FlushRelationBuffers sees that the buffer it's flushing is still marked\n> >> dirty, and hence emits the above notice.\n>\n> > This means vacuum doesn't necessarily flush all dirty buffers of\n> > the target table. Doesn't this break the assumption of pg_upgrade ?\n>\n> No, because it does still flush the buffer.\n\nYes FlushRelationBuffers notices and flushes dirty buffers >=\nthe specified block. But doesn't it notice dirty buffers < the\nspecified block ? 
Or does vacuum flush all pages < the\nspecified block while processing ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 17 May 2000 01:38:57 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Actually it's a bufmgr issue (was Re: Another pg_listener issue) " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>>> This means vacuum doesn't necessarily flush all dirty buffers of\n>>>> the target table. Doesn't this break the assumption of pg_upgrade ?\n>> \n>> No, because it does still flush the buffer.\n\n> Yes FlushRelationBuffers notices and flushes dirty buffers >=\n> the specified block. But doesn't it notice dirty buffers < the\n> specified block ? Or does vacuum flush all pages < the\n> specified block while processing ?\n\nOh, I see what you mean: even after a VACUUM, it's possible for\nthere to be unwritten dirty buffers for a relation, containing\neither changes entered by an aborted transaction, or updates of\non-row status values made by non-VACUUM transactions. Hmm.\nIt's barely possible that the second case could break pg_upgrade,\nif something before VACUUM had updated the on-row status values\nin a page and then VACUUM itself had no reason to dirty the page.\nIf those status values never get written then pg_upgrade fails.\n\nMaybe it would be a good idea for VACUUM to force out all dirty\npages for the relation even when they're only dirty because of\non-row status updates. Normally we wouldn't care about risking\nlosing those updates, but for pg_upgrade we would.\n\nI was about to change FlushRelationBuffers anyway to get rid of\nthe bogus warning message. What I propose to do is give it two\nbehaviors:\n(1) write out dirty buffers at or beyond the specified block,\n but don't remove buffers from pool; or\n(2) remove buffers at or beyond the specified block from pool,\n after writing them out if dirty.\n\nVACUUM should apply case (2) beginning at the proposed truncation point,\nand then apply case (1) starting at block 0.\n\nSound good?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 May 2000 12:57:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Actually it's a bufmgr issue (was Re: Another pg_listener issue) " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n\n[snip]\n \n> Maybe it would be a good idea for VACUUM to force out all dirty\n> pages for the relation even when they're only dirty because of\n> on-row status updates. Normally we wouldn't care about risking\n> losing those updates, but for pg_upgrade we would.\n> \n> I was about to change FlushRelationBuffers anyway to get rid of\n> the bogus warning message. 
What I propose to do is give it two\n> behaviors:\n> (1) write out dirty buffers at or beyond the specified block,\n> but don't remove buffers from pool; or\n> (2) remove buffers at or beyond the specified block from pool,\n> after writing them out if dirty.\n> \n> VACUUM should apply case (2) beginning at the proposed truncation point,\n> and then apply case (1) starting at block 0.\n> \n> Sound good?\n>\n\nAgreed.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 17 May 2000 08:17:51 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Actually it's a bufmgr issue (was Re: Another pg_listener issue) " }, { "msg_contents": ">> I was about to change FlushRelationBuffers anyway to get rid of\n>> the bogus warning message. What I propose to do is give it two\n>> behaviors:\n>> (1) write out dirty buffers at or beyond the specified block,\n>> but don't remove buffers from pool; or\n>> (2) remove buffers at or beyond the specified block from pool,\n>> after writing them out if dirty.\n>> \n>> VACUUM should apply case (2) beginning at the proposed truncation point,\n>> and then apply case (1) starting at block 0.\n>> \n>> Sound good?\n\n> Agreed.\n\nOK, I've committed a fix for this. After looking at the uses of\nFlushRelationBuffers, I gave it just one behavior: flush *all* dirty\nbuffers of the relation, and remove from the cache those that are\nat or beyond the specified block number. This allows VACUUM's needs\nto be met in one buffer-cache scan instead of two.\n\nI also cleaned up ReleaseRelationBuffers, which should have but did\nnot remove the relation's buffers from the cache. This left us with\n\"valid\" buffers for a deleted relation after any DROP TABLE. No\nknown bug there, but clearly trouble waiting to happen. Likewise\nfor DropBuffers (same thing for a whole database).\n\nFinally, the \"removal\" of the deleted buffers in these routines\nconsisted of calling BufTableDelete(), which removes the buffer from the\nshared-buffer hashtable, so it will not be found by a lookup for a\nspecific block --- but the various routines that scan the whole\nshared-buffer array would still think the buffer belongs to its former\nrelation! That can't be good either, so I made BufTableDelete() clear\nthe tag field for the buffer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 23:31:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Actually it's a bufmgr issue (was Re: Another pg_listener issue) " } ]
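In outline, the committed single-scan behavior reads roughly like the pseudocode below. The names (BufferPool, buf->dirty, invalidate_buffer, and so on) are stand-ins for the real shared-buffer structures, not actual bufmgr identifiers:

    /* Illustrative pseudocode for the one-pass FlushRelationBuffers. */
    for (i = 0; i < NBuffers; i++)
    {
        BufferSketch *buf = &BufferPool[i];

        if (!buffer_belongs_to(buf, rel))
            continue;

        if (buf->dirty)
            write_buffer(buf);      /* flush every dirty page, even ones
                                     * dirtied only by on-row status
                                     * updates, so pg_upgrade is safe */

        if (buf->blockNum >= firstDelBlock)
            invalidate_buffer(buf); /* remove from the hash table AND
                                     * clear the tag, so whole-pool scans
                                     * no longer match it to the relation */
    }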
[ { "msg_contents": "On Fri, 12 May 2000, The Hermit Hacker wrote:\n\n> well, that answers my question ... and most cool on the 'summa cuade\n> laude' ... how does your name get written now:\n> \n> Ryan Kirkpatrick, B.CS, B.EE, SCL?\n\n\tTechnically it is only one degree with two majors, the electrical\nengineering is 'engineering - electrical concentration' (meaning that I\nhad to take a few mechanical engineering courses along the way as well :),\nand the computer science major was actually computer science engineering\n(which means it included some electrical engineering courses as well). So\nto be correct, I guess my name would get written:\n\nRyan Kirkpatrick, B.CSE+EEC, SCL\n\nThough I have never been much of one to stand on title so 'Ryan' will also\nprobably work. :) Anyway, thanks for the congrats. TTYL.\n\n---------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n---------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | http://www.rkirkpat.net/ |\n---------------------------------------------------------------------------\n\n", "msg_date": "Tue, 16 May 2000 07:35:38 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux/Alpha and PostgreSQL 7.0 Release Status" } ]
[ { "msg_contents": "Could anyone please tell me what I'm doing wrong? I'm sure I'm just\noverlooking something, but what?\n\n======================\n\nmoran:/acct$ id\nuid=1007(postgres) gid=1003(postgres) groups=1003(postgres)\nmoran:/acct$ export P=/acct/pindybook\nmoran:/acct$ initlocation P\nThe location will be initialized with username \"postgres\".\nThis user will own all the files and must also own the server process.\n\nFixing permissions on pre-existing directory /acct/pindybook\nCreating directory /acct/pindybook/base\n\ninitlocation is complete.\nYou can now create a database using\n CREATE DATABASE <name> WITH LOCATION = 'P'\nin SQL, or\n createdb <name> -D 'P'\nfrom the shell.\n\nmoran:/acct$ createdb indybook -D 'P'\nERROR: The database path 'P' is invalid. This may be due to a character that is not allowed or because the chosen path isn't permitted for databases\ncreatedb: database creation failed\nmoran:/acct$ ls -ld pindybook\ndrwx------ 3 postgres postgres 512 May 16 09:40 pindybook\nmoran:/acct$ ls -l pindybook\ntotal 1\ndrwx------ 2 postgres postgres 512 May 16 09:40 base\nmoran:/acct$ \n\n======================\n\nThanks...\n\n-- \nRichard Kuhns\t\t\[email protected]\nPO Box 6249\t\t\tTel: (765)477-6000 \\\n100 Sawmill Road\t\t\t\t x319\nLafayette, IN 47903\t\t (800)489-4891 /\n", "msg_date": "Tue, 16 May 2000 10:25:48 -0500 (EST)", "msg_from": "Richard J Kuhns <[email protected]>", "msg_from_op": true, "msg_subject": "Question about databases in alternate locations..." }, { "msg_contents": "Richard J Kuhns wrote:\n> \n> Could anyone please tell me what I'm doing wrong? I'm sure I'm just\n> overlooking something, but what?\n> \n> ======================\n> \n> moran:/acct$ id\n> uid=1007(postgres) gid=1003(postgres) groups=1003(postgres)\n> moran:/acct$ export P=/acct/pindybook\n\nfirst guess is this: did you export that value before you started the\npostmaster? the postmaster needs to have that value in it's environment\nbefore it is started in order for you to use the alternate location.\n\ngood luck,\n\njeff\n", "msg_date": "Tue, 16 May 2000 10:47:38 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about databases in alternate locations..." }, { "msg_contents": "> Could anyone please tell me what I'm doing wrong? I'm sure I'm just\n> overlooking something, but what?\n\nAs Jeff pointed out, the environment variable \"P\" must be known to the\nserver backend to be used in the WITH LOCATION clause. Using it in the\npreceeding initlocation invocation was correct. The utility tries it\nas an environment variable, then as an absolute path, so \"initlocation\nP\" and \"initlocation $P\" are both valid. You can make the environment\nvariable known to the backend by defining it in the postgres account's\n.cshrc or .bashrc file, or by explicitly setting it before firing up\nthe backend.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 17 May 2000 05:38:58 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about databases in alternate locations..." }, { "msg_contents": "Thomas Lockhart writes:\n > > Could anyone please tell me what I'm doing wrong? I'm sure I'm just\n > > overlooking something, but what?\n > \n > As Jeff pointed out, the environment variable \"P\" must be known to the\n > server backend to be used in the WITH LOCATION clause. 
Using it in the\n > preceding initlocation invocation was correct. The utility tries it\n > as an environment variable, then as an absolute path, so \"initlocation\n > P\" and \"initlocation $P\" are both valid. You can make the environment\n > variable known to the backend by defining it in the postgres account's\n > .cshrc or .bashrc file, or by explicitly setting it before firing up\n > the backend.\n > \n > - Thomas\n\nThanks to everyone who answered; my problem was that the backend knew\nnothing about it.\n\nThat brings up a comment, a question, and an offer. First, the comment: I\nactually did check the user's guide before I posted the question, but the\ndescription of initlocation doesn't mention it at all -- it just gives an\nexample that doesn't work unless the backend already knows about the\nvariable. It does refer to the CREATE DATABASE section, but at a quick\nglance (I know, I should have read more carefully, mea culpa!) I just saw\nan example that looked similar to the initlocation example.\n\nNow for the question. What's the reason for using this method, as opposed\nto using, say, a system catalog to hold the valid locations? Historical?\nHaving to stop and restart the backend so it can re-read its environment\nseems kind of archaic.\n\nNow the offer. I'm in the design stage of the process of converting a\nfairly large legacy application to PostgreSQL. It's going to be\nessentially a complete re-write, but I should be able to do it in\nmore-or-less independent sections. I really like what I've experienced so\nfar of PostgreSQL, I'd like to contribute, and modifying the postmaster to\nuse (or at least look at, if it exists) a system catalog for this info\nmight be a good way to get my feet wet. Comments?\n\nThanks...\n\t\t\t\t- Rich\n\n-- \nRichard Kuhns\t\t\[email protected]\nPO Box 6249\t\t\tTel: (765)477-6000 \\\n100 Sawmill Road\t\t\t\t x319\nLafayette, IN 47903\t\t (800)489-4891 /\n", "msg_date": "Wed, 17 May 2000 08:50:27 -0500 (EST)", "msg_from": "Richard J Kuhns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about databases in alternate locations..." }, { "msg_contents": "> That brings up a comment, a question, and an offer. First, the comment: I\n> actually did check the user's guide before I posted the question, but the\n> description of initlocation doesn't mention it at all -- it just gives an\n> example that doesn't work unless the backend already knows about the\n> variable. It does refer to the CREATE DATABASE section, but at a quick\n> glance (I know, I should have read more carefully, mea culpa!) I just saw\n> an example that looked similar to the initlocation example.\n\nHmm. You are right, the doc on initlocation is weak. I'll put it on my\ntodo list (and will always gladly accept patches to the sgml sources\nor just new words in an email ;)\n\nHowever, the topic is covered in more detail in the Admin Guide, in\nthe chapter on \"Disk Management\" (actually, it is the only topic in\nthat chapter so far :(\n\n> Now for the question. What's the reason for using this method, as opposed\n> to using, say, a system catalog to hold the valid locations? Historical?\n> Having to stop and restart the backend so it can re-read its environment\n> seems kind of archaic.\n\nThis was and is a topic of discussion on the -hackers list. Peter E\n(if I recall right) was proposing some changes to remove the\nenvironment variable capabilities in Postgres. 
He also proposed making\na *list* of allowed locations as an environment variable as a way of\nmanaging or controlling the allowed locations.\n\nIn my view, environment variables (or some other mechanism,\npotentially) allow a dbadmin to decouple the storage location from the\ndatabase contents, and give some control over allowed locations. The\ncurrent implementation is not ideal; for example Peter's proposal to\nhave a list of allowed locations seems great, since at the moment the\nbackend will try *any* environment variable (e.g. $HOME) so could be a\nsecurity problem.\n\nPutting all of this stuff in a table is a possibility, but\n1) Ingres did this, but they had way too many tables involved in\ndefining and using tables imho. We should do better.\n2) If a dbadmin wants to *carefully* move database locations around,\nthe environment variables allow this to happen by just shutting down\nthe backend, tarring/untarring a disk area, redefining the environment\nvariable, and restarting the backend.\n3) We don't (yet) have a way to move tables from within Postgres. So\nhardcoding or \"hard storing\" absolute paths would make it pretty\ndifficult to accomplish (2).\n\n> Now the offer. I'm in the design stage of the process of converting a\n> fairly large legacy application to PostgreSQL. It's going to be\n> essentially a complete re-write, but I should be able to do it in\n> more-or-less independent sections. I really like what I've experienced so\n> far of PostgreSQL, I'd like to contribute, and modifying the postmaster to\n> use (or at least look at, if it exists) a system catalog for this info\n> might be a good way to get my feet wet. Comments?\n\nNot sure that we should do the system catalog thing without first\nimplementing the ability to do an \"ALTER TABLE SET LOCATION=...\"\ncommand from within Postgres. But it's time to move to the -hackers\nlist. Welcome!\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 17 May 2000 15:27:46 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about databases in alternate locations..." }, { "msg_contents": "Richard J Kuhns <[email protected]> writes:\n> Now for the question. What's the reason for using this method, as opposed\n> to using, say, a system catalog to hold the valid locations? Historical?\n> Having to stop and restart the backend so it can re-read its environment\n> seems kind of archaic.\n\nWell, there'd be a certain amount of circularity in consulting a table\nto find out where you can find tables, no? ;-) But you're right, the\nenvironment-variable mechanism is pretty grotty. There's been a great\ndeal of discussion already in pg-hackers about how to clean up this\nand related issues; suggest you consult the archives if you want to get\ninvolved with fixing it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 12:35:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about databases in alternate locations... " }, { "msg_contents": "Hi,\n\nWhen I try to use pg_dump, I get this error. Can it have something to\ndo with a custom type I added. 
I made sure I added the input/output functions\nand comparison functions for sorting and queries.\n\nThe type works fine in SQL queries in general.\n\n> pg_dump -s scm\n\\connect - d23adm\nfailed sanity check, type with oid 457690 was not found\n\nThanks\n\nPatrick\n\n--\n________________________________________\nPatrick Robin\[email protected]\nWalt Disney Feature Animation\n500 South Buena Vista Street\nBurbank, California 91521-4817\n\n\n\n", "msg_date": "Wed, 17 May 2000 10:13:20 -0700", "msg_from": "Patrick Robin <[email protected]>", "msg_from_op": false, "msg_subject": "pg_dump return failed sanity check" }, { "msg_contents": "hi,\n we're looking at migrating from ORACLE to postgres in the\nvery near future and we've run into a small problem. there's\na data type defined \"LINE\". we have named one of our tables \nas \"LINE\" also and it would require a great deal of code \nchanges to rename that table. is it possible to simply\n\"turn off\" the line type? any help is appreciated.\n\nthanks,\n mikeo \n", "msg_date": "Wed, 17 May 2000 13:41:24 -0400", "msg_from": "mikeo <[email protected]>", "msg_from_op": false, "msg_subject": "line type" }, { "msg_contents": "hi,\n we're looking at migrating from ORACLE to postgres in the\nvery near future and we've run into a small problem. there's\na data type defined \"LINE\". we have named one of our tables \nas \"LINE\" also and it would require a great deal of code \nchanges to rename that table. is it possible to simply\n\"turn off\" the line type? any help is appreciated.\n\nthanks,\n mikeo \n\n", "msg_date": "Wed, 17 May 2000 14:43:40 -0400", "msg_from": "mikeo <[email protected]>", "msg_from_op": false, "msg_subject": "remove line type?" }, { "msg_contents": "I guess you could remove the line type from the pg_type table and see if\nthat helps.\n\n> hi,\n> we're looking at migrating from ORACLE to postgres in the\n> very near future and we've run into a small problem. there's\n> a data type defined \"LINE\". we have named one of our tables \n> as \"LINE\" also and it would require a great deal of code \n> changes to rename that table. is it possible to simply\n> \"turn off\" the line type? 
any help is appreciated.\n> >> \n> >> thanks,\n> >> mikeo \n> >> \n> >> \n> >\n> >\n> >-- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 May 2000 15:04:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: remove line type?" }, { "msg_contents": "that worked!!! thanks!\n\nmikeo\n\n\nAt 02:51 PM 5/17/00 -0400, Bruce Momjian wrote:\n>I guess you could remove the line type from the pg_type table and see if\n>that helps.\n>\n>> hi,\n>> we're looking at migrating from ORACLE to postgres in the\n>> very near future and we've run into a small problem. there's\n>> a data type defined \"LINE\". we have named one of our tables \n>> as \"LINE\" also and it would require a great deal of code \n>> changes to rename that table. is it possible to simply\n>> \"turn off\" the line type? any help is appreciated.\n>> \n>> thanks,\n>> mikeo \n>> \n>> \n>\n>\n>-- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n", "msg_date": "Wed, 17 May 2000 15:09:07 -0400", "msg_from": "mikeo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: remove line type?" }, { "msg_contents": "Patrick Robin <[email protected]> writes:\n> When I try to use pg_dump, I get this error. Can it have something to\n> do with a custom type I added. I made sure I added the input/output functions\n> and comparision functions for sorting and queries.\n> The type works fine in SQL queries in general.\n\n>> pg_dump -s scm\n> \\connect - d23adm\n> failed sanity check, type with oid 457690 was not found\n\nThat's probably an indication that you forgot to delete a function that\ntakes or returns an older custom type that you deleted.\n\nLook in pg_proc for a function containing 457690 in proargtypes or\nprorettype, and delete that tuple (or tuples if more than one).\n\npg_dump oughta be more helpful about where it sees the problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 23:18:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump return failed sanity check " }, { "msg_contents": "Thomas Lockhart writes:\n\n> Peter E (if I recall right) was proposing some changes to remove the\n> environment variable capabilities in Postgres. He also proposed making\n> a *list* of allowed locations as an environment variable as a way of\n> managing or controlling the allowed locations.\n\nThat was an interesting line of thought until the system catalog idea came\nup. I believe everyone would agree that keeping things system catalog\ncontrolled is the generally preferred choice. If you create a system\ncatalog pg_location(locname name, locpath text) then you still have in\nfact a list of allowed locations, but one that can be changed while the\nserver is up, that can be queried, that can easily be joined against\npg_database, etc. Heck, finely grained permissions are the next logical\nstep.\n\nTable spaces are another point of consideration. 
Surely you would\neventually want table space administration to be via query language\ncommands. In essence, the alternative locations are a table space kind of\nthingy. The only difference is that the granularity of control stops at\nthe database level, but that's only a difference of degree, not kind. In\nfact, if someone comes around to reworking the logical->physical relation\nname mapping then you could add a field pg_class.rellocation and voil�,\nthere's your table spaces.\n\nSo all in all I do like the system catalog driven model much better in\nterms of ease of use, functionality, extensibility, everything. And no,\nthere's no chicken-and-egg problem because the relation name mapping for\nshared system relations would presumably not be changed. (How would that\nwork anyway?)\n\n> Putting all of this stuff in a table is a possibility, but\n> 1) Ingres did this, but they had way too many tables involved in\n> defining and using tables imho. We should do better.\n\nWell, so far we'd have one table. Is there any reason why we would need\nmore? Why did they have so many? I don't mind many tables if they give\nmore functionality.\n\n> 2) If a dbadmin wants to *carefully* move database locations around,\n> the environment variables allow this to happen by just shutting down\n> the backend, tarring/untarring a disk area, redefining the environment\n> variable, and restarting the backend.\n\n1. shut down database\n2. move data area\n3. connect to template1\n4. update pg_location\n5. connect to the moved database\n\nThat's not very different.\n\n> 3) We don't (yet) have a way to move tables from within Postgres. So\n> hardcoding or \"hard storing\" absolute paths would make it pretty\n> difficult to accomplish (2).\n\nI don't know what you mean with \"hard storing\".\n\n\nAll in all this might be a relatively small job for great immediate and\nfuture benefit.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 19 May 2000 01:50:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about databases in alternate locations..." }, { "msg_contents": "> > Peter E (if I recall right) was proposing some changes to remove the\n> > environment variable capabilities in Postgres. He also proposed making\n> > a *list* of allowed locations as an environment variable as a way of\n> > managing or controlling the allowed locations.\n> That was an interesting line of thought until the system catalog idea came\n> up. I believe everyone would agree that keeping things system catalog\n> controlled is the generally preferred choice. If you create a system\n> catalog pg_location(locname name, locpath text) then you still have in\n> fact a list of allowed locations, but one that can be changed while the\n> server is up, that can be queried, that can easily be joined against\n> pg_database, etc. Heck, finely grained permissions are the next logical\n> step.\n\nSo pg_location would hold the full path (absolute or logical) to every\nfile resource in every database? Or would it hold only a list of\nallowed paths? Or only a list of resources for each database (~1 row\nper database) and then table-specific info would be stored somewhere\nlocal to the database itself?\n\n> Table spaces are another point of consideration. Surely you would\n> eventually want table space administration to be via query language\n> commands. 
In essence, the alternative locations are a table space kind of\n> thingy. The only difference is that the granularity of control stops at\n> the database level, but that's only a difference of degree, not kind. In\n> fact, if someone comes around to reworking the logical->physical relation\n> name mapping then you could add a field pg_class.rellocation and voil�,\n> there's your table spaces.\n\nYes, this capability will be great.\n ALTER TABLE SET LOCATION=...\nand/or\n ALTER DATABASE SET LOCATION=...\nshould help administration and scalability.\n\n> So all in all I do like the system catalog driven model much better in\n> terms of ease of use, functionality, extensibility, everything. And no,\n> there's no chicken-and-egg problem because the relation name mapping for\n> shared system relations would presumably not be changed. (How would that\n> work anyway?)\n> > Putting all of this stuff in a table is a possibility, but\n> > 1) Ingres did this, but they had way too many tables involved in\n> > defining and using tables imho. We should do better.\n> Well, so far we'd have one table. Is there any reason why we would need\n> more? Why did they have so many? I don't mind many tables if they give\n> more functionality.\n\nI have no idea why they had so many. Probably because it grew\nincrementally, or possibly because they normalized their tables to the\ntheoretically correct point. It was ugly either way (right Bruce?).\n\n> > 2) If a dbadmin wants to *carefully* move database locations around,\n> > the environment variables allow this to happen by just shutting down\n> > the backend, tarring/untarring a disk area, redefining the environment\n> > variable, and restarting the backend.\n> 1. shut down database\n> 2. move data area\n> 3. connect to template1\n> 4. update pg_location\n> 5. connect to the moved database\n> That's not very different.\n\nBut hard to do? If pg_location has 5000 entries, and you've scattered\ntables all over the place (perhaps a bad decision, but we *should*\nhave the flexibility to do that) then it might be very error prone\nwhen working with absolute paths imho.\n\n> > 3) We don't (yet) have a way to move tables from within Postgres. So\n> > hardcoding or \"hard storing\" absolute paths would make it pretty\n> > difficult to accomplish (2).\n> I don't know what you mean with \"hard storing\".\n\nPutting absolute path names as pointers to tables or data areas. I'm\ngetting the sense I'm in a minority (in a group of 3? ;) in this\ndiscussion, but imho having some decoupling between logical paths in\nthe database and actual paths outside is A Good Thing. Always has been\na mark of good design in my experience.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 19 May 2000 13:43:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about databases in alternate locations..." }, { "msg_contents": "Thomas Lockhart writes:\n > So pg_location would hold the full path (absolute or logical) to every\n > file resource in every database? Or would it hold only a list of\n > allowed paths? Or only a list of resources for each database (~1 row\n > per database) and then table-specific info would be stored somewhere\n > local to the database itself?\n > \nIs a list of allowed paths really necessary? 
If initlocation has already\nbeen run so a directory tree with the proper structure and permissions\nexists there'd be no new security hole (ie, I couldn't ask the backend to\ncreate a database on any arbitrary partition; only one that's already been\nprepared by the administrator).\n\nI'd like to see a list of resources per database, with any table-specific\ninfo stored locally.\n\n > ALTER TABLE SET LOCATION=...\n > and/or\n > ALTER DATABASE SET LOCATION=...\n > should help administration and scalability.\n > \nDefinitely. Of course, I'd want to make sure any new LOCATION had been\nprepared by the administrator.\n\n > But hard to do? If pg_location has 5000 entries, and you've scattered\n > tables all over the place (perhaps a bad decision, but we *should*\n > have the flexibility to do that) then it might be very error prone\n > when working with absolute paths imho.\n > \nI'd think that a pg_location entry wouldn't be necessary for the majority\nof tables -- the default location would be just like it is now, under the\ndatabase directory. Creating a database directory in one place and\nscattering the tables all over creation would definitely be a Bad Decision,\nIMHO, but it would be doable.\n\n > Putting absolute path names as pointers to tables or data areas. I'm\n > getting the sense I'm in a minority (in a group of 3? ;) in this\n > discussion, but imho having some decoupling between logical paths in\n > the database and actual paths outside is A Good Thing. Always has been\n > a mark of good design in my experience.\n > \nHow about requiring an absolute path for the data(base) area, and\nallowing relative paths for the tables? Actually, if you want\nALTER DATABASE SET LOCATION=...\nto move tables, you'd either have to require relative paths for the\ntables or ignore tables that have absolute paths, right?\n\nHmm. And all I originally wanted was an easier way to create a database in\nan alternate location :-).\n\n\t\t\t- Rich\n\n-- \nRichard Kuhns\t\t\[email protected]\nPO Box 6249\t\t\tTel: (765)477-6000 \\\n100 Sawmill Road\t\t\t\t x319\nLafayette, IN 47903\t\t (800)489-4891 /\n", "msg_date": "Fri, 19 May 2000 11:25:35 -0500 (EST)", "msg_from": "\"Richard J. Kuhns\" <[email protected]>", "msg_from_op": false, "msg_subject": "[HACKERS] Re: Question about databases in alternate locations..." }, { "msg_contents": "Thomas Lockhart writes:\n\n> So pg_location would hold the full path (absolute or logical) to every\n> file resource in every database? Or would it hold only a list of\n> allowed paths?\n\nThe way I imagined it it would hold data like this:\n\n locname | locpath\n----------------+-------------------\n alt1 | /mnt/foo/db\n joes alt store | /home/joe/storage\n\nWhen I create a database I would then do CREATE DATABASE \"my_db\" WITH\nLOCATION = \"alt1\"; which would place the database at\n/mnt/foo/db/data/base/my_db. Then if I create another that I want at the\nsame place I do CREATE DATABASE \"another\" WITH LOCATION =\n\"alt1\";. pg_database would presumably contain a reference to\npg_location.oid instead of the current datpath attribute. So one could say\nI'm really just normalizing pg_database.\n\nIn some future life you might be able to do CREATE TABLE xxx (...) WITH\nLOCATION = \"joes alt store\" but then we'd have to think about how to\nresolve the path. One idea would be to get rid of per-database\nsubdirectories and just store all heap files in one directory, but I'm\nsure Bruce would hate that. 
:) But that's another day's story.\n\nSo yes, it is a list of allowed locations associated with freely choosable\ndescriptive names. Environment variables do essentially provide a similar\nservice but I find this much more administration friendly and\nflexible. (E.g., \"What sort of stuff is being stored at /var/abc/def?\" --\nuse a query)\n\n> > 1. shut down database\n> > 2. move data area\n> > 3. connect to template1\n> > 4. update pg_location\n> > 5. connect to the moved database\n> > That's not very different.\n> \n> But hard to do?\n\nALTER LOCATION \"name\" SET PATH TO '/new/path';? (Alternatively, use update\npg_location set locpath='/new/path' where locname='name'.) That isn't any\nharder than setting environment variables. It might in fact be easier.\n\n> but imho having some decoupling between logical paths in the database\n> and actual paths outside is A Good Thing. Always has been a mark of\n> good design in my experience.\n\nSure, that's exactly what this would provide. locname is the logical name\nof the \"storage location\", locpath is the physical path. It's just a\nmatter of whether you maintain that information in environment variables\n(which might get unset, forgotten, require postmaster shutdown, are\nsubject to certain rules we don't control) or in the database (which comes\nwith all the conveniences you might imagine).\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 20 May 2000 15:35:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about databases in alternate locations..." } ]
[ { "msg_contents": "> Rather than replacing just the storage manager, you'd be replacing\n> the access methods, buffer manager, transaction manager, and some\n> of the shared memory plumbing with our stuff. I wasn't sufficiently\n> clear in my earlier message, and talked about \"no-overwrite\" as if\n> it were the only component.\n> \n> Clearly, that's a lot of work. On the other hand, you'd have the\n> benefit of an extremely well-tested and widely deployed library to\n> provide those services. Lots of different groups have used the\n> software, so the abstractions that the API presents are well-thought\n> out and work well in most cases.\n\nTrue. But after replacement the system as whole would not be well-tested.\n\"A lot of work\" means \"a lot of bugs, errors etc\".\n\n> The group is interested in multi-version concurrency control, so that\n> readers never block on writers. If that's genuinely critical, we'd\n> be willing to see some work done to add it to Berkeley DB, so that it\n> can do either conventional 2PL without versioning, or MV. Naturally,\n\nThis would be the first system with both types of CC -:)\n\nWell, so, before replacing anything we would have to add MVCC to BDB.\nI still didn't look at your sources, 'll do in a few days...\n\nVadim\n", "msg_date": "Tue, 16 May 2000 09:26:28 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Berkeley DB license" }, { "msg_contents": "> > The group is interested in multi-version concurrency control, so that\n> > readers never block on writers. If that's genuinely critical, we'd\n> > be willing to see some work done to add it to Berkeley DB, so that it\n> > can do either conventional 2PL without versioning, or MV. Naturally,\n> \n> This would be the first system with both types of CC -:)\n> \n> Well, so, before replacing anything we would have to add MVCC to BDB.\n> I still didn't look at your sources, 'll do in a few days...\n\nVadim, I thought you said you were going to be doing a new storage\nmanager for 7.2, including an over-write storage manager that keeps MVCC\ntuples in a separate location. Could SDB work in that environment\neasier, without having MVCC integrated into SDB?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 May 2000 13:54:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" } ]
[ { "msg_contents": "> > Well, so, before replacing anything we would have to add \n> > MVCC to BDB. I still didn't look at your sources, 'll do\n> > in a few days...\n> \n> Vadim, I thought you said you were going to be doing a new storage\n> manager for 7.2, including an over-write storage manager that \n> keeps MVCC tuples in a separate location. Could SDB work in that\n> environment easier, without having MVCC integrated into SDB?\n\nHow can we integrate SDB code into PostgreSQL without MVCC support\nin SDB if we still want to have MVCC?! I missed something?\nOr you ask is replacement+changes_in_SDB_for_MVCC easier than\nWAL+new_our_smgr? I don't know.\n\nVadim\n", "msg_date": "Tue, 16 May 2000 11:13:24 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Berkeley DB license" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > Well, so, before replacing anything we would have to add \n> > > MVCC to BDB. I still didn't look at your sources, 'll do\n> > > in a few days...\n> > \n> > Vadim, I thought you said you were going to be doing a new storage\n> > manager for 7.2, including an over-write storage manager that \n> > keeps MVCC tuples in a separate location. Could SDB work in that\n> > environment easier, without having MVCC integrated into SDB?\n> \n> How can we integrate SDB code into PostgreSQL without MVCC support\n> in SDB if we still want to have MVCC?! I missed something?\n> Or you ask is replacement+changes_in_SDB_for_MVCC easier than\n> WAL+new_our_smgr? I don't know.\n\nYou stated that the new storage manager will do over-writing, and that\nthe MVCC-needed tuples will be kept somewhere else and removed when not\nneeded. \n\nIt is possible to use SDB, and keep the MVCC-needed tuples somewhere\nelse, also in SDB, so we don't have to add MVCC into the SDB existing\ncode, we just need to use SDB to implement MVCC.\n\nThe issue was that SDB does two-phase locking, and I was asking if MVCC\ncould be layered on top of SDB, rather than being added into SDB.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 May 2000 14:28:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" }, { "msg_contents": "> How can we integrate SDB code into PostgreSQL without MVCC support\n> in SDB if we still want to have MVCC?! I missed something?\n> Or you ask is replacement+changes_in_SDB_for_MVCC easier than\n> WAL+new_our_smgr? I don't know.\n\nI guess I was asking if MVCC could be implemented on top of SDB, rather\nthan changes made to SDB itself.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 16 May 2000 14:29:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Berkeley DB license" } ]
[ { "msg_contents": "Tom Lane writes:\n\n> I have just been scanning some of the original Postgres papers\n> (in an unsuccessful search to find out how one uses \"set\" attributes;\n> anyone know?)\n\nI've been playing around with that a while ago in the hope that this would\nexplain this table-as-datatype thing but several findings led me to\nbelieve that this is long dead, removed, rotten code:\n\n* SET uses textin/textout\n\n* no functions defined with SET arguments or return values,\npg_proc.proretset is false for all rows\n\n* the only entry point for defining sets is in parser/parser.c, which is\nfittingly marked #ifdef SETS_FIXED\n\n\nThe function SetDefine in utils/adt/sets.c makes me think that a SET is\nmore or less a stored procedure without arguments. That is, you would\ndefine some SET type in terms of a query from another table and then you\ncould use predicates like `value in set'. The syntax for this must have\ngotten lost in the PostQUEL to SQL switch. All in all there's not much to\nrescue from there, I believe.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 16 May 2000 20:32:29 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "SET type (was Re: WAL versus Postgres)" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I've been playing around with that a while ago in the hope that this would\n> explain this table-as-datatype thing but several findings led me to\n> believe that this is long dead, removed, rotten code:\n\nThere's an awful lot of code still there ;-). Undoubtedly it's\nsuffering from bit-rot, but I'm not sure the feature is as much\nof a lost cause as you think.\n\n> The function SetDefine in utils/adt/sets.c makes me think that a SET is\n> more or less a stored procedure without arguments.\n\nThat squares with what I've been reading in the old Postgres papers;\nthey didn't call it a \"set\", but they definitely put a lot of emphasis\non the notion of a stored procedure that would generate a set of tuples\nwhen called.\n\nWhat I'm confused about at the moment is that there seem to be two\ndifferent facilities in the Postgres code that could be equated to\nthat concept. There is the \"attisset\" stuff, and then there is the\nnotion of a function that takes a relation as input and produces\nanother relation as output. The latter still sort of works --- there\nare regression test cases that exercise it, and even though the\nimplementation has some fundamental shortcomings it is still useful.\nThis seems to be related to \"attisset\" but yet not be the same thing.\nLike I said, I'm confused...\n\n> That is, you would define some SET type in terms of a query from\n> another table and then you could use predicates like `value in\n> set'. The syntax for this must have gotten lost in the PostQUEL to SQL\n> switch.\n\nIt's possible to invoke both facilities via the nested-dot notation.\nIf I understood which gets invoked when, I'd be a lot further along.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 01:49:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SET type (was Re: WAL versus Postgres) " } ]
[ { "msg_contents": "Tom Lane writes:\n\n> If there are multiple possibilities, we choose the one which is the\n> \"least promoted\" by some yet-to-be-determined metric.\n\nA \"metric\" is often one step away from \"unpredictable behaviour\", at least\nfor users.\n\nBut if you look in practice then there are not really any existing\nfunctions that could benefit from this, at least not if the other\nproposals go through and/or a few function entries are added or removed.\n\nLet's say you have a function foo(float8, int2) and one foo(float4, int8)\nand you call it with (you guessed it) float4 and int2. Which do you take?\nIf there is a good reason that these two functions exist separate in the\nway they are then the decision should probably not be made by some\ncasting-cost metric but by the user. If there is no good reason that the\nfunctions are like this then perhaps the implementator should be made\naware of the (useless?) ambiguity and maybe provide a grand unified\n(float8, int8) version.\n\nIf you do want to support this sort of ambiguity resolution then the\nmetric should IMHO be how many arguments you had to cast away from the\ninput type at all. That *might* give reasonable behaviour for users,\nthough I'm not completely sure.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Tue, 16 May 2000 20:32:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type conversion discussion " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> But if you look in practice then there are not really any existing\n> functions that could benefit from this,\n\nWhile looking at existing functions can help detect problems in a\nproposal like this, the standard set of functions is *not* the be-all\nand end-all; the whole point is that we are trying to build an\nextensible system. So I'm pretty suspicious of any argument that\nbegins in the above fashion ;-)\n\n> Let's say you have a function foo(float8, int2) and one foo(float4, int8)\n> and you call it with (you guessed it) float4 and int2. Which do you take?\n\nA good point; I wouldn't object to returning an error if we determine\nthat there are multiple equally-good possibilities. But, again, the\nsticky question is equally good according to what metric?\n\n> If you do want to support this sort of ambiguity resolution then the\n> metric should IMHO be how many arguments you had to cast away from the\n> input type at all.\n\nMost of the cases we will be dealing with in practice are operators of\none or two arguments, so a metric that only has two or three possible\nvalues (respectively) is not going to be very useful... especially\nsince the exact-match case is not interesting. With the above metric\nwe'd never be able to resolve any ambiguous unary-operator cases at all!\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 May 2000 17:40:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type conversion discussion " }, { "msg_contents": "Tom Lane writes:\n\n> > Let's say you have a function foo(float8, int2) and one foo(float4, int8)\n> > and you call it with (you guessed it) float4 and int2. Which do you take?\n> \n> A good point; I wouldn't object to returning an error if we determine\n> that there are multiple equally-good possibilities. 
But, again, the\n> sticky question is equally good according to what metric?\n\nIMO that metric should be \"existence\". Anything else is bound to be\nnon-obvious. Surely some breadth-first or depth-first search through the\nimaginary casting tree would yield reasonable results, but it's still\nconfusing. I don't see any good reasons why one would have such\nambiguously overloaded functions, though I'd be very interested to see\none.\n\nIn fact one might consider preventing *creation* of such setups. That's\nwhat programming languages would do. If I'm not mistaken then what I'm\nsaying is that overloaded functions must form a lattice when ordered\naccording to the elsewhere proposed promotion hierarchy of their\narguments. That ought to be a doable thing to check for and then we could\nalso use lattice concepts to find the best fitting function. Gotta work\nthis out in detail though.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Wed, 17 May 2000 19:31:00 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type conversion discussion " }, { "msg_contents": "I wrote:\n\n> If I'm not mistaken then what I'm saying is that overloaded functions\n> must form a lattice [...]\n\nIndeed, I was. This attacks the problem from a different side, namely not\nallowing setups that are ambiguous in the first place.\n\nIn detail:\n\nIf T1 and T2 are datatypes then T1 < T2 iff T1 is allowed to be implicitly\nconverted to T2, according to the promotion scheme du jour. If neither T1\npromotes to T2 nor vice versa then T1 and T2 are not comparable.\n\nLet A and B be functions with argument types (a1, a2, ... an) and (b1, b2,\n... bn) respectively. Then we say that A < B iff ai < bi for all i=1..n.\nIf neither A < B nor B < A then A and B are not comparable. (That would\ninclude such cases as foo(int2, float8) and foo(int8, float4).)\n\nDefine a partition on the set of all functions. Two functions F(f1, f2,\n... fm) and G(g1, g2, ... gn) are in the same equivalence class iff \"F\" =\n\"G\" (same name), m = n, and fi and gi are comparable for all i = 1..m. Now\nI propose that you always ensure (preferably at function creation time)\nthat each equivalence class is a lattice under the partial order described\nin the previous paragraph.\n\nThis helps because... Given a function call Q that you want to resolve, you\nfirst identify its equivalence class. Then there are three possibilities:\n\n1) Q is not comparable to any other function in the equivalence class.\nThat means that the provided set of functions is incomplete. Proof: Assume\nthere was a function P which could handle call Q. Then, since you can only\ncast \"up\", the arguments of P must at each position be >= those of Q. But\nthen P > Q.\n\n2) Q is greater than any element in the equivalence class. This also means\nthat the set of provided functions is unable to handle Q.\n\n3) There exists at least one function P in the equivalence class such that\nQ <= P. Consider the set A of all functions in the equivalence class that\nare >= Q. Since the class is a lattice, A has a (unique) greatest lower\nbound. 
That's the function you call.\n\nRaise your hand if you are lost...\n\nAn example, say you define foo(int2, float8) and now want to add foo(int8,\nfloat4), then the system would reject that because of potential ambiguity\nand require adding foo(int8, float8) and foo(int2, float4) before creating\nfoo(int8, float4); or you would perhaps realize what you are doing and\ninstead just provide foo(int8, float8), or whatever you really wanted in\nthe first place.\n\n(Actually, the requirement that you add foo(int8, float8) is useless,\nsince there is no down-casting and we don't use the least upper bound\nproperty of lattices, so probably in practice one could settle for a\n\"half-lattice\" requirement.)\n\n\nNow I realize that this is a pretty deep proposal, but at least it's a\nsolidly founded one (I think), it prevents the programmer-user from doing\nunwise things, and it avoids any unpredictable \"metrics\".\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 18 May 2000 00:06:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type conversion discussion " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I propose that you always ensure (preferably at function creation time)\n> that each equivalence class is a lattice under the partial order described\n> in the previous paragraph.\n\nEr ... weren't you just objecting to metric-based resolution on the\ngrounds that it'd be unintelligible to users? This has got that\nproblem ten times worse. In fact, I'm not sure *you* understand it.\n\n> 3) There exists at least one function P in the equivalence class such that\n> Q <= P. Consider the set A of all functions in the equivalence class that\n> are >= Q. Since the class is a lattice, A has a (unique) greatest lower\n> bound. That's the function you call.\n\nI don't think so. The lattice property only says that the set A has a\nglb within the equivalence class. AFAICT it doesn't promise that the\nglb will be >= Q, so you can't necessarily use the glb as the function\nto call. The reason why is that Q isn't necessarily part of the set of\ngiven functions, so it's not necessarily one of the lower bounds that\nthe lattice property says there is a greatest of.\n\nIf that wasn't what you had in mind, then you are confusing a lattice\non the set of *all possible* functions with a lattice over the set of\nfunctions that actually are present ... and it looks to me like\ndifferent parts of your argument assume different things.\n\n> An example, say you define foo(int2, float8) and now want to add foo(int8,\n> float4), then the system would reject that because of potential ambiguity\n\nIf your equivalence set contains just the four functions (abbreviating\nin the obvious way)\n\tf(i2, i2)\n\tf(i2, f8)\n\tf(i8, f4)\n\tf(f8, f8)\nthen each two-element subset of this set has a unique glb and lub within\nthe set, which makes it a lattice if I'm reading my dusty old abstract\nalgebra textbook correctly. 
There may be a property that makes things\nwork the way you suggest but it's not the lattice property.\n\nA more general comment is that mathematical purity is only one of the\nconsiderations here, and not necessarily even one of the most important\nconsiderations ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 19:28:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type conversion discussion " }, { "msg_contents": "Tom Lane writes:\n\n> Er ... weren't you just objecting to metric-based resolution on the\n> grounds that it'd be unintelligible to users?\n\nWell, since punting in the face of multiple possibilities isn't really an\nimprovement over that status quo, I kept looking. The key difference\nbetween these (or similarly-minded) proposals and the context in which you\noriginally mentioned the metrics is that mine works at function creation\ntime, not at call resolution time.\n\nA function creation is usually a well-controlled event, so an error\nmessage along the lines of \"if you create this function then you create an\nambiguity with function x because a call y could not be resolved\ndeterministically\" is helpful. \"Cannot resolve call x between y and z\" at\ncall time is annoying (get the programmer on the phone and ask him to clean\nup his mess). Part of my proposal was a method for statically determining\nambiguities before it's too late.\n\nSecondly, yes, lattices and related business are more complicated than say\na breadth-first search. But that need only be internally. Externally, it\ntends to give intuitive behaviour because there is never an ambiguity,\nwhereas a BFS would presumably rely on the order of the arguments or other\nsuch incidental things.\n\n> > 3) There exists at least one function P in the equivalence class such that\n> > Q <= P. Consider the set A of all functions in the equivalence class that\n> > are >= Q. Since the class is a lattice, A has a (unique) greatest lower\n> > bound. That's the function you call.\n> \n> I don't think so. The lattice property only says that the set A has a\n> glb within the equivalence class. AFAICT it doesn't promise that the\n> glb will be >= Q, so you can't necessarily use the glb as the function\n> to call.\n\nSince all functions in A are >=Q by definition, Q is at least _a_ lower\nbound on A. The glb(A) is also a lower bound on A, and since it's the\ngreatest it must also be >=Q.\n\nThe case where glb(A)=Q is when the call Q matches one existing signature\nexactly; in that case you call that function. Otherwise the glb represents\nthe \"next function up\", and if there is an algorithm to find the glb of a\nposet then there is also an algorithm to resolve function calls.\n\n\n> A more general comment is that mathematical purity is only one of the\n> considerations here, and not necessarily even one of the most important\n> considerations ;-)\n\nWell, at least it was fun for me to work it out. 
:-)\n\nNo, seriously, what are the considerations?\n\n* ease of implementation\n* efficiency\n* predictability\n* intuitiveness\n* verifiability\n* scalability\n* simplicity\n\nI think I got at least half of that covered, with no contenders yet at the\nsurface.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 19 May 2000 05:43:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type conversion discussion " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> I don't think so. The lattice property only says that the set A has a\n>> glb within the equivalence class. AFAICT it doesn't promise that the\n>> glb will be >= Q, so you can't necessarily use the glb as the function\n>> to call.\n\n> Since all functions in A are >=Q by definition, Q is at least _a_ lower\n> bound on A. The glb(A) is also a lower bound on A, and since it's the\n> greatest it must also be >=Q.\n\nNo, you're not catching my point. glb(A) is the greatest lower bound\n*within the set of available functions*. Q, the requested call\nsignature, is *not* in that set (if it were then we'd not have any\nambiguity to resolve, because there's an exact match). The fact that\nthe set of available functions forms a lattice gives you no guarantee\nwhatever that glb(A) >= Q, because Q is not constrained by the lattice\nproperty.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 23:46:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type conversion discussion " }, { "msg_contents": "One thing that has gotten lost here is whether there is any market at all\nfor putting in some line of defence against (a tbd degree of) ambiguity at\nfunction creation time to reduce the possible problems (implementation and\nuser-side) at call time. What do you think?\n\nTom Lane writes:\n\n> glb(A) is the greatest lower bound *within the set of available\n> functions*.\n\nCorrect.\n\n> Q, the requested call signature, is *not* in that set\n\nCorrect.\n\n> The fact that the set of available functions forms a lattice gives you\n> no guarantee whatever that glb(A) >= Q, because Q is not constrained\n> by the lattice property.\n\nI know. I don't use the lattice property to deduce the fact that\nglb(A)>=Q. I use the lattice property to derive the existence of glb(A).\nThe result glb(A)>=Q comes from\n\n1. Q is a lower bound on A (by definition of A)\n2. glb(A) is a lower bound on A (by definition of glb)\n3. glb(A)>=Q (by definition of \"greatest\")\n\nRecall that A was defined as the set of functions >=Q in Q's equivalence\nclass, and was guaranteed to be non-empty by treating the other cases\nseparately.\n\nI think it works. :) In all but the most complicated cases this really\ndecays to the obvious behaviour, but on the other hand it scales\ninfinitely.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sat, 20 May 2000 15:36:45 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type conversion discussion " } ]
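The gap Tom describes can be made concrete on his own four-function set, a
worked example assuming the usual promotions i2 < i8 < f8 and i2 < f4 < f8,
with i8 and f4 not comparable:

    Q = f(i2, f4)
    A = { P : P >= Q } = { f(i2, f8), f(i8, f4), f(f8, f8) }

The only available function below every member of A is f(i2, i2), so within
the set of available functions glb(A) = f(i2, i2) -- yet f(i2, i2) >= Q fails,
since f4 does not promote down to i2. Over the lattice of *all possible*
signatures, the glb of A would be (i2, f4), i.e. Q itself, which is not an
available function. Step 3 of the proof above holds in the second lattice but
not in the first, which is exactly the distinction being argued.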
[ { "msg_contents": "> You stated that the new storage manager will do over-writing, and that\n> the MVCC-needed tuples will be kept somewhere else and removed when not\n> needed. \n> \n> It is possible to use SDB, and keep the MVCC-needed tuples somewhere\n> else, also in SDB, so we don't have to add MVCC into the SDB existing\n> code, we just need to use SDB to implement MVCC.\n\nPossible, in theory.\n\n> The issue was that SDB does two-phase locking, and I was asking if MVCC\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nDue to this fact seems we'll have to change SDB anyway. With MVCC per-tuple\nlocking is not needed. Short-term per-buffer _latches_ are used to prevent\nconcurrent changes in a buffer, no locks made via lock manager.\nI'm not sure does SDB API allow _any_ access to modified tuples or not.\nI would rather assume that it doesn't.\n\n> could be layered on top of SDB, rather than being added into SDB.\n\nAs I said - possible, in theory, - and also not good thing to do, in theory.\nMVCC and 2PL are quite different approaches to problem of concurrency\ncontrol. So, how good is layering one approach over another, who knows?\n\nVadim\n", "msg_date": "Tue, 16 May 2000 11:51:19 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Berkeley DB license" }, { "msg_contents": "At 11:51 AM 5/16/00 -0700, Mikheev, Vadim wrote:\n\n> > The issue was that SDB does two-phase locking, and I was asking if MVCC\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> Due to this fact seems we'll have to change SDB anyway. With MVCC per-tuple\n> locking is not needed. Short-term per-buffer _latches_ are used to prevent\n> concurrent changes in a buffer, no locks made via lock manager.\n\nThe locking, logging, and buffer management interfaces are all available\nindependently. Certainly it would be possible to modify the current\nimplementation to do System R-style shadow paging, or some other technique,\nbut to continue to use the current log manager to roll back or reapply\nchanges as necessary.\n\nIn that case, you'd take out the current 2PL lock acquisition calls during\na transaction, and add code to create shadow pages for updates.\n\nCertainly the interfaces are there. It'd be interesting to us if users\ncould select 2PL or MV for concurrency control on initialization, and\nit appears on the face of it that that should be possible.\n\n> I'm not sure does SDB API allow _any_ access to modified tuples or not.\n> I would rather assume that it doesn't.\n\nAt present, Berkeley DB provides only 2PL. We don't permit dirty reads,\nbut that's an implementation detail that would be fairly simple to change.\nBerkeley DB does page-level, not record-level, locking. There are short-\nterm mutex locks that protect shared structures.\n\nMy bet is that it would be better to use the lock manager, rather than\n(say) spinlocks, to protect pages during updates; it's possible during\nmulti-page updates, like btree page splits. Since the updating thread\nmay be descheduled during relatively long operations, you'd rather not\nbe holding a spinlock. And you can acquire and release locks via lock\nmanager calls without trouble.\n\n> MVCC and 2PL are quite different approaches to problem of concurrency\n> control.\n\nI'm joining the discussion quite late. Can you summarize for me the\nthinking behind adopting MVCC in place of 2PL? 
Have you got performance\nnumbers for the two?\n\n\t\t\t\t\tmike\n\n", "msg_date": "Tue, 16 May 2000 14:14:13 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Berkeley DB license" } ]
[ { "msg_contents": " From the CREATE TRIGGER, it seems to say that only C functions are\nsupported by triggers. I see in the plpgsql manuals that it supports\ntriggers too.\n\nWhat function languages do triggers support? C, plpgsql, pltcl, SQL?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 May 2000 00:26:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Trigger function languages" }, { "msg_contents": "On Wed, 17 May 2000, Bruce Momjian wrote:\n\nAny languages you can write stored procs in.\n\n> >From the CREATE TRIGGER, it seems to say that only C functions are\n> supported by triggers. I see in the plpgsql manuals that it supports\n> triggers too.\n> \n> What function languages do triggers support? C, plpgsql, pltcl, SQL?\n> \n> \n\n-- \n Alex G. Perel -=- AP5081\[email protected] -=- [email protected]\n play -=- work \n\t \nDisturbed Networks - Powered exclusively by FreeBSD\n== The Power to Serve -=- http://www.freebsd.org/ \n\n", "msg_date": "Thu, 18 May 2000 15:07:20 -0400 (EDT)", "msg_from": "Alex Perel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger function languages" } ]
[ { "msg_contents": "Hi.\n\nI've installed the v7.0 postgresql RPMS from ftp.postgresql.org. I configured\nmy syslog to catch the local0 facility and log it to /var/log/pgsql with this\nentry in /etc/syslog.conf:\n\nlocal0.* /var/log/pgsql\n\n\nThis catches just about everything. However, if I fire up \"psql template1\" and\nissue the command \"\\d\", the following is broadcast to all tty's on the machine:\n\nMessage from syslogd@galaxy at Wed May 17 09:20:12 2000 ...\ngalaxy \n\nMy guess is that something is not logging to the correct facility. Has anyone\nelse seen this / found a work around for this?\n\nmy pg_options file contains:\nverbose=1\nquery=4\nsyslog=2\n\n\nMike\n", "msg_date": "Wed, 17 May 2000 09:26:05 -0500", "msg_from": "Michael Schout <[email protected]>", "msg_from_op": true, "msg_subject": "7.0 RPMS and syslog problem." }, { "msg_contents": "Upon further investigation, I found that when the hostname is broadcast to everyone, the following also appears in /var/log/messages:\n\nMay 17 09:28:11 galaxy \nMay 17 09:28:11 galaxy syslogd: Cannot glue message parts together\nMay 17 09:28:11 galaxy 135>May 17 09:28:11 postgres[18087]: query: SELECT c.relname as \"Name\", 'table'::text as \"Type\", u.usename as \"Owner\" FROM pg_class c, pg_user u WHERE c.relowner = u.usesysid AND c.relkind = 'r' AND not exists (select 1 from pg_views where viewname = c.relname) AND c.relname !~ '^pg_' UNION SELECT c.relname as \"Name\", 'table'::text as \"Type\", NULL as \"Owner\" FROM pg_class c WHERE c.relkind = 'r' AND not exists (select 1 from pg_views where viewname = c.relname) AND not exists (select 1 from pg_user where usesysid = c.relowner) AND c.relname !~ '^pg_' UNION SELECT c.relname as \"Name\", 'view'::text as \"Type\", u.usename as \"Owner\" FROM pg_class c, pg_user u WHERE c.relowner = u.usesysid AND c.relkind = 'r' AND exists (select 1 from pg_views where viewname = c.relname) AND c.relname !~ '^pg_' UNION SELECT c.relname as \"Name\", 'view'::text as \"Type\", NULL as \"Owner\" FROM pg_class c WHERE c.relkind = 'r' AND exists (select 1 from pg_views whe!\nre viewname = c.relname) AND not exists (select 1 fro\nMay 17 09:28:11 galaxy m pg_user where usesysid = c.relowner) AND c.relname !~ '^pg_' UNION SELECT c.relname as \"Name\", (CASE WHEN relkind = 'S' THEN 'sequence'::text ELSE 'index'::text END) as \"Type\", u.usename as \"Owner\" FROM pg_class c, pg_user u WHERE c.relowner = u.usesysid AND relkind in ('S') AND c.relname !~ '^pg_' UNION SELECT c.relname as \"Name\", (CASE WHEN relkind = 'S' THEN 'sequence'::text ELSE 'index'::text END) as \"Type\", NULL as \"Owner\" FROM pg_class c WHERE not exists (select 1 from pg_user where usesysid = c.relowner) AND relkind in ('S') AND c.relname !~ '^pg_' ORDER BY \"Name\"\n\nSo the problem seems to stem from the \"Cannot glue message parts together\"\n\nThe hostname is being broadcast to with the level of \"emerg\" because if I\nremove:\n\n*.emerg *\n\nfrom my syslog.conf file, I dont see the console message getting broadcast\nanymore.\n\nIf anyone else has seen this or has any ideas, please let me know :).\n\nMike\n", "msg_date": "Wed, 17 May 2000 09:44:53 -0500", "msg_from": "Michael Schout <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0 RPMS and syslog problem. 
(more)" }, { "msg_contents": "Michael Schout <[email protected]> writes:\n> Upon further investigation, I found that when the hostname is broadcast to everyone, the following also appears in /var/log/messages:\n> May 17 09:28:11 galaxy \n> May 17 09:28:11 galaxy syslogd: Cannot glue message parts together\n\nHmm. We were just discussing this a few weeks ago, when someone\nsuggested making the syslog option be the default and I wanted to\nknow if it was really robust enough for that. Seems it's not.\nOn at least some platforms, syslog can't cope with log messages\nexceeding a few hundred characters. Postgres 7.0 is quite capable\nof putting out log messages in the megabyte range, if you have debug\nlogging turned on --- just shove a sufficiently long query at it.\n\nSo I'm afraid the answer is that syslog and verbose logging won't\nplay together, at least not on your platform. Sorry.\n\nWe really need a better logging answer...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 11:59:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 RPMS and syslog problem. (more) " }, { "msg_contents": "Tom Lane wrote:\n \n> Michael Schout <[email protected]> writes:\n> > Upon further investigation, I found that when the hostname is broadcast to everyone, the following also appears in /var/log/messages:\n> > May 17 09:28:11 galaxy\n> > May 17 09:28:11 galaxy syslogd: Cannot glue message parts together\n \n> Hmm. We were just discussing this a few weeks ago, when someone\n> suggested making the syslog option be the default and I wanted to\n> know if it was really robust enough for that. Seems it's not.\n\n> So I'm afraid the answer is that syslog and verbose logging won't\n> play together, at least not on your platform. Sorry.\n \n> We really need a better logging answer...\n\nSplitting the logging message out to syslog in tprintf() (or its\nreplacement) would work, if we knew the syslog gluing buffer maxlen. A\n'packetized' logging system of this sort might work....\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 17 May 2000 12:55:59 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 RPMS and syslog problem. (more)" }, { "msg_contents": "On Wed, May 17, 2000 at 11:59:15AM -0400, Tom Lane wrote:\n> Michael Schout <[email protected]> writes:\n> \n> Hmm. We were just discussing this a few weeks ago, when someone\n> suggested making the syslog option be the default and I wanted to\n> know if it was really robust enough for that. Seems it's not.\n\nPerhaps the maintainer of the binary RPMS on ftp.postgresql.org\ncould rebuild with syslog logging turned off by default?\n\nMike\n", "msg_date": "Wed, 17 May 2000 17:52:14 -0500", "msg_from": "Michael Schout <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.0 RPMS and syslog problem. (more)" }, { "msg_contents": "Michael Schout wrote:\n> On Wed, May 17, 2000 at 11:59:15AM -0400, Tom Lane wrote:\n> > Hmm. We were just discussing this a few weeks ago, when someone\n> > suggested making the syslog option be the default and I wanted to\n> > know if it was really robust enough for that. Seems it's not.\n \n> Perhaps the maintainer of the binary RPMS on ftp.postgresql.org\n> could rebuild with syslog logging turned off by default?\n\nSet the pg_options -- ie, set syslog to 0, or set query to 0, if you\ndon't need query logging. There's no need to recompile, as the runtime\noption sets things properly. 
I would prefer to not disable syslog when\nit is working fine for those who have shorter queries. I will, however,\nchange the default pg_options with syslog set to 0 -- same effect, but\nit still allows those with less difficult query-length requirements to\nuse the feature -- with the caveat that too long a query with query\nlogging on will crash out syslog.\n\nSo, I have two notices to add to the ramifordistat page, and to the\nREADME.rpm..... I won't build a -2 release as yet, until we get the\nactual query length of overflow (volunteers?) so that it can be\ndocumented.\n\nIf you want to completely disable syslog on your own RPMS, feel free to\ndownload the src.rpm, edit rpm-pgsql-7.0.patch to remove the USE_SYSLOG\ndefine, and rebuild with rpm -ba. Then install the resulting RPMs. If\nyou wouldn't mind, change the release number so that no one gets\nconfused.\n\nHTH\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 17 May 2000 23:37:10 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.0 RPMS and syslog problem. (more)" } ]
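Concretely, the no-recompile workaround Lamar describes is a one-line change
to the pg_options shown at the top of the thread -- either turn off syslog
routing, or keep syslog and turn off query logging, e.g.:

    # pg_options: keep verbose query logging, but send it to the
    # postmaster's stderr/log file instead of syslog
    verbose=1
    query=4
    syslog=0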
[ { "msg_contents": "I can't wait to apply this...\n\n> \n> I'm resubmitting this patch from a while ago, now that 7.0 is out. If\n> you cast your minds back, this patch allows update and delete to work on\n> inheritance hierarchies just like it now works on select. It also uses\n> the Informix/Illustra model for subclasses - i.e. \"ONLY\", as was\n> discussed at length before.\n> \n> Please point out anything I've screwed up so I can post a final version.\n> In particular I forgot where you change the initdb db version thingy,\n> but I don't want to do that anyway till everything else is correct.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 May 2000 12:01:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Chris writes:\n\n> I'm resubmitting this patch from a while ago, now that 7.0 is out.\n\nI don't recall that discussion ever ending with a definite design\ndocument. Could you at least point out which of the many proposals this\nrefers to? What I do remember in fact is that the core decided that the\nvarious proposals need to be checked against SQL3 before anything can\nhappen and as far as I can see you definitely didn't do that.\n\n> If you cast your minds back, this patch allows update and delete to\n> work on inheritance hierarchies just like it now works on select.\n\nI don't think that's a good idea. If I have an inheritance hierarchy A < B\n< C and I update a row in A \"only\" then I break the hierarchy. (I'm also\nwondering how you would do that, since you would have to make a copy of\nthe row for A and then keep the old copies around for B and C.) SQL3\nviolation right there. That also goes for the various ALTER TABLE [ONLY]\nsyntax additions. If I add a row to A only then B is no longer a subtable\nof A. One thing I see you didn't change is the CREATE TABLE syntax,\nalthough that could posibly have used it.\n\n> It also uses the Informix/Illustra model for subclasses - i.e. \"ONLY\",\n> as was discussed at length before.\n\nSELECT ONLY is cool, that's the SQL3 way.\n\n\nAll in all I think it's great that you're tackling this but this patch\nlooks very suspect to me. Rolling your own inheritance model when there's\ndecades of scientific research out there that some presumably smart guys\nwrote down in a (now official) standards document doesn't seem like a wise\nthing to do.\n\n\nPS: Could you elaborate on that FAQ_DEV change?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 18 May 2000 00:04:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "I'm resubmitting this patch from a while ago, now that 7.0 is out. If\nyou cast your minds back, this patch allows update and delete to work on\ninheritance hierarchies just like it now works on select. It also uses\nthe Informix/Illustra model for subclasses - i.e. 
\"ONLY\", as was\ndiscussed at length before.\n\nPlease point out anything I've screwed up so I can post a final version.\nIn particular I forgot where you change the initdb db version thingy,\nbut I don't want to do that anyway till everything else is correct.", "msg_date": "Thu, 18 May 2000 11:12:26 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "OO Patch" } ]
[ { "msg_contents": "Just been playing with SQL Server 7's DTS (Data Transform Service) and\nthought, \"Could SQL load data into Postgres?\".\n\nThe main idea (ok excuse as I didn't fancy doing much work this\nafternoon), was publishing data on our Intranet (which uses Postgres).\n\nAnyhow, loaded the ODBC driver onto the SQL server, played and it only\nworks. I'll write up how to do it tonight (as there are a few gotcha's),\nbut it's one for the docs under \"How do I migrate data from SQL7 to\nPostgreSQL?\".\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n", "msg_date": "Wed, 17 May 2000 17:24:16 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "MSSQL7 & PostgreSQL 7.0" }, { "msg_contents": "Yes, major issue.\n\n> Just been playing with SQL Server 7's DTS (Data Transform Service) and\n> thought, \"Could SQL load data into Postgres?\".\n> \n> The main idea (ok excuse as I didn't fancy doing much work this\n> afternoon), was publishing data on our Intranet (which uses Postgres).\n> \n> Anyhow, loaded the ODBC driver onto the SQL server, played and it only\n> works. I'll write up how to do it tonight (as there are a few gotcha's),\n> but it's one for the docs under \"How do I migrate data from SQL7 to\n> PostgreSQL?\".\n> \n> Peter\n> \n> -- \n> Peter Mount\n> Enterprise Support\n> Maidstone Borough Council\n> Any views stated are my own, and not those of Maidstone Borough Council.\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 May 2000 12:45:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MSSQL7 & PostgreSQL 7.0" } ]
[ { "msg_contents": "\nWow, I thought the last guy that brought this up might be just unlucky\n(slower hardware then I've been running, etc) ... but, so far, my vacuum\nhas been running for ~2.5hrs and *looks* stuck as far as 'vacuum verbose'\nshows, but files keep getting updated ...\n\n... I don't know if this gives you any more information then we've had in\nthe past, or if this has been pretty much talked out, but figured one more\nreport, that is still running live, can't hurt ...\n\n Tom, its on the machine you have access to ... if there is anything you\nmight want to look at that might help, please feel free ... its the\nudmsearch database, and from what I can tell, its PID 44718 ...\n\nVACUUM VERBOSE has been hung at:\n\nNOTICE: --Relation url--\nNOTICE: Pages 8607: Changed 0, reaped 8391, Empty 0, New 0; Tup 88821: Vac 65416, Keep/VTL 0/0, Crash 0, UnUsed 98291, MinLen 125, MaxLen 914; Re-using: Free/Avail. Space 21878184/21835520; EndEmpty/Avail. Pages 0/7059. CPU 1.92s/0.00u sec.\nNOTICE: Index url_crc: Pages 1385; Tuples 88821: Deleted 65416. CPU 2.23s/0.00u sec.\nNOTICE: Index url_url: Pages 4199; Tuples 88821: Deleted 38844. CPU 2.25s/0.00u sec.\nNOTICE: Index url_pkey: Pages 1001; Tuples 88821: Deleted 38844. CPU 1.41s/0.00u sec.\nNOTICE: Rel url: Pages: 8607 --> 6091; Tuple(s) moved: 16669. CPU 10.10s/0.00u sec.\nNOTICE: Index url_crc: Pages 1387; Tuples 88821: Deleted 16669. CPU 0.75s/0.00u sec.\nNOTICE: Index url_url: Pages 4202; Tuples 88821: Deleted 16669. CPU 1.49s/0.00u sec.\nNOTICE: Index url_pkey: Pages 1002; Tuples 88821: Deleted 16669. CPU 0.65s/0.00u sec.\nNOTICE: --Relation ndict--\nNOTICE: Pages 65736: Changed 5857, reaped 5913, Empty 0, New 0; Tup 10181746: Vac 993304, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 42, MaxLen 42; Re-using: Free/Avail. Space 43847288/43847288; EndEmpty/Avail. Pages 0/5913. CPU 19.83s/0.00u sec.\nNOTICE: Index n_word: Pages 26102; Tuples 10181746: Deleted 993304. CPU 66.69s/0.00u sec.\nNOTICE: Index n_url: Pages 50739; Tuples 10181746: Deleted 993304. 
CPU 69.34s/0.00u sec.\n\nUptime on the machine is normal/low:\n\npgsql% uptime\n12:27PM up 7 days, 5:55, 3 users, load averages: 3.23, 3.27, 3.33\n\nIts a Dual-PIII 450, 512Meg of RAM, the pgsql databases are on their own\ndedicated drive, seperate from the system drive, and not even swapping:\n\npgsql% pstat -s\nDevice 512-blocks Used Avail Capacity Type\n/dev/rda0s1b 1048320 0 1048320 0% Interleaved\n/dev/rda1s1b 1048320 0 1048320 0% Interleaved\nTotal 2096640 0 2096640 0%\n\nlsof shows (if it matters):\n\npgsql% lsof -p 44718\nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\npostgres 44718 pgsql cwd VDIR 13,131084 2048 1253933 /pgsql/data/base/udmsearch\npostgres 44718 pgsql rtd VDIR 13,131072 512 2 /\npostgres 44718 pgsql txt VREG 13,131084 4669647 103175 /pgsql/bin/postgres\npostgres 44718 pgsql txt VREG 13,131076 76648 212707 /usr/libexec/ld-elf.so.1\npostgres 44718 pgsql txt VREG 13,131076 11128 56672 /usr/lib/libdescrypt.so.2\npostgres 44718 pgsql txt VREG 13,131076 118044 56673 /usr/lib/libm.so.2\npostgres 44718 pgsql txt VREG 13,131076 33476 56680 /usr/lib/libutil.so.3\npostgres 44718 pgsql txt VREG 13,131076 148316 56532 /usr/lib/libreadline.so.4\npostgres 44718 pgsql txt VREG 13,131076 251416 56674 /usr/lib/libncurses.so.5\npostgres 44718 pgsql txt VREG 13,131076 550996 56746 /usr/lib/libc.so.4\npostgres 44718 pgsql 0r VCHR 2,2 0t0 7967 /dev/null\npostgres 44718 pgsql 1w VREG 13,131084 233611833 761881 /pgsql/logs/postmaster.5432.93622\npostgres 44718 pgsql 2w VREG 13,131084 19771345 761882 /pgsql/logs/5432.93622\npostgres 44718 pgsql 3r VREG 13,131084 1752 1253891 /pgsql/data/base/udmsearch/pg_internal.init\npostgres 44718 pgsql 4u VREG 13,131084 3547136 15922 /pgsql/data/pg_log\npostgres 44718 pgsql 5u IPv4 0xd46352e0 0t0 TCP pgsql.tht.net:5432->hub.org:2135 (ESTABLISHED)\npostgres 44718 pgsql 6u VREG 13,131084 8192 15874 /pgsql/data/pg_variable\npostgres 44718 pgsql 7u VREG 13,131084 16384 1253978 /pgsql/data/base/udmsearch/pg_class\npostgres 44718 pgsql 8u VREG 13,131084 16384 1253977 /pgsql/data/base/udmsearch/pg_class_oid_index\npostgres 44718 pgsql 9u VREG 13,131084 57344 1253981 /pgsql/data/base/udmsearch/pg_attribute\npostgres 44718 pgsql 10u VREG 13,131084 32768 1253979 /pgsql/data/base/udmsearch/pg_attribute_relid_attnum_index\npostgres 44718 pgsql 11u VREG 13,131084 8192 1253989 /pgsql/data/base/udmsearch/pg_am\npostgres 44718 pgsql 12u VREG 13,131084 8192 1253949 /pgsql/data/base/udmsearch/pg_rewrite\npostgres 44718 pgsql 13u VREG 13,131084 8192 1253973 /pgsql/data/base/udmsearch/pg_index\npostgres 44718 pgsql 14u VREG 13,131084 8192 1253984 /pgsql/data/base/udmsearch/pg_amproc\npostgres 44718 pgsql 15u VREG 13,131084 16384 1253987 /pgsql/data/base/udmsearch/pg_amop\npostgres 44718 pgsql 16u VREG 13,131084 73728 1253957 /pgsql/data/base/udmsearch/pg_operator\npostgres 44718 pgsql 17u VREG 13,131084 16384 1253972 /pgsql/data/base/udmsearch/pg_index_indexrelid_index\npostgres 44718 pgsql 18u VREG 13,131084 32768 1253956 /pgsql/data/base/udmsearch/pg_operator_oid_index\npostgres 44718 pgsql 19u VREG 13,131084 16384 1253972 /pgsql/data/base/udmsearch/pg_index_indexrelid_index\npostgres 44718 pgsql 20u VREG 13,131084 16384 1253938 /pgsql/data/base/udmsearch/pg_type\npostgres 44718 pgsql 21u VREG 13,131084 16384 1253937 /pgsql/data/base/udmsearch/pg_type_oid_index\npostgres 44718 pgsql 22u VREG 13,131084 8192 1253983 /pgsql/data/base/udmsearch/pg_attrdef\npostgres 44718 pgsql 23u VREG 13,131084 16384 1253976 /pgsql/data/base/udmsearch/pg_class_relname_index\npostgres 
44718 pgsql 24u VREG 13,131084 8192 1253942 /pgsql/data/base/udmsearch/pg_trigger\npostgres 44718 pgsql 25u VREG 13,131084 16384 1253939 /pgsql/data/base/udmsearch/pg_trigger_tgrelid_index\npostgres 44718 pgsql 26u VREG 13,131084 8192 15921 /pgsql/data/pg_shadow\npostgres 44718 pgsql 27u VREG 13,131084 16384 1253982 /pgsql/data/base/udmsearch/pg_attrdef_adrelid_index\npostgres 44718 pgsql 28u VREG 13,131084 538509312 1254004 /pgsql/data/base/udmsearch/ndict\npostgres 44718 pgsql 29u VREG 13,131084 213909504 1254006 /pgsql/data/base/udmsearch/n_word\npostgres 44718 pgsql 30u VREG 13,131084 416432128 1254005 /pgsql/data/base/udmsearch/n_url\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 17 May 2000 13:34:32 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Ouch .. VACUUM *is* killer ..." }, { "msg_contents": "\nWow, it finally came back:\n\nNOTICE: Rel ndict: Pages: 65736 --> 59893; Tuple(s) moved: 993304. CPU 1924.28s/0.00u sec.\n\nStill processing, mind you, but at least I know its not hung up :)\n\n\nOn Wed, 17 May 2000, The Hermit Hacker wrote:\n\n> \n> Wow, I thought the last guy that brought this up might be just unlucky\n> (slower hardware then I've been running, etc) ... but, so far, my vacuum\n> has been running for ~2.5hrs and *looks* stuck as far as 'vacuum verbose'\n> shows, but files keep getting updated ...\n> \n> ... I don't know if this gives you any more information then we've had in\n> the past, or if this has been pretty much talked out, but figured one more\n> report, that is still running live, can't hurt ...\n> \n> Tom, its on the machine you have access to ... if there is anything you\n> might want to look at that might help, please feel free ... its the\n> udmsearch database, and from what I can tell, its PID 44718 ...\n> \n> VACUUM VERBOSE has been hung at:\n> \n> NOTICE: --Relation url--\n> NOTICE: Pages 8607: Changed 0, reaped 8391, Empty 0, New 0; Tup 88821: Vac 65416, Keep/VTL 0/0, Crash 0, UnUsed 98291, MinLen 125, MaxLen 914; Re-using: Free/Avail. Space 21878184/21835520; EndEmpty/Avail. Pages 0/7059. CPU 1.92s/0.00u sec.\n> NOTICE: Index url_crc: Pages 1385; Tuples 88821: Deleted 65416. CPU 2.23s/0.00u sec.\n> NOTICE: Index url_url: Pages 4199; Tuples 88821: Deleted 38844. CPU 2.25s/0.00u sec.\n> NOTICE: Index url_pkey: Pages 1001; Tuples 88821: Deleted 38844. CPU 1.41s/0.00u sec.\n> NOTICE: Rel url: Pages: 8607 --> 6091; Tuple(s) moved: 16669. CPU 10.10s/0.00u sec.\n> NOTICE: Index url_crc: Pages 1387; Tuples 88821: Deleted 16669. CPU 0.75s/0.00u sec.\n> NOTICE: Index url_url: Pages 4202; Tuples 88821: Deleted 16669. CPU 1.49s/0.00u sec.\n> NOTICE: Index url_pkey: Pages 1002; Tuples 88821: Deleted 16669. CPU 0.65s/0.00u sec.\n> NOTICE: --Relation ndict--\n> NOTICE: Pages 65736: Changed 5857, reaped 5913, Empty 0, New 0; Tup 10181746: Vac 993304, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 42, MaxLen 42; Re-using: Free/Avail. Space 43847288/43847288; EndEmpty/Avail. Pages 0/5913. CPU 19.83s/0.00u sec.\n> NOTICE: Index n_word: Pages 26102; Tuples 10181746: Deleted 993304. CPU 66.69s/0.00u sec.\n> NOTICE: Index n_url: Pages 50739; Tuples 10181746: Deleted 993304. 
CPU 69.34s/0.00u sec.\n> \n> Uptime on the machine is normal/low:\n> \n> pgsql% uptime\n> 12:27PM up 7 days, 5:55, 3 users, load averages: 3.23, 3.27, 3.33\n> \n> Its a Dual-PIII 450, 512Meg of RAM, the pgsql databases are on their own\n> dedicated drive, seperate from the system drive, and not even swapping:\n> \n> pgsql% pstat -s\n> Device 512-blocks Used Avail Capacity Type\n> /dev/rda0s1b 1048320 0 1048320 0% Interleaved\n> /dev/rda1s1b 1048320 0 1048320 0% Interleaved\n> Total 2096640 0 2096640 0%\n> \n> lsof shows (if it matters):\n> \n> pgsql% lsof -p 44718\n> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\n> postgres 44718 pgsql cwd VDIR 13,131084 2048 1253933 /pgsql/data/base/udmsearch\n> postgres 44718 pgsql rtd VDIR 13,131072 512 2 /\n> postgres 44718 pgsql txt VREG 13,131084 4669647 103175 /pgsql/bin/postgres\n> postgres 44718 pgsql txt VREG 13,131076 76648 212707 /usr/libexec/ld-elf.so.1\n> postgres 44718 pgsql txt VREG 13,131076 11128 56672 /usr/lib/libdescrypt.so.2\n> postgres 44718 pgsql txt VREG 13,131076 118044 56673 /usr/lib/libm.so.2\n> postgres 44718 pgsql txt VREG 13,131076 33476 56680 /usr/lib/libutil.so.3\n> postgres 44718 pgsql txt VREG 13,131076 148316 56532 /usr/lib/libreadline.so.4\n> postgres 44718 pgsql txt VREG 13,131076 251416 56674 /usr/lib/libncurses.so.5\n> postgres 44718 pgsql txt VREG 13,131076 550996 56746 /usr/lib/libc.so.4\n> postgres 44718 pgsql 0r VCHR 2,2 0t0 7967 /dev/null\n> postgres 44718 pgsql 1w VREG 13,131084 233611833 761881 /pgsql/logs/postmaster.5432.93622\n> postgres 44718 pgsql 2w VREG 13,131084 19771345 761882 /pgsql/logs/5432.93622\n> postgres 44718 pgsql 3r VREG 13,131084 1752 1253891 /pgsql/data/base/udmsearch/pg_internal.init\n> postgres 44718 pgsql 4u VREG 13,131084 3547136 15922 /pgsql/data/pg_log\n> postgres 44718 pgsql 5u IPv4 0xd46352e0 0t0 TCP pgsql.tht.net:5432->hub.org:2135 (ESTABLISHED)\n> postgres 44718 pgsql 6u VREG 13,131084 8192 15874 /pgsql/data/pg_variable\n> postgres 44718 pgsql 7u VREG 13,131084 16384 1253978 /pgsql/data/base/udmsearch/pg_class\n> postgres 44718 pgsql 8u VREG 13,131084 16384 1253977 /pgsql/data/base/udmsearch/pg_class_oid_index\n> postgres 44718 pgsql 9u VREG 13,131084 57344 1253981 /pgsql/data/base/udmsearch/pg_attribute\n> postgres 44718 pgsql 10u VREG 13,131084 32768 1253979 /pgsql/data/base/udmsearch/pg_attribute_relid_attnum_index\n> postgres 44718 pgsql 11u VREG 13,131084 8192 1253989 /pgsql/data/base/udmsearch/pg_am\n> postgres 44718 pgsql 12u VREG 13,131084 8192 1253949 /pgsql/data/base/udmsearch/pg_rewrite\n> postgres 44718 pgsql 13u VREG 13,131084 8192 1253973 /pgsql/data/base/udmsearch/pg_index\n> postgres 44718 pgsql 14u VREG 13,131084 8192 1253984 /pgsql/data/base/udmsearch/pg_amproc\n> postgres 44718 pgsql 15u VREG 13,131084 16384 1253987 /pgsql/data/base/udmsearch/pg_amop\n> postgres 44718 pgsql 16u VREG 13,131084 73728 1253957 /pgsql/data/base/udmsearch/pg_operator\n> postgres 44718 pgsql 17u VREG 13,131084 16384 1253972 /pgsql/data/base/udmsearch/pg_index_indexrelid_index\n> postgres 44718 pgsql 18u VREG 13,131084 32768 1253956 /pgsql/data/base/udmsearch/pg_operator_oid_index\n> postgres 44718 pgsql 19u VREG 13,131084 16384 1253972 /pgsql/data/base/udmsearch/pg_index_indexrelid_index\n> postgres 44718 pgsql 20u VREG 13,131084 16384 1253938 /pgsql/data/base/udmsearch/pg_type\n> postgres 44718 pgsql 21u VREG 13,131084 16384 1253937 /pgsql/data/base/udmsearch/pg_type_oid_index\n> postgres 44718 pgsql 22u VREG 13,131084 8192 1253983 /pgsql/data/base/udmsearch/pg_attrdef\n> postgres 
44718 pgsql 23u VREG 13,131084 16384 1253976 /pgsql/data/base/udmsearch/pg_class_relname_index\n> postgres 44718 pgsql 24u VREG 13,131084 8192 1253942 /pgsql/data/base/udmsearch/pg_trigger\n> postgres 44718 pgsql 25u VREG 13,131084 16384 1253939 /pgsql/data/base/udmsearch/pg_trigger_tgrelid_index\n> postgres 44718 pgsql 26u VREG 13,131084 8192 15921 /pgsql/data/pg_shadow\n> postgres 44718 pgsql 27u VREG 13,131084 16384 1253982 /pgsql/data/base/udmsearch/pg_attrdef_adrelid_index\n> postgres 44718 pgsql 28u VREG 13,131084 538509312 1254004 /pgsql/data/base/udmsearch/ndict\n> postgres 44718 pgsql 29u VREG 13,131084 213909504 1254006 /pgsql/data/base/udmsearch/n_word\n> postgres 44718 pgsql 30u VREG 13,131084 416432128 1254005 /pgsql/data/base/udmsearch/n_url\n> \n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 17 May 2000 13:47:25 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ouch .. VACUUM *is* killer ..." }, { "msg_contents": "I had one start on Monday, it just finished an hour ago.\n\nI had added 33 million rows of varchar(255) (paying with the full text\nindexing) :-)\n\nNow I have to create indexes and all kinds of neat stuff, I wish I had a\nquad Xeon for this!\n\n- Mitch\n\n\"The only real failure is quitting.\"\n\n\n----- Original Message -----\nFrom: The Hermit Hacker <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, May 17, 2000 12:47 PM\nSubject: Re: [HACKERS] Ouch .. VACUUM *is* killer ...\n\n\n>\n> Wow, it finally came back:\n>\n> NOTICE: Rel ndict: Pages: 65736 --> 59893; Tuple(s) moved: 993304. CPU\n1924.28s/0.00u sec.\n>\n> Still processing, mind you, but at least I know its not hung up :)\n>\n>\n> On Wed, 17 May 2000, The Hermit Hacker wrote:\n>\n> >\n> > Wow, I thought the last guy that brought this up might be just unlucky\n> > (slower hardware then I've been running, etc) ... but, so far, my vacuum\n> > has been running for ~2.5hrs and *looks* stuck as far as 'vacuum\nverbose'\n> > shows, but files keep getting updated ...\n> >\n> > ... I don't know if this gives you any more information then we've had\nin\n> > the past, or if this has been pretty much talked out, but figured one\nmore\n> > report, that is still running live, can't hurt ...\n> >\n> > Tom, its on the machine you have access to ... if there is anything you\n> > might want to look at that might help, please feel free ... its the\n> > udmsearch database, and from what I can tell, its PID 44718 ...\n> >\n> > VACUUM VERBOSE has been hung at:\n> >\n> > NOTICE: --Relation url--\n> > NOTICE: Pages 8607: Changed 0, reaped 8391, Empty 0, New 0; Tup 88821:\nVac 65416, Keep/VTL 0/0, Crash 0, UnUsed 98291, MinLen 125, MaxLen 914;\nRe-using: Free/Avail. Space 21878184/21835520; EndEmpty/Avail. Pages 0/7059.\nCPU 1.92s/0.00u sec.\n> > NOTICE: Index url_crc: Pages 1385; Tuples 88821: Deleted 65416. CPU\n2.23s/0.00u sec.\n> > NOTICE: Index url_url: Pages 4199; Tuples 88821: Deleted 38844. CPU\n2.25s/0.00u sec.\n> > NOTICE: Index url_pkey: Pages 1001; Tuples 88821: Deleted 38844. CPU\n1.41s/0.00u sec.\n> > NOTICE: Rel url: Pages: 8607 --> 6091; Tuple(s) moved: 16669. 
CPU\n10.10s/0.00u sec.\n> > NOTICE: Index url_crc: Pages 1387; Tuples 88821: Deleted 16669. CPU\n0.75s/0.00u sec.\n> > NOTICE: Index url_url: Pages 4202; Tuples 88821: Deleted 16669. CPU\n1.49s/0.00u sec.\n> > NOTICE: Index url_pkey: Pages 1002; Tuples 88821: Deleted 16669. CPU\n0.65s/0.00u sec.\n> > NOTICE: --Relation ndict--\n> > NOTICE: Pages 65736: Changed 5857, reaped 5913, Empty 0, New 0; Tup\n10181746: Vac 993304, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 42, MaxLen 42;\nRe-using: Free/Avail. Space 43847288/43847288; EndEmpty/Avail. Pages 0/5913.\nCPU 19.83s/0.00u sec.\n> > NOTICE: Index n_word: Pages 26102; Tuples 10181746: Deleted 993304. CPU\n66.69s/0.00u sec.\n> > NOTICE: Index n_url: Pages 50739; Tuples 10181746: Deleted 993304. CPU\n69.34s/0.00u sec.\n> >\n> > Uptime on the machine is normal/low:\n> >\n> > pgsql% uptime\n> > 12:27PM up 7 days, 5:55, 3 users, load averages: 3.23, 3.27, 3.33\n> >\n> > Its a Dual-PIII 450, 512Meg of RAM, the pgsql databases are on their own\n> > dedicated drive, seperate from the system drive, and not even swapping:\n> >\n> > pgsql% pstat -s\n> > Device 512-blocks Used Avail Capacity Type\n> > /dev/rda0s1b 1048320 0 1048320 0% Interleaved\n> > /dev/rda1s1b 1048320 0 1048320 0% Interleaved\n> > Total 2096640 0 2096640 0%\n> >\n> > lsof shows (if it matters):\n> >\n> > pgsql% lsof -p 44718\n> > COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\n> > postgres 44718 pgsql cwd VDIR 13,131084 2048 1253933\n/pgsql/data/base/udmsearch\n> > postgres 44718 pgsql rtd VDIR 13,131072 512 2 /\n> > postgres 44718 pgsql txt VREG 13,131084 4669647 103175\n/pgsql/bin/postgres\n> > postgres 44718 pgsql txt VREG 13,131076 76648 212707\n/usr/libexec/ld-elf.so.1\n> > postgres 44718 pgsql txt VREG 13,131076 11128 56672\n/usr/lib/libdescrypt.so.2\n> > postgres 44718 pgsql txt VREG 13,131076 118044 56673\n/usr/lib/libm.so.2\n> > postgres 44718 pgsql txt VREG 13,131076 33476 56680\n/usr/lib/libutil.so.3\n> > postgres 44718 pgsql txt VREG 13,131076 148316 56532\n/usr/lib/libreadline.so.4\n> > postgres 44718 pgsql txt VREG 13,131076 251416 56674\n/usr/lib/libncurses.so.5\n> > postgres 44718 pgsql txt VREG 13,131076 550996 56746\n/usr/lib/libc.so.4\n> > postgres 44718 pgsql 0r VCHR 2,2 0t0 7967 /dev/null\n> > postgres 44718 pgsql 1w VREG 13,131084 233611833 761881\n/pgsql/logs/postmaster.5432.93622\n> > postgres 44718 pgsql 2w VREG 13,131084 19771345 761882\n/pgsql/logs/5432.93622\n> > postgres 44718 pgsql 3r VREG 13,131084 1752 1253891\n/pgsql/data/base/udmsearch/pg_internal.init\n> > postgres 44718 pgsql 4u VREG 13,131084 3547136 15922\n/pgsql/data/pg_log\n> > postgres 44718 pgsql 5u IPv4 0xd46352e0 0t0 TCP\npgsql.tht.net:5432->hub.org:2135 (ESTABLISHED)\n> > postgres 44718 pgsql 6u VREG 13,131084 8192 15874\n/pgsql/data/pg_variable\n> > postgres 44718 pgsql 7u VREG 13,131084 16384 1253978\n/pgsql/data/base/udmsearch/pg_class\n> > postgres 44718 pgsql 8u VREG 13,131084 16384 1253977\n/pgsql/data/base/udmsearch/pg_class_oid_index\n> > postgres 44718 pgsql 9u VREG 13,131084 57344 1253981\n/pgsql/data/base/udmsearch/pg_attribute\n> > postgres 44718 pgsql 10u VREG 13,131084 32768 1253979\n/pgsql/data/base/udmsearch/pg_attribute_relid_attnum_index\n> > postgres 44718 pgsql 11u VREG 13,131084 8192 1253989\n/pgsql/data/base/udmsearch/pg_am\n> > postgres 44718 pgsql 12u VREG 13,131084 8192 1253949\n/pgsql/data/base/udmsearch/pg_rewrite\n> > postgres 44718 pgsql 13u VREG 13,131084 8192 1253973\n/pgsql/data/base/udmsearch/pg_index\n> > postgres 44718 pgsql 14u VREG 13,131084 8192 
1253984\n/pgsql/data/base/udmsearch/pg_amproc\n> > postgres 44718 pgsql 15u VREG 13,131084 16384 1253987\n/pgsql/data/base/udmsearch/pg_amop\n> > postgres 44718 pgsql 16u VREG 13,131084 73728 1253957\n/pgsql/data/base/udmsearch/pg_operator\n> > postgres 44718 pgsql 17u VREG 13,131084 16384 1253972\n/pgsql/data/base/udmsearch/pg_index_indexrelid_index\n> > postgres 44718 pgsql 18u VREG 13,131084 32768 1253956\n/pgsql/data/base/udmsearch/pg_operator_oid_index\n> > postgres 44718 pgsql 19u VREG 13,131084 16384 1253972\n/pgsql/data/base/udmsearch/pg_index_indexrelid_index\n> > postgres 44718 pgsql 20u VREG 13,131084 16384 1253938\n/pgsql/data/base/udmsearch/pg_type\n> > postgres 44718 pgsql 21u VREG 13,131084 16384 1253937\n/pgsql/data/base/udmsearch/pg_type_oid_index\n> > postgres 44718 pgsql 22u VREG 13,131084 8192 1253983\n/pgsql/data/base/udmsearch/pg_attrdef\n> > postgres 44718 pgsql 23u VREG 13,131084 16384 1253976\n/pgsql/data/base/udmsearch/pg_class_relname_index\n> > postgres 44718 pgsql 24u VREG 13,131084 8192 1253942\n/pgsql/data/base/udmsearch/pg_trigger\n> > postgres 44718 pgsql 25u VREG 13,131084 16384 1253939\n/pgsql/data/base/udmsearch/pg_trigger_tgrelid_index\n> > postgres 44718 pgsql 26u VREG 13,131084 8192 15921\n/pgsql/data/pg_shadow\n> > postgres 44718 pgsql 27u VREG 13,131084 16384 1253982\n/pgsql/data/base/udmsearch/pg_attrdef_adrelid_index\n> > postgres 44718 pgsql 28u VREG 13,131084 538509312 1254004\n/pgsql/data/base/udmsearch/ndict\n> > postgres 44718 pgsql 29u VREG 13,131084 213909504 1254006\n/pgsql/data/base/udmsearch/n_word\n> > postgres 44718 pgsql 30u VREG 13,131084 416432128 1254005\n/pgsql/data/base/udmsearch/n_url\n> >\n> >\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org\n> >\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org\n>\n\n", "msg_date": "Wed, 17 May 2000 12:57:19 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ouch .. VACUUM *is* killer ..." } ]
[ { "msg_contents": "Now that I know that triggers can call non-C functions, I would like to\nknow how non-C functions access information about the triggered row.\n\nI see plpgsql has an interface to the trigger information. Do SQL\nfunctions us OLD/NEW to reference that information, like they do in\nrules?\n\nThe trigger programmers guide page says PROCEDURE is a C function, so I\nwill need to correct that:\n\n\tProcedure\n\t\n\t The procedure name is the C function called. \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 May 2000 13:06:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Using triggers with non-C functions" }, { "msg_contents": "On Wed, 17 May 2000, Bruce Momjian wrote:\n\nWell, I know that with PL/pgSQL you use NEW and OLD, but I'm not\nsure how this works with other languages.\n\n> Now that I know that triggers can call non-C functions, I would like to\n> know how non-C functions access information about the triggered row.\n> \n> I see plpgsql has an interface to the trigger information. Do SQL\n> functions us OLD/NEW to reference that information, like they do in\n> rules?\n> \n> The trigger programmers guide page says PROCEDURE is a C function, so I\n> will need to correct that:\n> \n> \tProcedure\n> \t\n> \t The procedure name is the C function called. \n> \n> \n> \n\n-- \n Alex G. Perel -=- AP5081\[email protected] -=- [email protected]\n play -=- work \n\t \nDisturbed Networks - Powered exclusively by FreeBSD\n== The Power to Serve -=- http://www.freebsd.org/ \n\n", "msg_date": "Thu, 18 May 2000 15:08:28 -0400 (EDT)", "msg_from": "Alex Perel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using triggers with non-C functions" } ]
[ { "msg_contents": "> 1. MVCC semantics. If we can't keep MVCC then the deal's dead in the\n> water, no question. Vadim's by far the best man to look at \n> this issue; Vadim, do you have time to think about it soon?\n\nI'll comment just after weekends...\nBut Mike Mascari raised up very interest issue about am I *allowed* to\nview the code at all. If we'll decide to continue with our own\nsmgr+WAL I'll not be able to free my mind from what I've seen\nin SDB code. So..?\nOk, I'll read API documentation first...\n\nVadim\n\n", "msg_date": "Wed, 17 May 2000 10:32:43 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Berkeley DB license " }, { "msg_contents": "At 10:32 AM 5/17/00 -0700, Mikheev, Vadim wrote:\n\n> But Mike Mascari raised up very interest issue about am I *allowed* to\n> view the code at all. If we'll decide to continue with our own\n> smgr+WAL I'll not be able to free my mind from what I've seen\n> in SDB code. So..?\n\nBerkeley DB implements a write-ahead log and two-phase locking using\nwell-known techniques from the literature (see, eg, Transaction\nProcessing by Gray et al). Hell, I've seen the server code for\nTeradata, Illustra, and Informix, and have managed to make a living\nin the industry for fifteen years anyway.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Wed, 17 May 2000 11:21:41 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Berkeley DB license " } ]
[ { "msg_contents": "I believe there is a bug, or at least a not very nice feature in initdb.\n\nIf initdb does not succeed, it attempts to \"exit_nicely\". By default\nthis involved deleting the $PGDATA directory!! So if you have put other\nthings in you $PGDATA directory or (as in my case) you attempt to create\na database in an existing directory that contains lots of stuff that you\nreally, really wanted to keep, and initdb fails, initdb very \"nicely\"\ndeletes them for you if it fails.\n\nIt seems like it would be a whole lot \"nicer\" if initdb only deleted the\nfiles that it attempted to create OR if the default was not to delete\nanything.\n\nIn any case, I have changed the default on my installation to NEVER\n\"clean up\" for me.\n\nJeff Collins\n\n\n\n\n", "msg_date": "Wed, 17 May 2000 15:19:47 -0400", "msg_from": "Jeffery Collins <[email protected]>", "msg_from_op": true, "msg_subject": "initdb and \"exit_nicely\"..." }, { "msg_contents": "Jeffery Collins writes:\n\n> It seems like it would be a whole lot \"nicer\" if initdb only deleted\n> the files that it attempted to create OR if the default was not to\n> delete anything.\n\nOkay, I could go for the former. What do others think?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 18 May 2000 00:11:48 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb and \"exit_nicely\"..." }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> It seems like it would be a whole lot \"nicer\" if initdb only deleted\n>> the files that it attempted to create OR if the default was not to\n>> delete anything.\n\n> Okay, I could go for the former. What do others think?\n\nIt'd be a bit of a pain but I can see the problem. You certainly\nshouldn't delete the $PGDATA directory itself unless you created it,\nIMHO. Doing more than that would imply that initdb's cleanup\nfunction would have to know the list of files/subdirectories that\nnormally get created during initdb, so as to remove only those and\nnot anything else that might be lying around in the directory.\nThat'd be a considerable maintenance headache since most of said files\nare not created directly by the script...\n\nBTW, what about collisions? Suppose the reason the initdb fails\nis that there's already a (bogus) pg_log, or some such --- is\nthe script supposed to know not to delete it? That strikes me\nas way too difficult, since now the script has to know not only\nwhich files get created but exactly when.\n\nA slightly more reasonable example is where the admin has already\ninserted his own pg_hba.conf in the directory; would be nice if initdb\ndidn't overwrite it (nor delete it on failure), but I'm not sure it's\nworth the trouble.\n\nSomething that would be a lot simpler is to refuse to run at all\nif the $PGDATA dir exists and is nonempty ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 20:04:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb and \"exit_nicely\"... " }, { "msg_contents": "> A slightly more reasonable example is where the admin has already\n> inserted his own pg_hba.conf in the directory; would be nice if initdb\n> didn't overwrite it (nor delete it on failure), but I'm not sure it's\n> worth the trouble.\n\nI am inclined to leave it as is too. 
I can imagine many bug reports if\nthat directory is not flushed on failure.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 May 2000 22:28:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb and \"exit_nicely\"..." }, { "msg_contents": "Tom Lane wrote:\n\n> Peter Eisentraut <[email protected]> writes:\n> >> It seems like it would be a whole lot \"nicer\" if initdb only deleted\n> >> the files that it attempted to create OR if the default was not to\n> >> delete anything.\n>\n> > Okay, I could go for the former. What do others think?\n>\n> It'd be a bit of a pain but I can see the problem.  You certainly\n> shouldn't delete the $PGDATA directory itself unless you created it,\n> IMHO.  Doing more than that would imply that initdb's cleanup\n> function would have to know the list of files/subdirectories that\n> normally get created during initdb, so as to remove only those and\n> not anything else that might be lying around in the directory.\n> That'd be a considerable maintenance headache since most of said files\n> are not created directly by the script...\n>\n\nI agree this would be a pain.  After thinking about it, I think the best\napproach is to do one or both of the following:\n\n    - Change the default to not delete $PGDATA.\n    - Prompt the user for confirmation before deleting $PGDATA.\n\nAnything else sounds like a pain in the *ss and very error prone.\n\nJeff Collins\n\n\n", "msg_date": "Thu, 18 May 2000 08:39:46 -0400", "msg_from": "Jeffery Collins <[email protected]>", "msg_from_op": true, "msg_subject": "Re: initdb and \"exit_nicely\"..." }, { "msg_contents": "I wrote:\n\n> Jeffery Collins writes:\n> > It seems like it would be a whole lot \"nicer\" if initdb only deleted\n> > the files that it attempted to create OR if the default was not to\n> > delete anything.\n> \n> Okay, I could go for the former. What do others think?\n\nHere's a patch that might do what you need but I'm somewhat suspicious of\nthis situation. Recycling an old PGDATA directory is not supported, in\nfact it's explicitly prevented with certain checks in initdb. So\napparently you precreated the data directory and put \"interesting\nthings\" of your own in it, which is not necessarily something that's\nencouraged either.\n\nIt does make sense to leave the PGDATA directory and only clean out the\n_contents_ on failure, that is, use `rm -rf $PGDATA/*' instead of `rm -rf\n$PGDATA' but I doubt that that can be done portably.\n\n-- \nPeter Eisentraut                  Sernanders väg 10:115\[email protected]                   75262 Uppsala\nhttp://yi.org/peter-e/            Sweden", "msg_date": "Fri, 19 May 2000 01:50:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb and \"exit_nicely\"..." }, { "msg_contents": "Tom Lane writes:\n\n> Something that would be a lot simpler is to refuse to run at all\n> if the $PGDATA dir exists and is nonempty ;-)\n\nYes, why not? Initialize first, customize second. I thought initdb\nprovides a default pg_hba.conf file etc. 
so there's little to no reason\nto do it the other way around.\n\n-- \nPeter Eisentraut                  Sernanders väg 10:115\[email protected]                   75262 Uppsala\nhttp://yi.org/peter-e/            Sweden\n\n", "msg_date": "Fri, 19 May 2000 01:58:04 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb and \"exit_nicely\"... " } ]
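Tom's closing suggestion (refuse to run if $PGDATA exists and is nonempty) is the cheapest to get right, since the failure path then never has to guess what is safe to delete. initdb itself is a shell script, but for illustration here is that test sketched in C; the function name is mine.

    #include <dirent.h>
    #include <string.h>

    /*
     * Return 1 if path is an empty directory, 0 if it contains any
     * real entry, -1 if it cannot be opened (e.g. does not exist).
     */
    static int
    dir_is_empty(const char *path)
    {
        DIR           *dir = opendir(path);
        struct dirent *de;

        if (dir == NULL)
            return -1;
        while ((de = readdir(dir)) != NULL)
        {
            if (strcmp(de->d_name, ".") != 0 && strcmp(de->d_name, "..") != 0)
            {
                closedir(dir);
                return 0;       /* found something: refuse to initdb here */
            }
        }
        closedir(dir);
        return 1;
    }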
[ { "msg_contents": "i know tom lane is probably sick of me, but i'm trying to figure out how\nthe cost estimators work for an index scan. i'm logging a lot of the\ncalculations that go into the cost estimate for some sample queries to\nsee what factors are most important in a cost estimate. it seems to me\nthat in the queries that i'm running, a vast majority of the cost comes\nfrom reading the tuples from the relation. i think the cost of\nnonsequential access seems reasonable (it's about 2), but the\ncalculation of pages fetched from the relation seems high. i'm not sure\nwhat the actual should be, but some queries have it at 2-4x the number\nof pages in the relation, which seemed high, so i started looking at\nthat. i don't understand why the function that is used would be a good\nmodel of what is actually happening. here's the comment & the function:\n\n * Estimate number of main-table tuples and pages fetched.\n *\n * If the number of tuples is much smaller than the number of pages in\n * the relation, each tuple will cost a separate nonsequential fetch.\n * If it is comparable or larger, then probably we will be able to\n * avoid some fetches. We use a growth rate of log(#tuples/#pages +\n * 1) --- probably totally bogus, but intuitively it gives the right\n * shape of curve at least.\n\n pages_fetched = ceil(baserel->pages * log(tuples_fetched /\nbaserel->pages + 1.0));\n\ni'm at a total loss to explain how this works. for all i know, it's\ncorrect and it is that costly, i don't know. it just seems to me that\nthere should be some notion of tuple size figured in to know how many\ntuples fit in a page. can somebody explain it to me?\n\nthanks,\n\njeff\n", "msg_date": "Wed, 17 May 2000 14:45:10 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": true, "msg_subject": "question about index cost estimates" }, { "msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> calculation of pages fetched from the relation seems high. i'm not sure\n> what the actual should be, but some queries have it at 2-4x the number\n> of pages in the relation, which seemed high, so i started looking at\n> that.\n\nIf the table is bigger than the disk cache, and we are hitting pages\nmore than once, then we are likely to have to fetch some pages more than\nonce. The model is assuming that the tuples we want are scattered more\nor less randomly over the whole table. If #tuples to be fetched\napproaches or exceeds #pages in table, then clearly we are going to be\ntouching some pages more than once because we want more than one tuple\nfrom them. The $64 question is did the page fall out of buffer cache\nbetween references? Worst case (if your table vastly exceeds your\navailable buffer space), practically every tuple fetch will require a\nseparate physical read, and then the number of page fetches is\nessentially the number of tuples returned --- which of course can be\na lot more than the number of pages in the table.\n\nRight now these considerations are divided between cost_index() and\ncost_nonsequential_access() in a way that might well be wrong.\nI've been intending to dig into the literature and try to find a better\ncost model but haven't gotten to it yet.\n\n> it just seems to me that there should be some notion of tuple size\n> figured in to know how many tuples fit in a page.\n\nIt seemed to me that the critical ratios are #tuples fetched vs #pages\nin table and table size vs. cache size. 
I could be wrong though...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 18:07:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question about index cost estimates " }, { "msg_contents": "Tom Lane wrote:\n \n> If the table is bigger than the disk cache, and we are hitting pages\n> more than once, then we are likely to have to fetch some pages more than\n> once. The model is assuming that the tuples we want are scattered more\n> or less randomly over the whole table. If #tuples to be fetched\n> approaches or exceeds #pages in table, then clearly we are going to be\n> touching some pages more than once because we want more than one tuple\n> from them.\n\ni understand that, but still, if you have a tuple thats, say, 144 bytes,\nyou're going to have a higher chance of revisiting the same page than if\nthe tuple is 4k. it may be something that works itself out of the\nequation somewhere.\n \n> The $64 question is did the page fall out of buffer cache\n> between references? Worst case (if your table vastly exceeds your\n> available buffer space), practically every tuple fetch will require a\n> separate physical read, and then the number of page fetches is\n> essentially the number of tuples returned --- which of course can be\n> a lot more than the number of pages in the table.\n\ni understand that, too. i understand the need for a curve like that,\nmaybe my question should be the slope of the curve. it turns out that\nplaying around with my queries, it seems like when i was selecting about\n10% of the records, the page fetch estimate is pretty accurate, although\nwhen the selectivity falls to significantly below that, the page fetch\nestimate stays quite a bit higher than actual. \n\npart of what i'm doing is looking at the executor stats for each query. \ncan you explain what the shared blocks, local blocks, and direct blocks\nmean? i'm assuming that the shared blocks refers to the reads to the\ndatabase files & indexes. what do the other mean? are the shared\nblocks shared amongst all of the backends or does each have its own\npool? i'm getting a 90%+ buffer hit rate on index scans, but i'm\nassuming that's because i'm the only one doing anything on this machine\nright now and that would go down with more processes.\n\n> \n> Right now these considerations are divided between cost_index() and\n> cost_nonsequential_access() in a way that might well be wrong.\n> I've been intending to dig into the literature and try to find a better\n> cost model but haven't gotten to it yet.\n\nwell, there have been a lot of things that i've looked at and thought\n\"that doesn't sound right\", like for example the random page access\ncost, which defaults to 4.0. i thought that was too high, but i ran\nbonnie on my hard drive and it looks like that actual value is around\n3.9. so now i'm a believer. somebody must have put some thought into\nsomewhere. i'm still taking everything with a grain of salt until i can\nexplain it, though.\n\njeff\n", "msg_date": "Wed, 17 May 2000 19:58:00 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: question about index cost estimates" }, { "msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> i understand that, but still, if you have a tuple thats, say, 144 bytes,\n> you're going to have a higher chance of revisiting the same page than if\n> the tuple is 4k.\n\nAre you? 
If you are pulling ten tuples from a thousand-page relation,\nseems to me the odds of revisiting any one page are somewhere around\n0.01 (too lazy to do the exact combinatorics right now). Doesn't really\nmatter what the average tuple size is --- or if you prefer, we already\naccounted for that because we are looking at target tuples divided by\ntotal pages rather than target tuples divided by total tuples.\n\n> maybe my question should be the slope of the curve. it turns out that\n> playing around with my queries, it seems like when i was selecting about\n> 10% of the records, the page fetch estimate is pretty accurate, although\n> when the selectivity falls to significantly below that, the page fetch\n> estimate stays quite a bit higher than actual. \n\nHmm. I would think that the behavior for very low selectivity ought to\nbe pretty close to the model: it's hard to see how it can be anything\nbut one page fetch per selected tuple, unless you are seeing strong\nclustering effects. I do *not* claim that the slope of the curve is\nnecessarily right for higher selectivities though... that needs to be\nlooked at.\n\n> part of what i'm doing is looking at the executor stats for each query. \n> can you explain what the shared blocks, local blocks, and direct blocks\n> mean?\n\nThe filesystem blocks in/out are from the kernel getrusage() call.\nOn my box, at least, these seem to mean physical I/O operations\ninitiated on behalf of the process --- AFAICT, touching a page that\nis already in kernel disk buffers does not increment the filesystem\nblocks count.\n\nThe other numbers are from PrintBufferUsage() in bufmgr.c. It looks\nlike the shared block counts are the number of block read and write\nrequests made to the kernel by bufmgr.c, and the hit rate indicates\nthe fraction of buffer fetch requests made to bufmgr.c that were\nsatisfied in Postgres' own buffer area (ie, without a kernel request).\n\nThe local block counts are the same numbers for non-shared relations,\nwhich basically means tables created in the current transaction.\nThey'll probably be zero in most cases of practical interest.\n\nThe \"direct\" counts seem to be broken at the moment --- I can't find\nany code that increments them. It looks like the intent was to count\nblock I/O operations on temporary files (sorttemp and suchlike).\nThis is not of interest for pure indexscans...\n\n> are the shared blocks shared amongst all of the backends or does each\n> have its own pool?\n\nShared among all --- that's what the shared memory block is (mostly)\nfor...\n\n> i'm getting a 90%+ buffer hit rate on index scans,\n> but i'm assuming that's because i'm the only one doing anything on\n> this machine right now and that would go down with more processes.\n\nIt also suggests that your test table isn't much bigger than the buffer\ncache ;-) --- or at least the part of it that you're touching isn't.\n\n> i'm still taking everything with a grain of salt until i can\n> explain it, though.\n\nGood man. 
I don't think anyone but me has looked at this stuff in a\nlong while, and it could use review.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 22:35:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question about index cost estimates " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Jeff Hoffmann\n>\n\n[snip] \n\n> \n> pages_fetched = ceil(baserel->pages * log(tuples_fetched /\n> baserel->pages + 1.0));\n>\n\nUnfortunately I didn't understand this well either.\n\npages_fetched seems to be able to be greater than\nbaserel->pages. But if there's sufficiently large buffer\nspace pages_fetched would be <= baserel->pages.\nAre there any assupmtions about buffer space ?\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Thu, 18 May 2000 12:48:52 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: question about index cost estimates" }, { "msg_contents": "Tom Lane wrote:\n> \n> Jeff Hoffmann <[email protected]> writes:\n> > i understand that, but still, if you have a tuple thats, say, 144 bytes,\n> > you're going to have a higher chance of revisiting the same page than if\n> > the tuple is 4k.\n> \n> Are you? If you are pulling ten tuples from a thousand-page relation,\n> seems to me the odds of revisiting any one page are somewhere around\n> 0.01 (too lazy to do the exact combinatorics right now). \n\nwell, for something like 10 tuples out of say 50,000 - 100,000 records,\nthat probably wouldn't come into play and you'd almost definitely\ntalking 1 page access per tuple. i was thinking it would come into play\nwith higher selectivity. actually i don't know if i'm thinking through\nthe probabilities and formulas require as much as just brainstorming\nwhat things _might_ be coming into play.\n\n> Doesn't really\n> matter what the average tuple size is --- or if you prefer, we already\n> accounted for that because we are looking at target tuples divided by\n> total pages rather than target tuples divided by total tuples.\n\ni thought about that after i sent my message. i find that a lot of\ntimes, i'll answer my own question roughly 15 seconds after sending the\nmessage. i'm still trying to figure out how that happens.\n\n> > i'm still taking everything with a grain of salt until i can\n> > explain it, though.\n> \n> Good man. I don't think anyone but me has looked at this stuff in a\n> long while, and it could use review.\n\ni can tell you that the more i look at things and start understanding\nthe logic, the more it seems that the numbers are fairly reasonable if\nyou don't or can't know a lot of the factors like caching or\nclustering. unfortunately those factors can make a lot of difference\nin actual results. i was looking at the cost estimate of an rtree index\nscan, but it turns out that when you add in the page accesses for the\nactual tuples, it's rare to see an index scan cost that matters to the\ntotal cost for reasonably large tables. because of that, i'm not sure\nwhat the value of fixing the rtree index scan cost estimator, other than\nmaking it semantically correct. it seems in the big picture, we're\nsubject to the whims of the disk cache more than anything.\n\ni think i'm starting to see what you said about fixing selectivity being\nmore important, although i'm still not convinced that making the\nselectivity more accurate is going to come close to solving the\nproblem. 
the selectivity problem is better defined, though, and\ntherefore easier to fix than the other problems in cost estimation. i'm\nassuming you're considering adding some sort of histogram to the stats\nthat vacuum collects. is that something that you're seriously\nconsidering or do you have another plan to introduce a useful concept of\nthe distribution of an attribute? the concept of making a histogram on\na b-tree is pretty simple, but if you're going to do it, you should\nprobably be taking into account r-trees, where you'd need a 2-d\nhistogram making the job just a bit tougher.\n\njeff\n", "msg_date": "Wed, 17 May 2000 23:32:55 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: question about index cost estimates" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]On\n> > Behalf Of Jeff Hoffmann\n> >\n> \n> [snip]\n> \n> >\n> > pages_fetched = ceil(baserel->pages * log(tuples_fetched /\n> > baserel->pages + 1.0));\n> >\n> \n> Unfortunately I didn't understand this well either.\n> \n> pages_fetched seems to be able to be greater than\n> baserel->pages. \n\nnot only does it seem that way, you can expect it to happen fairly\nfrequently, even if you're pulling only 1-2% of the records with a\nquery. if you don't believe it, check the actual performance of a few\nqueries.\n\n> But if there's sufficiently large buffer\n> space pages_fetched would be <= baserel->pages.\n> Are there any assupmtions about buffer space ?\n> \n\nthe # of pages fetched would be the same, it'd just be cheaper to pull\nthem from the buffer instead of from disk. that's what isn't being\ntaken into consideration properly in the estimate.\n\nthe real question is what assumptions can you make about buffer space? \nyou don't know how many concurrent accesses there are (all sharing\nbuffer space). i also don't think you can count on knowing the size of\nthe buffer space. therefore, the buffer space is set to some constant\nintermediate value & it is taken account of, at least in the\ncost_nonsequential_tuple. \n\nthe question is this: shouldn't you be able to make an educated guess at\nthis by dividing the total buffer space allocated by the backend by the\nnumber of postmaster processes running at the time? or don't you know\nthose things?\n\njeff\n", "msg_date": "Wed, 17 May 2000 23:49:23 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: question about index cost estimates" }, { "msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> it seems in the big picture, we're subject to the whims of the disk\n> cache more than anything.\n\nAin't that the truth. Maybe we ought to think about whether there's any\nway to find out how big the kernel's disk cache is. Even if it only\nworked on some platforms, we'd be no worse off on the other ones...\n\n> I'm assuming you're considering adding some sort of histogram to the\n> stats that vacuum collects. is that something that you're seriously\n> considering or do you have another plan to introduce a useful concept\n> of the distribution of an attribute? the concept of making a\n> histogram on a b-tree is pretty simple, but if you're going to do it,\n> you should probably be taking into account r-trees, where you'd need a\n> 2-d histogram making the job just a bit tougher.\n\nI was considering a histogram for b-trees, but I have to admit I hadn't\nthought about r-trees. 
Seems like a 2-D histogram would be too bulky\nto be feasible. Could we get any mileage out of two 1-D histograms\n(ie, examine the two coordinates independently)? Or is that too\nsimplistic to be worth bothering with?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 01:02:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question about index cost estimates " }, { "msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> the question is this: shouldn't you be able to make an educated guess at\n> this by dividing the total buffer space allocated by the backend by the\n> number of postmaster processes running at the time? or don't you know\n> those things?\n\nTwo things here:\n\nOne, we could easily find out the number of active backends, and we\ncertainly know the number of shared disk buffers. BUT: it'd be a\ndebugging nightmare if the planner's choices depended on the number\nof other backends that were running at the instant of planning. Even\nthough that'd theoretically be the right thing to do, I don't think\nwe want to go there. (If you want an argument less dependent on\nmere implementation convenience, consider that in many scenarios\nthe N backends will be accessing more or less the same set of tables.\nSo the assumption that each backend only gets the use of 1/N of the\nshared buffer space is too pessimistic anyway.)\n\nTwo, the Postgres shared buffer cache is only the first-line cache.\nWe also have the Unix kernel's buffer cache underneath us, though\nwe must share it with whatever else is going on on the machine.\nAs far as I've been able to measure there is relatively little cost\ndifference between finding a page in the Postgres cache and finding\nit in the kernel cache --- certainly a kernel call is still much\ncheaper than an actual disk access. So the most relevant number\nseems to be the fraction of the kernel's buffer cache that's\neffectively available to Postgres. Right now we have no way at\nall to measure that number, so we punt and treat it as a user-\nsettable parameter (I think I made the default setting 10Mb or so).\nIt'd be worthwhile looking into whether we can do better than\nguessing about the kernel cache size.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 01:14:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question about index cost estimates " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> pages_fetched seems to be able to be greater than\n> baserel->pages. But if there's sufficiently large buffer\n> space pages_fetched would be <= baserel->pages.\n> Are there any assupmtions about buffer space ?\n\nRight now cost_index doesn't try to account for that, because\nit doesn't have any way of knowing the relevant buffer-space\nparameter. (As I said to Jeff, we have to consider kernel\nbuffer space not just the number of Postgres shared buffers.)\n\ncost_nonsequential_access does have a dependence on (a totally\nbogus estimate of) effective cache size, but it's a considerably\nweaker dependence than you suggest above. 
If we had a reliable\nestimate of cache size I'd be inclined to restructure this code\nquite a bit...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 01:44:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question about index cost estimates " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > pages_fetched seems to be able to be greater than\n> > baserel->pages. But if there's sufficiently large buffer\n> > space pages_fetched would be <= baserel->pages.\n> > Are there any assupmtions about buffer space ?\n> \n> Right now cost_index doesn't try to account for that, because\n> it doesn't have any way of knowing the relevant buffer-space\n> parameter. (As I said to Jeff, we have to consider kernel\n> buffer space not just the number of Postgres shared buffers.)\n> \n> cost_nonsequential_access does have a dependence on (a totally\n> bogus estimate of) effective cache size, but it's a considerably\n> weaker dependence than you suggest above.\n\nThanks. I just confirmed my question because I didn't understand\nwhether effecive cache size is irrelevant to the calculation or not. \n\n> If we had a reliable\n> estimate of cache size I'd be inclined to restructure this code\n> quite a bit...\n>\n\nYes,I know that reliable estimate is very significant but I have\nno idea unfortunately.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Thu, 18 May 2000 16:49:44 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: question about index cost estimates " }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> On the postgres side, what if you actually made the cache _smaller_,\n> only caching important stuff like system tables or indexes. You yourself\n> said it doesn't matter if you get it from the cache or the kernel, so\n> why not let the kernel do it and prevent double buffering?\n\nIn fact I don't think it's productive to use an extremely large -B\nsetting. This is something that easily could be settled by experiment\n--- do people actually see any significant improvement in performance\nfrom increasing -B above, say, a few hundred?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 10:24:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question about index cost estimates " }, { "msg_contents": "Tom Lane wrote:\n\n> I was considering a histogram for b-trees, but I have to admit I hadn't\n> thought about r-trees. Seems like a 2-D histogram would be too bulky\n> to be feasible. Could we get any mileage out of two 1-D histograms\n> (ie, examine the two coordinates independently)? Or is that too\n> simplistic to be worth bothering with?\n> \n\ni don't think it would do any good to look at them separately. you\nmight as well just assume a uniform distribution if you're going to do\nthat. \n\ndoes anybody on the list know anything about fractals & wavelets? i\nknow _nothing_ about it, but i know that you can use wavelets to\ncompress photos (i.e., a 2-d data source) and my understanding is that\nyou're essentially converting the image into a mathematical function. \nit's also my understanding that wavelets work well on multiple scales,\ni.e., you can zoom in on a picture to get more detail or zoom out and\nget less detail. 
my thought is if you've got a histogram, would\nsomething like this be useful?\n\n like i said, i have no idea how it works or if it is of any interest or\nif it's even practical, just throwing something out there.\n\njeff\n", "msg_date": "Thu, 18 May 2000 10:53:26 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: question about index cost estimates" }, { "msg_contents": "Jeff Hoffmann <[email protected]> writes:\n> does anybody on the list know anything about fractals & wavelets?\n\nNow there's an interesting idea: regard the stats as a lossy compression\nof the probability density of the original dataset. Hmm ... this\ndoesn't do anything for the problem of computing the pdf cheaply to\nbegin with, but it might help with storing it compactly in pg_statistic.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 12:07:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question about index cost estimates " }, { "msg_contents": "Tom Lane wrote:\n> \n> Jeff Hoffmann <[email protected]> writes:\n> > does anybody on the list know anything about fractals & wavelets?\n> \n> Now there's an interesting idea: regard the stats as a lossy compression\n> of the probability density of the original dataset. Hmm ... this\n> doesn't do anything for the problem of computing the pdf cheaply to\n> begin with, but it might help with storing it compactly in pg_statistic.\n> \n> regards, tom lane\n\nyeah, that's exactly what i meant. you can probably tell math &\nstatistics aren't my strongest points. i'm trying to learn a little\nabout fractals because about all i knew before today is those pretty\nlittle pictures. just doing a quick search for their use with\ndatabases, though, i found at least one paper on selectivity estimates &\nfractals which i'm going to read. just for your reference, the paper is\nfrom VLDB and it called \"Estimating the Selectivity of Spatial Queries\nUsing the 'Correlation' fractal Dimension\". here's the link:\nhttp://www.vldb.org/conf/1995/P299.PDF. i've only read the abstract,\nbut it sounds pretty promising. i'm starting to wish that i knew what\nwas going on -- this is getting to be interesting to me. i've been\nusing postgresql for years, but up until the last few days, i was trying\nto treat it as a black box as much as possible -- i probably should have\ntried getting involved a lot earlier...\n\njeff\n", "msg_date": "Thu, 18 May 2000 13:56:46 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: question about index cost estimates" }, { "msg_contents": "Jeff Hoffmann writes:\n\n> just for your reference, the paper is from VLDB and it called\n> \"Estimating the Selectivity of Spatial Queries Using the 'Correlation'\n> fractal Dimension\". here's the link:\n> http://www.vldb.org/conf/1995/P299.PDF. i've only read the abstract,\n> but it sounds pretty promising.\n\nThat paper was very interesting. I think it might in fact contain the\nanswer for writing smart selectivity estimators for geometrical data, now\nsomeone only needs to be brave enough to try to code it. Btw., it also\ncontains a number interesting references regarding R-trees and selectivity\nestimation in general.\n\nWhodda thunk there's genuine information to be found on the Web ... 
:)\n\n\n-- \nPeter Eisentraut                  Sernanders väg 10:115\[email protected]                   75262 Uppsala\nhttp://yi.org/peter-e/            Sweden\n\n", "msg_date": "Fri, 19 May 2000 19:44:42 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: question about index cost estimates" } ]
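To make the estimator's shape concrete, here is a small standalone program built around the pages_fetched formula Jeff quoted from the cost code at the start of this thread. Only that one line is from the source; the 1000-page table and the loop around it are illustration.

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double  pages = 1000.0;         /* pages in the relation */
        double  tuples_fetched;

        for (tuples_fetched = 10.0; tuples_fetched <= 100000.0; tuples_fetched *= 10.0)
        {
            /* the estimator quoted from the thread */
            double  pages_fetched =
                ceil(pages * log(tuples_fetched / pages + 1.0));

            printf("tuples_fetched = %6.0f -> pages_fetched = %5.0f\n",
                   tuples_fetched, pages_fetched);
        }
        return 0;
    }

For 10 tuples it predicts 10 page fetches (one nonsequential fetch per tuple, matching Tom's low-selectivity expectation); for 100000 tuples it predicts about 4616 fetches from the 1000-page table, the same several-times-the-table-size behavior Jeff measured.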
[ { "msg_contents": "\ni used to have a database on freebsd using 6.5.3.\n\ni had several concurrent processes which would do inserts via COPY and\nqueries.\n\non that system, i don't recall the COPY processes as being blocked by the \nquery processes.\n\nnow i'm running that app on solaris 7 with pgsql 7.0.\n\ni'm finding that a big long select is blocking other processes which\nare doing COPY's.\n\ni'm also finding that other queries are blocking.\n\nthe only real difference between what was running before and what is\nrunning now is the use of an ORDER BY clause in the big long select, since 7.0\nseems to need this to return records in the same order as 6.5.3 (i know, i shouldhave\nbeen using the ORDER BY in 6.5.3, but, such is the way it is).\n\n\nis the blocking i'm seeing supposed to be happening?\n\nor did i miss some flag or something when installing on solaris?\n\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]\n", "msg_date": "Wed, 17 May 2000 19:45:26 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": true, "msg_subject": "table level locking different in 7.0?" }, { "msg_contents": "Jim Mercer <[email protected]> writes:\n> i had several concurrent processes which would do inserts via COPY and\n> queries.\n> on that system, i don't recall the COPY processes as being blocked by the \n> query processes.\n> now i'm running that app on solaris 7 with pgsql 7.0.\n> i'm finding that a big long select is blocking other processes which\n> are doing COPY's.\n\nHmm. In 7.0, COPY IN acquires an exclusive lock on the target table,\nwhich is something I put in in a fit of paranoia. It may not really\nbe necessary --- probably a regular write lock would be good enough.\n(6.5's COPY code neglected to acquire any lock at all, which is surely\n*not* good enough, but maybe I overreacted.)\n\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 20:19:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table level locking different in 7.0? " }, { "msg_contents": "> Jim Mercer <[email protected]> writes:\n>> i had several concurrent processes which would do inserts via COPY and\n>> queries.\n>> on that system, i don't recall the COPY processes as being blocked by the \n>> query processes.\n>> now i'm running that app on solaris 7 with pgsql 7.0.\n>> i'm finding that a big long select is blocking other processes which\n>> are doing COPY's.\n\n> Hmm. In 7.0, COPY IN acquires an exclusive lock on the target table,\n> which is something I put in in a fit of paranoia. It may not really\n> be necessary --- probably a regular write lock would be good enough.\n\nOK, fix committed. Jim, if you're in a hurry for this fix, just change\nAccessExclusiveLock to RowExclusiveLock at line 289 of\nbackend/commands/copy.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 21:57:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table level locking different in 7.0? 
" }, { "msg_contents": "> Jim Mercer <[email protected]> writes:\n> > i had several concurrent processes which would do inserts via COPY and\n> > queries.\n> > on that system, i don't recall the COPY processes as being blocked by the \n> > query processes.\n> > now i'm running that app on solaris 7 with pgsql 7.0.\n> > i'm finding that a big long select is blocking other processes which\n> > are doing COPY's.\n> \n> Hmm. In 7.0, COPY IN acquires an exclusive lock on the target table,\n> which is something I put in in a fit of paranoia. It may not really\n> be necessary --- probably a regular write lock would be good enough.\n> (6.5's COPY code neglected to acquire any lock at all, which is surely\n> *not* good enough, but maybe I overreacted.)\n\nI see no reason a write lock would not be good enough, unless we do some\nspecial stuff in copy which I have forgotten.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 May 2000 22:29:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table level locking different in 7.0?" }, { "msg_contents": "> > Jim Mercer <[email protected]> writes:\n> >> i had several concurrent processes which would do inserts via COPY and\n> >> queries.\n> >> on that system, i don't recall the COPY processes as being blocked by the \n> >> query processes.\n> >> now i'm running that app on solaris 7 with pgsql 7.0.\n> >> i'm finding that a big long select is blocking other processes which\n> >> are doing COPY's.\n> \n> > Hmm. In 7.0, COPY IN acquires an exclusive lock on the target table,\n> > which is something I put in in a fit of paranoia. It may not really\n> > be necessary --- probably a regular write lock would be good enough.\n> \n> OK, fix committed. Jim, if you're in a hurry for this fix, just change\n> AccessExclusiveLock to RowExclusiveLock at line 289 of\n> backend/commands/copy.c.\n\nFYI, I have been telling people to grab tomorrow's snapshot from\nftp:/pub/dev if they need changes that have been applied. At this\npoint, we don't have any funny stuff in the cvs tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 17 May 2000 22:31:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table level locking different in 7.0?" } ]
[ { "msg_contents": "> Hmm. In 7.0, COPY IN acquires an exclusive lock on the target table,\n> which is something I put in in a fit of paranoia. It may not really\n> be necessary --- probably a regular write lock would be good enough.\n> (6.5's COPY code neglected to acquire any lock at all, which is surely\n> *not* good enough, but maybe I overreacted.)\n\nOh, seems I forgot about COPY in 6.5... -:(\nROW EXCLUSIVE lock is required (just like for INSERT, DELETE, UPDATE)...\n\nVadim\n", "msg_date": "Wed, 17 May 2000 17:34:25 -0700", "msg_from": "\"Mikheev, Vadim\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: table level locking different in 7.0? " }, { "msg_contents": "\"Mikheev, Vadim\" <[email protected]> writes:\n> ROW EXCLUSIVE lock is required (just like for INSERT, DELETE, UPDATE)...\n\nOK, will fix (I have another little fix to make in copy.c anyway)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 May 2000 20:41:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table level locking different in 7.0? " }, { "msg_contents": "On Wed, May 17, 2000 at 08:41:25PM -0400, Tom Lane wrote:\n> \"Mikheev, Vadim\" <[email protected]> writes:\n> > ROW EXCLUSIVE lock is required (just like for INSERT, DELETE, UPDATE)...\n> \n> OK, will fix (I have another little fix to make in copy.c anyway)\n\ncan i get a patch relative to 7.0-release?\n\nthis is effecting a production database.\n\n-- \n[ Jim Mercer [email protected] +1 416 410-5633 ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]\n", "msg_date": "Wed, 17 May 2000 20:56:05 -0400", "msg_from": "Jim Mercer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table level locking different in 7.0?" } ]
[ { "msg_contents": ">>>>> \"B\" == Bruce Momjian <[email protected]> writes:\n\nB> What function languages do triggers support? C, plpgsql, pltcl, SQL?\n\n You can add plruby.\n\n Ruby is the interpreted scripting language for quick and easy\nobject-oriented programming. It has many features to process text files and\nto do system management tasks (as in Perl). It is simple, straight-forward,\nextensible, and portable.\n\n http://www.ruby-lang.org/en/\n\n plruby can be found at \n\n http://www.ruby-lang.org/en/raa-list.rhtml?name=PL%2FRuby\n\n\nGuy Decoux\n", "msg_date": "Thu, 18 May 2000 06:25:08 +0200 (MET DST)", "msg_from": "ts <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trigger function languages" }, { "msg_contents": "ts wrote:\n> \n> >>>>> \"B\" == Bruce Momjian <[email protected]> writes:\n> \n> B> What function languages do triggers support? C, plpgsql, pltcl, SQL?\n> \n> You can add plruby.\n\nHow safe is plruby ?\n\nDoes it allow access to file system and other possibly unsafe resources ?\n\n------------\nHannu\n", "msg_date": "Thu, 18 May 2000 09:58:12 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger function languages" }, { "msg_contents": ">>>>> \"H\" == Hannu Krosing <[email protected]> writes:\n\nH> How safe is plruby ?\n\n I'll say that ruby is more safe than perl.\n\n Ruby define 4 security levels :\n\n/* safe-level:\n 0 - strings from streams/environment/ARGV are tainted (default)\n 1 - no dangerous operation by tainted string\n 2 - process/file operations prohibited\n 3 - all genetated strings are tainted\n 4 - no global (non-tainted) variable modification/no direct output\n*/\n\n You have a more complete description at :\n\n http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/2689\n\n\n plruby, by default, is compiled with $SAFE >= 4\n\nGuy Decoux\n\n", "msg_date": "Thu, 18 May 2000 09:44:11 +0200 (MET DST)", "msg_from": "ts <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trigger function languages" } ]
[ { "msg_contents": "\n> It seemed to me that the critical ratios are #tuples fetched vs #pages\n> in table and table size vs. cache size. I could be wrong though...\n\nAnother metric that would be interesing is a number that represents \nthe divergence of the heap order from the index sort order.\n\nIf heap data is more or less in the same order as the index \n(e.g. cluster index) the table size is irrelevant.\nSo the measure would be something like \nclustered*tablesize vs cache size instead of only tablesize.\n\nAndreas\n", "msg_date": "Thu, 18 May 2000 09:29:31 +0200", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: question about index cost estimates " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> Another metric that would be interesing is a number that represents \n> the divergence of the heap order from the index sort order.\n\nOh, absolutely! Got any ideas about how to get that number in a\nreasonable amount of time (not too much more than VACUUM takes now)?\nI've been drawing a blank :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 03:40:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: question about index cost estimates " }, { "msg_contents": "Tom Lane writes:\n\n> > Another metric that would be interesing is a number that represents \n> > the divergence of the heap order from the index sort order.\n\n> Got any ideas about how to get that number in a reasonable amount of\n> time (not too much more than VACUUM takes now)?\n\nThere are a few fairly simple-minded methods to determine the sortedness\nof a list. Any more sophisticated methods are probably not much better in\npractice and take too much to calculate.\n\n1. The number of items out of place\n\n2. The number of pairs out of order\n\n3. The number of adjacent pairs out of order.\n\n4. Length of list minus length of longest increasing (not necessarily\nadjacent) subsequence\n\nFor our application I would suggest 3. because it can be calculated in\nlinear time and it only gives points to situations that you really want,\nnamely sequential disk access. You could take score/(length-1) to get 1\nfor a really unsorted list and 0 for a completely sorted one.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 19 May 2000 19:38:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: question about index cost estimates " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Peter Eisentraut [mailto:[email protected]]\n> Sent: 17 May 2000 17:18\n> To: Dave Page\n> Cc: '[email protected]'\n> Subject: Re: [HACKERS] ODBC & v7.0(Rel) Errors with Users and \n> Databases\n> \n> \n> Dave Page writes:\n> \n> > ERROR: DROP DATABASE: May not be called in a transaction block\n> \n> This command can't be rolled back so you aren't allowed to \n> try. This was\n> thought as an improvement. In general, a database isn't a \n> database object\n> so one shouldn't be transacting around with them. (Same goes \n> for users.)\n\nThis makes perfect sense of course.\n\n> > The ODBC log (and knowledge that it isn't pgAdmin or M$ \n> ADO) shows that the\n> > ODBC driver is automatically wrapping the query in a transaction. \n> \n> I don't know anything about ODBC but it certainly should \n> provide a means\n> to execute a command without that wrapping block. Is this a special\n> function or do you just execute some exec(\"DROP DATABASE\") style call?\n\nYes, I just issue the DROP DATABASE sql exactly as I would issue and INSERT,\nDELETE or UPDATE query. From my fumblings around in the source for the ODBC\ndriver I have found what I believe to be the offending code in statement.c\nat line 748 in the version shipped with 7.0, however I know nothing about\nhow the driver works and my C is far from good so (having very little spare\ntime also) I'm reluctant to try to fix it myself:\n\n\t/*\tBegin a transaction if one is not already in progress */\n\t/*\tThe reason is because we can't use declare/fetch cursors\nwithout\n\t\tstarting a transaction first.\n\t*/\n\tif ( ! self->internal && ! CC_is_in_trans(conn) &&\n(globals.use_declarefetch || STMT_UPDATE(self))) {\n\n\t\tmylog(\" about to begin a transaction on statement = %u\\n\",\nself);\n\t\tres = CC_send_query(conn, \"BEGIN\", NULL);\n\nAgain, any assistance would be greatfully received!\n\nRegards, \n \nDave. \n \n-- \n\"If you stand still, sooner or later something will eat you.\"\n - James Burke\nhttp://www.vale-housing.co.uk/ (Work)\nhttp://www.pgadmin.freeserve.co.uk/ (Home of pgAdmin) \n", "msg_date": "Thu, 18 May 2000 08:27:51 -0000", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "RE: ODBC & v7.0(Rel) Errors with Users and Databases" }, { "msg_contents": "> Again, any assistance would be greatfully received!\n\nI'm extremely busy this week, but if the problem still exists next\nweek I'll plan on taking a look at it. Sorry that the time just isn't\nthere at the moment :(\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 18 May 2000 15:34:30 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ODBC & v7.0(Rel) Errors with Users and Databases" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: 17 May 2000 16:45\n> To: Peter Mount\n> Cc: PostgreSQL Developers List (E-mail)\n> Subject: Re: [HACKERS] MSSQL7 & PostgreSQL 7.0\n> \n> \n> Yes, major issue.\n> \n> > Just been playing with SQL Server 7's DTS (Data Transform \n> Service) and\n> > thought, \"Could SQL load data into Postgres?\".\n> > \n> > The main idea (ok excuse as I didn't fancy doing much work this\n> > afternoon), was publishing data on our Intranet (which uses \n> Postgres).\n> > \n> > Anyhow, loaded the ODBC driver onto the SQL server, played \n> and it only\n> > works. I'll write up how to do it tonight (as there are a \n> few gotcha's),\n> > but it's one for the docs under \"How do I migrate data from SQL7 to\n> > PostgreSQL?\".\n> > \n\nFeel free to tell me where to go if you think this is a shameless plug(!)\nbut pgAdmin also includes tools for migrating data from any ODBC datasource\n(as well as ascii text files) to PostgreSQL. I'm open to suggestions if\nanyone feels there are improvements worth making....\n\nRegards, \n \nDave. \n \n-- \n\"If you stand still, sooner or later something will eat you.\"\n - James Burke\nhttp://www.vale-housing.co.uk/ (Work)\nhttp://www.pgadmin.freeserve.co.uk/ (Home of pgAdmin) \n", "msg_date": "Thu, 18 May 2000 08:34:20 -0000", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "RE: MSSQL7 & PostgreSQL 7.0" } ]
[ { "msg_contents": "I don't think it's a shameless plug at all.\n\nHowever, I think it's sometimes useful to know alternative methods of\ndoing things, especially when you have to work for people who think\nMicrosoft are the best thing since sliced bread ;-)\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Dave Page [mailto:[email protected]]\nSent: Thursday, May 18, 2000 9:34 AM\nTo: '[email protected]'\nSubject: RE: [HACKERS] MSSQL7 & PostgreSQL 7.0\n\n\nFeel free to tell me where to go if you think this is a shameless\nplug(!)\nbut pgAdmin also includes tools for migrating data from any ODBC\ndatasource\n(as well as ascii text files) to PostgreSQL. I'm open to suggestions if\nanyone feels there are improvements worth making....\n\nRegards, \n \nDave. \n \n-- \n\"If you stand still, sooner or later something will eat you.\"\n - James Burke\nhttp://www.vale-housing.co.uk/ (Work)\nhttp://www.pgadmin.freeserve.co.uk/ (Home of pgAdmin) \n", "msg_date": "Thu, 18 May 2000 09:47:06 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: MSSQL7 & PostgreSQL 7.0" }, { "msg_contents": "Peter Mount wrote:\n> \n> I don't think it's a shameless plug at all.\n> \n> However, I think it's sometimes useful to know alternative methods of\n> doing things, especially when you have to work for people who think\n> Microsoft are the best thing since sliced bread ;-)\n> \n> Peter\n> \n> --\n> Peter Mount\n> Enterprise Support\n> Maidstone Borough Council\n> Any views stated are my own, and not those of Maidstone Borough Council.\n> \n> -----Original Message-----\n> From: Dave Page [mailto:[email protected]]\n> Sent: Thursday, May 18, 2000 9:34 AM\n> To: '[email protected]'\n> Subject: RE: [HACKERS] MSSQL7 & PostgreSQL 7.0\n> \n> Feel free to tell me where to go if you think this is a shameless\n> plug(!)\n> but pgAdmin also includes tools for migrating data from any ODBC\n> datasource\n> (as well as ascii text files) to PostgreSQL. I'm open to suggestions if\n> anyone feels there are improvements worth making....\n\nFirstly I have to say its a great bit of work (pgAdmin), however it'd be\nnice to see an open source tool mirroring the functionality of the DTS. I\nespecially liked the way you could script the mappings between fields, as\nwell as the GUI builder for complex transforms (in fact I absolutely loved\nthat (when it worked :), I had too long on the beta). As well as being able\nto use pre- and post- conditional actions upon success/failure of certain\nparts, like emailing, calling other programs etc.\n\nRegards,\nJoe\n\n> Regards,\n> \n> Dave.\n> \n> --\n> \"If you stand still, sooner or later something will eat you.\"\n> - James Burke\n> http://www.vale-housing.co.uk/ (Work)\n> http://www.pgadmin.freeserve.co.uk/ (Home of pgAdmin)\n\n-- \nJoe Shevland\nPrincipal Consultant\nKPI Logistics Pty Ltd\nhttp://www.kpi.com.au\nmailto:[email protected]\n", "msg_date": "Thu, 18 May 2000 19:50:30 +1000", "msg_from": "Joe Shevland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MSSQL7 & PostgreSQL 7.0" } ]
[ { "msg_contents": "This is how to get MS-SQL7 to copy data (either whole tables, or from\nqueries) into PostgreSQL. There are other ways of doing this (like\npgaccess), but this describes how to get the SQL server itself to do the\njob.\n\nThe key to this is Microsoft's DTS (Data Transformation Services). This\nis (for once) a useful utility that allows you to transfer & transform\ndata from one source to another. The beauty of it, is that the data can\ncome from any source, to any destination, which is what we will use\nhere.\n\nPrerequisites:\n\n\tMS-SQL7.0 running under NT4 Server, SP6A\n\tPostgreSQL ODBC driver 6.40.00.07 installed on the SQL server.\n\tPostgreSQL 7.0 Final running under Linux, SuSE 6.3\nStep 1:\n\nIf it doesn't exist, create the database on the PostgreSQL server.\n\nThen on the NT box, create an ODBC Data Source. Set the Database,\nServer, User Name & Password fields as normal.\nUnder Driver, uncheck ReadOnly, and check Parse Statements and Unknowns\nas LongVarChar.\nUnder Advanced, clear ReadOnly, but under OID Options, check Show Column\nand Fake Index.\n\nNB: You may not need to do all of the above (other than ReadOnly :) )\nbut they are the settings that worked for me.\n\nStep 2:\n\nOpen SQL Server Enterprise Manager, and expand the source SQL server.\nRight click Data Transformation Services, and select New Package. You\nshould be presented with the DTS Package editor.\n\nNow, under the Data menu, select Microsoft OLE DB Provider for SQL\nServer. Select the authentication scheme and select the source database.\n\nBack under the Data menu, select Other Connection. Select the Data\nSource created in Step 1.\n\nStep 3:\n\nClick once the SQL server icon, then while holding down the Control key,\nclick the PostgreSQL icon. Then under Workflow, select Add Transform. An\narrow should appear showing the direction the data will flow.\n\nDouble click the arrow to show the Transformation properties.\n\nUnder the Source tab, you have two choices. Either select a table name,\nin that case you are copying an entire table, or select SQL Query, and\nenter a select statement to limit or customise the data.\n\nUnder the Destination tab, select the table you want to copy into.\n\nNow this is where we have a problem. If the destination table doesnt\nexist, DTS has a Create Table button. However, I couldn't get it to\ncreate the table correctly. This was because it wraps the table name and\nthe field names within double quotes. Postgresql then creates the table,\nbut it interprets the quotes to mean \"don't convert the names to\nlowercase\". Later, when DTS does the copy, it doesn't include the\nquotes, so postgres duely lowercases the names, and basically it fails.\nSo, when using the Create Table facility, manually lowercase the table\nand column names before clicking OK to create the table. Also, make sure\nthe column types match. It seems that DTS loves the first column to be\nan int4, even if the source isn't.\n\nNow, under the transformation tab you should see each column from the\nsource (on the left) connected by an arrow to a destination column (on\nthe right). Now, here you can change which one you want it to copy to,\nor even write a short script to convert, merge a column, etc. 
Normally\nyou can leave it as is, but the script capability is really useful.\n\nOnce you are ready, save the package, then click go.\n\nYou should then see a nice little animation of some cogs mangling your\ndata, and if all's well you should get notification that it's completed.\n\nErrors:\n\nWhen errors occur, double click the entry that failed, and it displays\nthe error message. The most common ones are where a transformation\nfailed. Usually this is caused by a wrong data type on the destination\ntable, or if using a script on a column, it's not converting properly.\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n", "msg_date": "Thu, 18 May 2000 09:52:06 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "LONG: How to migrate data from MS-SQL7 to PostgreSQL 7.0" }, { "msg_contents": "> This is how to get MS-SQL7 to copy data (either whole tables, or from\n> queries) into PostgreSQL...\n\nNice writeup. Can I fold it into our docs chapter on populating\ndatabases (or some other appropriate place)?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 18 May 2000 15:20:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LONG: How to migrate data from MS-SQL7 to PostgreSQL 7.0" }, { "msg_contents": "> > This is how to get MS-SQL7 to copy data (either whole tables, or from\n> > queries) into PostgreSQL...\n> \n> Nice writeup. Can I fold it into our docs chapter on populating\n> databases (or some other appropriate place)?\n> \n> - Thomas\n\nI was hoping you would grab it. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 May 2000 11:39:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LONG: How to migrate data from MS-SQL7 to PostgreSQL 7.0" }, { "msg_contents": "> > This is how to get MS-SQL7 to copy data (either whole tables, or from\n> > queries) into PostgreSQL...\n> \n> Nice writeup. Can I fold it into our docs chapter on populating\n> databases (or some other appropriate place)?\n> \n\nI like how the Zope site has \"howtos\" and \"tips\"[1]. I think this might\nbe better because of the dynamic nature of this information. I'd be glad\nto contribute what I have about getting Applixware to talk to PostgreSQL[2],\nbut I need to check things out with Applixware 5.0 and PostgreSQL 7.0.\n\nYou could even use Zope itself to do this stuff.\n\n-- cary\n\n[1] http://www.zope.org/Documentation\n[2] http://www.radix.net/~cobrien/applix/applix.txt\n", "msg_date": "Thu, 18 May 2000 19:18:43 -0400 (EDT)", "msg_from": "\"Cary O'Brien\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LONG: How to migrate data from MS-SQL7 to PostgreSQL 7.0" } ]
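The case-folding gotcha in the write-up comes from PostgreSQL folding unquoted identifiers to lower case while a double-quoted identifier keeps its case exactly, so DDL that quotes mixed-case names will not match later unquoted references. A DDL generator can sidestep the problem by folding the names itself first; the helper below is a hypothetical sketch of that workaround, not part of DTS or the ODBC driver.

#include <ctype.h>

/*
 * Fold an identifier to lower case, mimicking what PostgreSQL does to
 * unquoted names, so that generated DDL and later unquoted references
 * agree even if the DDL wraps the name in double quotes.
 */
void
fold_identifier(char *ident)
{
    for (; *ident != '\0'; ident++)
        *ident = (char) tolower((unsigned char) *ident);
}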
[ { "msg_contents": "As usual when replying from here, replies prefixed with PM:\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n> > (as well as ascii text files) to PostgreSQL. I'm open to \n> suggestions if\n> > anyone feels there are improvements worth making....\n> \n> Firstly I have to say its a great bit of work (pgAdmin), \n\nThanks... \n\n> however it'd be\n> nice to see an open source tool mirroring the functionality \n> of the DTS. I\n> especially liked the way you could script the mappings \n> between fields, as\n> well as the GUI builder for complex transforms (in fact I \n> absolutely loved\n> that (when it worked :), I had too long on the beta). As well \n> as being able\n> to use pre- and post- conditional actions upon \n> success/failure of certain\n> parts, like emailing, calling other programs etc.\n\nThats an interesting idea. What exactly do you script though? Things\nlike\n(pseudo-code of course):\n\nIf SOURCETYPE = \"AutoNumber\" Then\n DESTTYPE = \"int4\"\n DESTDEFAULT = \"nextval('record_id')\" \n EXECSQL \"CREATE SEQUENCE record_id\"\nEnd If\n\nor have I got the wrong end of the stick completely?\n\nPM: -------------\nThe scripting is used to convert one or more columns in the source into\na single column in the destination. You have one for each column.\n\nie:\n\nSource\t\tDestination\n\nid\t---copy--->\tid\ndate\t-\\________\\\tdatetime\ntime\t-/\nname\t---copy--->\tname\n\nThe copy bits aren't scripted but the middle one is.\n\nHere, there's a script handling the conversion of date & time into a\nsingle value.\n\nI have a case of this here, where I use DTS to run dumpacl on our domain\ncontrollers to return the security logs, and pipes the output through a\ntransform into a table. NT Events have two fields for the date, but I\nneed one, so the script for the above case is:\n\nFunction Main()\n\tDTSDestination(\"datetime\") = DTSSource(\"date\")+\"\n\"+DTSSource(\"time\")\n\tMain = DTSTransformStat_OK\nEnd Function\n\nNow that was a little bit of VB Script, but it could easily be Perl,\nJavascript etc.\n", "msg_date": "Thu, 18 May 2000 12:06:40 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: MSSQL7 & PostgreSQL 7.0" } ]
[ { "msg_contents": "\n> -----Original Message-----\n> From: Joe Shevland [mailto:[email protected]]\n> Sent: 18 May 2000 09:51\n> To: Peter Mount\n> Cc: 'Dave Page'; '[email protected]'\n> Subject: Re: [HACKERS] MSSQL7 & PostgreSQL 7.0\n>\n> > (as well as ascii text files) to PostgreSQL. I'm open to \n> suggestions if\n> > anyone feels there are improvements worth making....\n> \n> Firstly I have to say its a great bit of work (pgAdmin), \n\nThanks... \n\n> however it'd be\n> nice to see an open source tool mirroring the functionality \n> of the DTS. I\n> especially liked the way you could script the mappings \n> between fields, as\n> well as the GUI builder for complex transforms (in fact I \n> absolutely loved\n> that (when it worked :), I had too long on the beta). As well \n> as being able\n> to use pre- and post- conditional actions upon \n> success/failure of certain\n> parts, like emailing, calling other programs etc.\n\nThats an interesting idea. What exactly do you script though? Things like\n(pseudo-code of course):\n\nIf SOURCETYPE = \"AutoNumber\" Then\n DESTTYPE = \"int4\"\n DESTDEFAULT = \"nextval('record_id')\" \n EXECSQL \"CREATE SEQUENCE record_id\"\nEnd If\n\nor have I got the wrong end of the stick completely?\n\nRegards, \n \nDave. \n \n-- \n\"If you stand still, sooner or later something will eat you.\"\n - James Burke\nhttp://www.vale-housing.co.uk/ (Work)\nhttp://www.pgadmin.freeserve.co.uk/ (Home of pgAdmin) \n", "msg_date": "Thu, 18 May 2000 11:28:52 -0000", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "RE: MSSQL7 & PostgreSQL 7.0" } ]
[ { "msg_contents": "Yes, as that's why I posted it ;-)\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Thursday, May 18, 2000 4:21 PM\nTo: Peter Mount\nCc: PostgreSQL Developers List (E-mail); PostgreSQL Interfaces (E-mail)\nSubject: Re: [HACKERS] LONG: How to migrate data from MS-SQL7 to\nPostgreSQL 7.0\n\n\n> This is how to get MS-SQL7 to copy data (either whole tables, or from\n> queries) into PostgreSQL...\n\nNice writeup. Can I fold it into our docs chapter on populating\ndatabases (or some other appropriate place)?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 18 May 2000 16:36:56 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: LONG: How to migrate data from MS-SQL7 to PostgreSQ\n\tL 7.0" } ]
[ { "msg_contents": "\nis anyone working on or have working a fail-over implentation for the\npostgresql stuff. i'd be interested in seeing if and how any might be\ndealing with just general issues as well as the database syncing issues.\n\nwe are looking to do this with heartbeat and lvs in mind. also if anyone\nis load ballancing their databases that would be cool to talk about to.\n\n---\nTodd M. Shrider\t\t\tVA Linux Systems\t\nSystems Engineer\[email protected]\t\twww.valinux.com\n\n", "msg_date": "Thu, 18 May 2000 08:48:48 -0700 (PDT)", "msg_from": "\"Todd M. Shrider\" <[email protected]>", "msg_from_op": true, "msg_subject": "failing over with postgresql" }, { "msg_contents": "For a SCADA system (Supervisory Control and Data Akquisition) which consists\nof one master and one hot-standby server I have implemented such a\nsolution. To these UNIX servers client workstations are connected (NT and/or\nUNIX). The database client programms run on client and server side.\n\nWhen developing this approach I had to goals in mind:\n1) Not to get dependend on the PostgreSQL sources since they change very\ndynamically.\n2) Not to get dependend on the fe/be protocol since there are discussions\naround to change it.\n\nSo the approach is quite simple: Forward all database requests to the\nstandby server on TCP/IP level.\n\nOn both servers the postmaster listens on port 5433 and not on 5432. On\nstandard port 5432 my program listens instead. This program forks twice for\nevery incomming connection. The first instance forwards all packets from the\nfrontend to both backends. The second instance receives the packets from all\nbackends and forwards the packets from the master backend to the frontend.\nSo a frontend running on a server machine connects to port 5432 of\nlocalhost.\n\nOn the client machine runs another program (on NT as a service). This\nprogram forks for every incomming connections twice. The first instance\nforwards all packets to port 5432 of the current master server and the\nsecond instance forwards the packets from the master server to the frontend.\n\nDuring standby computer startup the database of the master computer is\ndumped, zipped, copied to the standby computer, unzipped and loaded into\nthat database.\nIf a standby startup took place, all client connections are aborted to allow\na login into the standby database. The frontends need to reconnect in this\ncase. So the database of the standby computer is always in sync.\n\nThe disadvantage of this method is that a query cannot be canceled in the\nstandby server since the request key of this connections gets lost. But we\ncan live with that.\n\nBoth programms are able to run on Unix and on (native!) NT. On NT threads\nare created instead of forked processes.\n\nThis approach is simple, but it is effective and it works.\n\nWe hope to survive this way until real replication will be implemented in\nPostgreSQL.\n\nAndreas Kardos\n\n-----Urspr�ngliche Nachricht-----\nVon: Todd M. Shrider <[email protected]>\nAn: <[email protected]>\nGesendet: Donnerstag, 18. Mai 2000 17:48\nBetreff: [HACKERS] failing over with postgresql\n\n\n>\n> is anyone working on or have working a fail-over implentation for the\n> postgresql stuff. i'd be interested in seeing if and how any might be\n> dealing with just general issues as well as the database syncing issues.\n>\n> we are looking to do this with heartbeat and lvs in mind. also if anyone\n> is load ballancing their databases that would be cool to talk about to.\n>\n> ---\n> Todd M. 
Shrider VA Linux Systems\n> Systems Engineer\n> [email protected] www.valinux.com\n>\n\n", "msg_date": "Tue, 23 May 2000 17:56:20 +0200", "msg_from": "\"Kardos, Dr. Andreas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failing over with postgresql" } ]
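The packet-duplication core of the scheme Dr. Kardos describes can be sketched in a few lines of C: everything read from the frontend is written to the master backend, which must receive every byte, and forwarded best-effort to the standby. Connection setup, the backend-to-frontend direction, and the NT thread variant are omitted; this is a sketch of the idea under those assumptions, not his actual program.

#include <unistd.h>

/* Forward every byte from the frontend to both backends. */
int
relay_to_both(int frontend_fd, int master_fd, int standby_fd)
{
    char    buf[8192];
    ssize_t n;

    while ((n = read(frontend_fd, buf, sizeof(buf))) > 0)
    {
        ssize_t off = 0;

        while (off < n)                 /* master must get everything */
        {
            ssize_t w = write(master_fd, buf + off, (size_t) (n - off));

            if (w <= 0)
                return -1;              /* master connection failed */
            off += w;
        }
        (void) write(standby_fd, buf, (size_t) n);  /* best effort */
    }
    return (n == 0) ? 0 : -1;           /* 0 on clean EOF from frontend */
}

Forwarding at this level is what keeps the scheme independent of both the PostgreSQL sources and the fe/be protocol, since the relay never has to parse a single packet.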
[ { "msg_contents": "Tom Lane wrote:\n>\n> Zeugswetter Andreas SB <[email protected]> writes:\n> > Another metric that would be interesing is a number that represents\n> > the divergence of the heap order from the index sort order.\n>\n> Oh, absolutely! Got any ideas about how to get that number in a\n> reasonable amount of time (not too much more than VACUUM takes now)?\n> I've been drawing a blank :-(\n>\n\nIt seems hard to find the practical theory.\nSo how about a kind of sample scanning in a separate ANALYZE\ncommand ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 19 May 2000 01:15:28 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": true, "msg_subject": "RE: AW: question about index cost estimates" }, { "msg_contents": "Hiroshi Inoue <[email protected]> writes:\n> So how about a kind of sample scanning in a separate ANALYZE\n> command ?\n\nI do think it'd be a good idea to separate out the stats-gathering\ninto an ANALYZE command that can be invoked separately from VACUUM.\nFor one thing, there's no good reason to hold down an exclusive\nlock on the target table while we are gathering stats.\n\nBut that doesn't answer the question: how can we measure the extent\nto which a table is in the same order as an index? And even that\nis too one-dimensional a concept to apply to r-tree indexes, for\nexample. What do we do for r-trees?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 18 May 2000 12:29:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: question about index cost estimates " } ]
[ { "msg_contents": "Info on the new slashdot.org setup\n\n<http://slashdot.org/article.pl?sid=00/05/18/1427203&mode=nocomment>\n\ninteresting because of the plans (meaning $$$) they have to improve\nMySql, and because they are the flagship MySql site/application. \n\nIn the comment page, replying to the usual \"Why not PostgreSql?\" thread\nsomeone pointed out an extract from the MySql docs that seems to me\nblatantly false\n(http://slashdot.org/comments.pl?sid=00/05/18/1427203&cid=131).\n\n-- \nAlessio F. Bragadini [email protected]\nAPL Financial Services http://www.sevenseas.org/~alessio\nNicosia, Cyprus phone: +357-2-750652\n\n\"It is more complicated than you think\"\n -- The Eighth Networking Truth from RFC 1925\n", "msg_date": "Thu, 18 May 2000 19:33:43 +0300", "msg_from": "Alessio Bragadini <[email protected]>", "msg_from_op": true, "msg_subject": "The New Slashdot Setup (includes MySql server)" }, { "msg_contents": "> Info on the new slashdot.org setup\n> \n> <http://slashdot.org/article.pl?sid=00/05/18/1427203&mode=nocomment>\n> \n> interesting because of the plans (meaning $$$) they have to improve\n> MySql, and because they are the flagship MySql site/application. \n> \n> In the comment page, replying to the usual \"Why not PostgreSql?\" thread\n> someone pointed out an extract from the MySql docs that seems to me\n> blatantly false\n> (http://slashdot.org/comments.pl?sid=00/05/18/1427203&cid=131).\n\nJust finished reading the thread. I am surprised how many people\nslammed them on their MySQL over PostgreSQL decision. People are\nslamming MySQL all over the place. :-)\n\nSeems like inertia was the reason to stay with MySQL. What that means\nto me is that for their application space, PostgreSQL already has\nsuperior technology, and people realize it. This means we are on our\nway up, and MySQL is, well, ....\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 18 May 2000 13:12:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The New Slashdot Setup (includes MySql server)" }, { "msg_contents": "\nthanks for the pointer ... I just posted my response ... specifically\npointing out how \"accurate\" the MySQL docs tend to be *rofl*\n\n\nOn Thu, 18 May 2000, Alessio Bragadini wrote:\n\n> Info on the new slashdot.org setup\n> \n> <http://slashdot.org/article.pl?sid=00/05/18/1427203&mode=nocomment>\n> \n> interesting because of the plans (meaning $$$) they have to improve\n> MySql, and because they are the flagship MySql site/application. \n> \n> In the comment page, replying to the usual \"Why not PostgreSql?\" thread\n> someone pointed out an extract from the MySql docs that seems to me\n> blatantly false\n> (http://slashdot.org/comments.pl?sid=00/05/18/1427203&cid=131).\n> \n> -- \n> Alessio F. Bragadini [email protected]\n> APL Financial Services http://www.sevenseas.org/~alessio\n> Nicosia, Cyprus phone: +357-2-750652\n> \n> \"It is more complicated than you think\"\n> -- The Eighth Networking Truth from RFC 1925\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 18 May 2000 14:18:12 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The New Slashdot Setup (includes MySql server)" }, { "msg_contents": "on 5/18/00 1:12 PM, Bruce Momjian at [email protected] wrote:\n\n> Seems like inertia was the reason to stay with MySQL. What that means\n> to me is that for their application space, PostgreSQL already has\n> superior technology, and people realize it. This means we are on our\n> way up, and MySQL is, well, ....\n\nThere is this growing desire among some OpenACS people to replicate the\nSlashdot functionality in an OpenACS module (probably a weekend's worth of\nwork). I wish I had a bit more free time to do it. It's time to show what\ncan be done with a real RDBMS (and a real web application environment, but\nthat's a different story).\n\n-Ben\n\n", "msg_date": "Thu, 18 May 2000 13:42:13 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The New Slashdot Setup (includes MySql server)" }, { "msg_contents": "On Thu, 18 May 2000, Bruce Momjian wrote:\n\n> > Info on the new slashdot.org setup\n> > \n> > <http://slashdot.org/article.pl?sid=00/05/18/1427203&mode=nocomment>\n> > \n> > interesting because of the plans (meaning $$$) they have to improve\n> > MySql, and because they are the flagship MySql site/application. \n> > \n> > In the comment page, replying to the usual \"Why not PostgreSql?\" thread\n> > someone pointed out an extract from the MySql docs that seems to me\n> > blatantly false\n> > (http://slashdot.org/comments.pl?sid=00/05/18/1427203&cid=131).\n> \n> Just finished reading the thread. I am surprised how many people\n> slammed them on their MySQL over PostgreSQL decision. People are\n> slamming MySQL all over the place. :-)\n> \n> Seems like inertia was the reason to stay with MySQL. What that means\n> to me is that for their application space, PostgreSQL already has\n> superior technology, and people realize it. This means we are on our\n> way up, and MySQL is, well, ....\n\nIn SlashDot's defence here ... I dooubt there is much they do that would\nrequire half of what we offer ... it *very* little INSERT/UPDATE/DELETE\nand *alot* of SELECT ...\n\n\n", "msg_date": "Thu, 18 May 2000 15:08:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The New Slashdot Setup (includes MySql server)" }, { "msg_contents": "\nokay, that is a good point ... I know what the difference in performance\nthe two vs one select issue can produce ...\n\n\nOn Thu, 18 May 2000, Alfred Perlstein wrote:\n\n> * The Hermit Hacker <[email protected]> [000518 11:51] wrote:\n> > On Thu, 18 May 2000, Bruce Momjian wrote:\n> > \n> > > > Info on the new slashdot.org setup\n> > > > \n> > > > <http://slashdot.org/article.pl?sid=00/05/18/1427203&mode=nocomment>\n> > > > \n> > > > interesting because of the plans (meaning $$$) they have to improve\n> > > > MySql, and because they are the flagship MySql site/application. \n> > > > \n> > > > In the comment page, replying to the usual \"Why not PostgreSql?\" thread\n> > > > someone pointed out an extract from the MySql docs that seems to me\n> > > > blatantly false\n> > > > (http://slashdot.org/comments.pl?sid=00/05/18/1427203&cid=131).\n> > > \n> > > Just finished reading the thread. 
I am surprised how many people\n> > > slammed them on their MySQL over PostgreSQL decision. People are\n> > > slamming MySQL all over the place. :-)\n> > > \n> > > Seems like inertia was the reason to stay with MySQL. What that means\n> > > to me is that for their application space, PostgreSQL already has\n> > > superior technology, and people realize it. This means we are on our\n> > > way up, and MySQL is, well, ....\n> > \n> > In SlashDot's defence here ... I dooubt there is much they do that would\n> > require half of what we offer ... it *very* little INSERT/UPDATE/DELETE\n> > and *alot* of SELECT ...\n> \n> If those guys still are doing multiple selects for each page view after\n> at least 2 years of being around and choking on the load, they seriously\n> need to get a clue. mod_perl... belch!\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 18 May 2000 15:54:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The New Slashdot Setup (includes MySql server)" }, { "msg_contents": "* The Hermit Hacker <[email protected]> [000518 11:51] wrote:\n> On Thu, 18 May 2000, Bruce Momjian wrote:\n> \n> > > Info on the new slashdot.org setup\n> > > \n> > > <http://slashdot.org/article.pl?sid=00/05/18/1427203&mode=nocomment>\n> > > \n> > > interesting because of the plans (meaning $$$) they have to improve\n> > > MySql, and because they are the flagship MySql site/application. \n> > > \n> > > In the comment page, replying to the usual \"Why not PostgreSql?\" thread\n> > > someone pointed out an extract from the MySql docs that seems to me\n> > > blatantly false\n> > > (http://slashdot.org/comments.pl?sid=00/05/18/1427203&cid=131).\n> > \n> > Just finished reading the thread. I am surprised how many people\n> > slammed them on their MySQL over PostgreSQL decision. People are\n> > slamming MySQL all over the place. :-)\n> > \n> > Seems like inertia was the reason to stay with MySQL. What that means\n> > to me is that for their application space, PostgreSQL already has\n> > superior technology, and people realize it. This means we are on our\n> > way up, and MySQL is, well, ....\n> \n> In SlashDot's defence here ... I dooubt there is much they do that would\n> require half of what we offer ... it *very* little INSERT/UPDATE/DELETE\n> and *alot* of SELECT ...\n\nIf those guys still are doing multiple selects for each page view after\nat least 2 years of being around and choking on the load, they seriously\nneed to get a clue. mod_perl... belch!\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n", "msg_date": "Thu, 18 May 2000 12:16:19 -0700", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The New Slashdot Setup (includes MySql server)" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> thanks for the pointer ... I just posted my response ... specifically\n> pointing out how \"accurate\" the MySQL docs tend to be *rofl*\n\nAnd now there is a response to your response stating the following\n\n> \n> The MySQL people have said exactly the same sort of things about\n> the PostgreSQL people. 
So please stop the name-calling and\n> the quotes around \"test\", it's not going to get you anywhere. \n> \n> That being said, the standard MySQL benchmark _still_ is 30 times\n> faster for MySQL 3.23 than on PostgreSQL 7.0 (with fsync turned off,\n> _and_ nonstandard speed-up PostgreSQL features like VACUUM enabled,\n\nbtw, how does one \"enable\" vacuum ?\n\n> I might add). The main reason seems to be some sort of failure to \n> use the index in the SELECT and UPDATE test loops on the part of \n> PostgreSQL. \n> \n> The benchmark, for the curious, works like this: \n> \n> First it creates a table with an index: \n> \n> create table bench1 (id int NOT NULL,id2 int NOT NULL,id3 int NOT NULL,dummy1 char(30)); create unique\n> index bench1_index_ on bench1 using btree (id,id2); create index bench1_index_1 on bench1 using btree (id3); \n> \n> Then it fills the table with 300.000 entries with unique id values. \n> \n> Then, it issues a query like this: \n> \n> update bench1 set dummy1='updated' where id=1747 \n> \n> which causes the backend to do one thousand read() calls. For each query. \n\ncould it be that just updating 1 unique index causes 1k read()'s ?\n\n> No wonder it's slow. An EXPLAIN query states that it's using the \n> index, though. I have no clue what happens here. I've sent this\n> to the pgsql-general mailing list and have just reposted it to -hackers. \n\nI somehow missed it (on -hackers at least) so I repost it here\n\n> Oh yes, the benchmark also revealed that CREATE TABLE in PostgreSQL 7.0 \n> leaks about 2k of memory.\n\n-------------------\nHannu\n", "msg_date": "Fri, 19 May 2000 11:00:29 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nHannu Krosing:\n> And now there is a response to your response stating the following\n> \nThat response is from me, actually. (I subscribed to -hackers two hours\nago, so I'm sorry if I missed anything.)\n\n> > That being said, the standard MySQL benchmark _still_ is 30 times\n> > faster for MySQL 3.23 than on PostgreSQL 7.0 (with fsync turned off,\n> > _and_ nonstandard speed-up PostgreSQL features like VACUUM enabled,\n> \n> btw, how does one \"enable\" vacuum ?\n\nrun-all-tests ... --fast.\n\nThe code has stuff like\n\n\t $server->vacuum(1,\\$dbh) if $opt_fast and defined $server->{vacuum};\n\nsprinkled at strategic places.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nSilence is the element in which great things fashion themselves.\n --Thomas Carlyle\n", "msg_date": "Fri, 19 May 2000 11:14:24 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nChris:\n> VACUUM is not a speed-up feature, it's a slow-down feature. It reclaims\n> space and that takes time. It does update system statistics which can\n> help performance if done after a data load or perhaps once a day.\n> \nOK, thanks for the clarification.\n\n> But \"sprinkling the code\" with vacuum sounds like a big performance\n> killer. Hope you are not counting vacuum as part of your 1000 read()\n> calls.\n> \nNonono, the 1000 read() calls are triggered by a simple INSERT or UPDATE\ncall. They actually scan the pg_index table of the benchmark database. 
\n\nWhy they do that is another question entirely. (a) these tables should\nhave indices, and (b) whatever postgres wants to know should have been\ncached someplace. Oh yes, (c) what's in pg_index that needs to be 4\nMBytes big?\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nMan is the only animal that laughs and weeps; for he is\nthe only animal that is struck with the difference between\nwhat things are and what they ought to be.\n -- William Hazlitt (1778-1830)\n", "msg_date": "Fri, 19 May 2000 12:18:03 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nChris:\n> Matthias Urlichs wrote:\n> \n> > Nonono, the 1000 read() calls are triggered by a simple INSERT or UPDATE\n> > call. They actually scan the pg_index table of the benchmark database.\n> \n> Does this only happen on the first call to INSERT/UPDATE after\n> connecting to the database, or does it happen with all subsequent calls\n> too?\n> \nAll of them. Whatever the server is looking up here, it's _not_ cached.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nGRITCH 1. n. A complaint (often caused by a GLITCH (q.v.)). 2. v. To\n complain. Often verb-doubled: \"Gritch gritch\". 3. Glitch.\n -- From the AI Hackers' Dictionary\n", "msg_date": "Fri, 19 May 2000 12:39:40 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Matthias Urlichs\n> \n> Why they do that is another question entirely. (a) these tables should\n> have indices, and (b) whatever postgres wants to know should have been\n> cached someplace. Oh yes, (c) what's in pg_index that needs to be 4\n> MBytes big?\n>\n\nWhat does 'vacuum pg_index' show ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 19 May 2000 19:51:38 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Matthias Urlichs wrote:\n\n> Hi,\n>\n> Chris:\n> > Matthias Urlichs wrote:\n> >\n> > > Nonono, the 1000 read() calls are triggered by a simple INSERT or UPDATE\n> > > call. They actually scan the pg_index table of the benchmark database.\n> >\n> > Does this only happen on the first call to INSERT/UPDATE after\n> > connecting to the database, or does it happen with all subsequent calls\n> > too?\n> >\n> All of them. 
Whatever the server is looking up here, it's _not_ cached.\n>\n\nMaybe shared buffer isn't so large as to keep all the (4.1M) pg_index pages.\nSo it would read pages from disk every time.\nUnfortunately pg_index has no index to scan the index entries of a relation now.\n\nHowever why is pg_index so large ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n", "msg_date": "Fri, 19 May 2000 20:12:00 +0900", "msg_from": "Hiroshi Inoue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nHiroshi Inoue:\n> What does 'vacuum pg_index' show ?\n> \ntest=> vacuum pg_index;\nNOTICE: Skipping \"pg_index\" --- only table owner can VACUUM it\nVACUUM\n\nOK, so I suppose I should do it as the postgres user...\ntest=> vacuum pg_index;\nVACUUM\n\nThe debug output says:\nDEBUG: --Relation pg_index--\nDEBUG: Pages 448: Changed 0, reaped 448, Empty 0, New 0; Tup 34: Vac 21443, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 164, MaxLen 164;\nRe-using: Free/Avail. Space 3574948/3567176; EndEmpty/Avail. Pages 0/447. CPU 0.46s/0.00u sec.\nDEBUG: Index pg_index_indexrelid_index: Pages 86; Tuples 34: Deleted 21443. CPU 0.05s/0.36u sec.\nDEBUG: Rel pg_index: Pages: 448 --> 1; Tuple(s) moved: 2. CPU 0.03s/0.03u sec.\nDEBUG: Index pg_index_indexrelid_index: Pages 86; Tuples 34: Deleted 2. CPU 0.01s/0.00u sec.\n\n... which helped. A lot.\n\nThanks, everybody. The first quick benchmark run I did afterwards states\nthat PostgreSQL is now only half as fast as MySQL, instead of the factor\nof 30 seen previously, on the MySQL benchmark test. ;-)\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nDorian Graying:\n\tThe unwillingness to gracefully allow one's body to show signs\nof aging.\n\t\t\t-Douglas Coupland, Generation X\n", "msg_date": "Fri, 19 May 2000 13:40:08 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nHiroshi Inoue:\n> \n> Maybe shared buffer isn't so large as to keep all the (4.1M) pg_index pages.\n\nThat seems to be the case.\n\n> So it would read pages from disk every time. Unfortunately pg_index\n> has no index to scan the index entries of a relation now.\n> \nWell, it's reasonable that you can't keep an index on the table which\nstates what the indices are. ;-)\n\n... on the other hand, Apple's HFS file system stores all the information\nabout the on-disk locations of their files in, you\nguessed it, a B-Tree which is saved on disk as an (invisible) file.\nThus, the thing stores the information on where its sectors are located,\ninside itself.\nTo escape this catch-22 situation, the location of the first three\nextents (which is usually all it takes anyway) is stored elsewhere.\n\nPossibly, something like this would work with postgres too.\n\n> However why is pg_index so large ?\n> \nCreating ten thousand tables will do that to you.\n\nIs there an option I can set to increase the appropriate cache, so that\nthe backend can keep the data in memory?\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. 
| http://smurf.noris.de/\n-- \nFamous last words:\n They'd never (be stupid enough to) make him a manager.\n", "msg_date": "Fri, 19 May 2000 14:04:24 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "On Fri, 19 May 2000, Matthias Urlichs wrote:\n\n> Hi,\n> \n> Hiroshi Inoue:\n> > What does 'vacuum pg_index' show ?\n> > \n> test=> vacuum pg_index;\n> NOTICE: Skipping \"pg_index\" --- only table owner can VACUUM it\n> VACUUM\n> \n> OK, so I suppose I should do it as the postgres user...\n> test=> vacuum pg_index;\n> VACUUM\n> \n> The debug output says:\n> DEBUG: --Relation pg_index--\n> DEBUG: Pages 448: Changed 0, reaped 448, Empty 0, New 0; Tup 34: Vac 21443, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 164, MaxLen 164;\n> Re-using: Free/Avail. Space 3574948/3567176; EndEmpty/Avail. Pages 0/447. CPU 0.46s/0.00u sec.\n> DEBUG: Index pg_index_indexrelid_index: Pages 86; Tuples 34: Deleted 21443. CPU 0.05s/0.36u sec.\n> DEBUG: Rel pg_index: Pages: 448 --> 1; Tuple(s) moved: 2. CPU 0.03s/0.03u sec.\n> DEBUG: Index pg_index_indexrelid_index: Pages 86; Tuples 34: Deleted 2. CPU 0.01s/0.00u sec.\n> \n> ... which helped. A lot.\n> \n> Thanks, everybody. The first quick benchmark run I did afterwards states\n> that PostgreSQL is now only half as fast as MySQL, instead of the factor\n> of 30 seen previously, on the MySQL benchmark test. ;-)\n\nWow, shock of shocks ... MySQL has more inaccuracies in their docs? *grin*\n\n\n", "msg_date": "Fri, 19 May 2000 09:42:30 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> ... which helped. A lot.\n>\n> Thanks, everybody. The first quick benchmark run I did afterwards states\n> that PostgreSQL is now only half as fast as MySQL, instead of the factor\n> of 30 seen previously, on the MySQL benchmark test. ;-)\n\nwhile (horse == DEAD) {\n\nbeat();\n\n}\n\n... Anyway.. I can see this being true (the MySQL being twice as fast as\nPostgreSQL) however I don't think that MySQL being faster than PostgreSQL\nwas ever up for debate. When you take an RDBMS and strip out a huge amount of\nfeatures, of course you're going to get a faster end product. It's just not\nnearly as safe, feature rich or easy to work with (from a programmer's\nstandpoint).\n\nI looked at MySQL to use for my applications, for all of ten seconds.... To\ncode in and around, MySQL just isn't a useable RDBMS for me and I can hardly\nsee how it's useful for anyone doing the kind of programming I do..\n\nWhat it is very good for is something like RADIUS/POP3 authentication, I\nuse it at my ISP to keep all my user authentication in one place... However\nthe only thing I cared about was speed there, and there are all of two\nthings I ever do to that database. I SELECT (once every auth request) and\noccasionally I INSERT and possibly UPDATE, that coupled with the fact that\nthere are only two to three things in the database per user (username,\npassword and domain for POP3 auth) -- it's just not a very complicated thing\nto do... I use a SQL backend because it's very easy to maintain and I can\neasily write software to manipulate the data held in the tables -- that's\nall.\n\nWith the other applications I and my company write, it's a totally different\nstory. 
I just don't see how a person can write any kind of a larger\napplication and not need all the features MySQL lacks...\n\nI like MySQL for certain things -- however I've never considered \"MySQL vs\nPostgreSQL\" -- they're just two totally different databases for totally\ndifferent uses IMHO.\n\n-Mitch\n\n\n\n\n", "msg_date": "Fri, 19 May 2000 09:47:39 -0400", "msg_from": "\"Mitch Vincent\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nThe Hermit Hacker:\n> > Thanks, everybody. The first quick benchmark run I did afterwards states\n> > that PostgreSQL is now only half as fast as MySQL, instead of the factor\n> > of 30 seen previously, on the MySQL benchmark test. ;-)\n> \n> Wow, shock of shocks ... MySQL has more inaccuracies in their docs? *grin*\n\nNo, that factor of 30 was my result after running the benchmark for the\nfirst time. Presumably, unless I skip the large_number_of_tables test,\nit'll be just as slow the second time around.\n\nThe MySQL people probably didn't dig deeper into PostgreSQL's innards.\nThey don't seem to think it's their job to find out exactly why their\nbenchmark runs so slow on some other databases, and I don't particularly\nfault them for that attitude.\n\n\nThe PostgreSQL community has an attitude too, after all.\n\nOne of these might be to answer \"you must have had fsync turned on\"\nwhenever somebody reports a way-too-slow benchmark. In this case,\nthat's definitely not true.\n\n\nAnother attitude of the PostgreSQL developers might be to answer \"run\nVACUUM\" whenever somebody reports performance problems. That answer is\nnot helpful at all WRT this benchmark, because the user who caused the\nproblem (\"test\", in my case) isn't permitted to run VACUUM on the\npg_index table.\n\nThe alternate solution would be for the backend to notice \"Gee, I just\nscanned a whole heap of what turned out to be empty space in this here\npg_index file, maybe it would be a good idea to call vacuum() on it.\"\n\nOr, if that doesn't work, increase the buffer for holding its content.\n\n\nAnyway, I fully expect to have a more reasonable benchmark result by\ntomorrow, and the MySQL guys will get a documentation update. Which they\n_will_ put in the next update's documentation file. Trust me. ;-)\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \n\"The so-called Christian world is contracepting itself out of existence.\"\n\t-- Fr. L. Kieffer, HLI Reports, August 1989, as quoted in \"The Far\n Right, Speaking For Themselves,\" a Planned Parenthood pamphlet\n", "msg_date": "Fri, 19 May 2000 16:00:14 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "At 02:04 PM 5/19/00 +0200, you wrote:\n\n> Well, it's reasonable that you can't keep an index on the table which\n> states what the indices are. ;-)\n> \n> ... 
on the other hand, Apple's HFS file system stores all the information\n> about the on-disk locations of their files in, you\n> guessed it, a B-Tree which is saved on disk as an (invisible) file.\n> Thus, the thing stores the information on where its sectors are located,\n> inside itself.\n> To escape this catch-22 situation, the location of the first three\n> extents (which is usually all it takes anyway) is stored elsewhere.\n> \n> Possibly, something like this would work with postgres too.\n\nThis is one of several things we did at Illustra to make the backend\nrun faster. I did the design and implementation, but it was a few\nyears ago, so the details are hazy. Here's what I remember.\n\nWe had to solve three problems:\n\nFirst, you had to be able to run initdb and bootstrap the system\nwithout the index on pg_index in place. As I recall, we had to\ncarefully order the creation of the first several tables to make\nthat work, but it wasn't rocket science.\n\nSecond, when the index on pg_index gets created, you need to update\nit with tuples that describe it. This is really just the same as\nhard-coding the pg_attribute attribute entries into pg_attribute --\nugly, but not that bad.\n\nThird, we had to abstract a lot of the hard-coded table scans in\nthe bowels of the system to call a routine that checked for the\nexistence of an index on the system table, and used it. In order\nfor the index on pg_index to get used, its reldesc had to be nailed\nin the cache. Getting it there at startup was more hard-coded\nugliness, but you only had to do it one time.\n\nThe advantage is that you can then index a bunch more of the system\ncatalog tables, and on a bunch more attributes. That produced some\nsurprising speedups.\n\nThis was simple enough that I'm certain the same technique would\nwork in the current engine.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Fri, 19 May 2000 07:18:31 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "\"Matthias Urlichs\" <[email protected]> writes:\n>>>> Nonono, the 1000 read() calls are triggered by a simple INSERT or UPDATE\n>>>> call. They actually scan the pg_index table of the benchmark database.\n\nOhh ... pg_index is the culprit! OK, I know exactly where that's coming\nfrom: the planner is looking around to see what indexes might be\ninteresting for planning the query. Several comments here:\n\n1. Probably we ought to try to bypass most of the planning process for\na simple INSERT ... VALUES. (I thought I had fixed that, but apparently\nit's not getting short-circuited soon enough, if the search for indexes\nis still happening.)\n\n2. The search is not using either an index or a cache IIRC. Needs to\nbe fixed but there may be no suitable index present in 7.0.\n\n3. I have been toying with the notion of having relcache entries store\ninformation about the indexes associated with the table, so that the\nplanner wouldn't have to search through pg_index at all. 
The trouble\nwith doing that is getting the info updated when an index is added or\ndropped; haven't quite figured out how to do that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 10:24:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "I would like to see if VACUUM ANALYZE helps.\n\n> Hi,\n> \n> Hiroshi Inoue:\n> > What does 'vacuum pg_index' show ?\n> > \n> test=> vacuum pg_index;\n> NOTICE: Skipping \"pg_index\" --- only table owner can VACUUM it\n> VACUUM\n> \n> OK, so I suppose I should do it as the postgres user...\n> test=> vacuum pg_index;\n> VACUUM\n> \n> The debug output says:\n> DEBUG: --Relation pg_index--\n> DEBUG: Pages 448: Changed 0, reaped 448, Empty 0, New 0; Tup 34: Vac 21443, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 164, MaxLen 164;\n> Re-using: Free/Avail. Space 3574948/3567176; EndEmpty/Avail. Pages 0/447. CPU 0.46s/0.00u sec.\n> DEBUG: Index pg_index_indexrelid_index: Pages 86; Tuples 34: Deleted 21443. CPU 0.05s/0.36u sec.\n> DEBUG: Rel pg_index: Pages: 448 --> 1; Tuple(s) moved: 2. CPU 0.03s/0.03u sec.\n> DEBUG: Index pg_index_indexrelid_index: Pages 86; Tuples 34: Deleted 2. CPU 0.01s/0.00u sec.\n> \n> ... which helped. A lot.\n> \n> Thanks, everybody. The first quick benchmark run I did afterwards states\n> that PostgreSQL is now only half as fast as MySQL, instead of the factor\n> of 30 seen previously, on the MySQL benchmark test. ;-)\n> \n> -- \n> Matthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\n> The quote was selected randomly. Really. | http://smurf.noris.de/\n> -- \n> Dorian Graying:\n> \tThe unwillingness to gracefully allow one's body to show signs\n> of aging.\n> \t\t\t-Douglas Coupland, Generation X\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 11:06:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> The advantage is that you can then index a bunch more of the system\n> catalog tables, and on a bunch more attributes. That produced some\n> surprising speedups.\n\nWe have indexes on all system tables that need it. The pg_index index\nwas done quite easily and is new for 7.0. A check for recursion and\nfallback to sequential scan for pg_index table rows in the pg_index\ntable allows it to happen.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 11:25:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql\n server))" }, { "msg_contents": "Chris <[email protected]> writes:\n>> not helpful at all WRT this benchmark, because the user who caused the\n>> problem (\"test\", in my case) isn't permitted to run VACUUM on the\n>> pg_index table.\n\n> Speaking of which, why can't any user who can change meta-data, also\n> Vacuum meta-data ? 
It's not a threat to security, is it?\n\nNo, but it is a potential route to a denial-of-service attack, because\nVACUUM has to acquire an exclusive lock on the target table. An\nunprivileged user can't vacuum pg_index for the same reason he can't\nlock it: he could effectively shut down all other users of that\ndatabase, at least for a while (and VACUUMs issued in a tight loop\nmight manage to make things pretty unusable).\n\nThe design assumption here is that VACUUMs will be run periodically by a\ncron job executing as user postgres; typically once a day at a low-load\ntime of day is a good plan.\n\nThere has been some talk of switching away from the no-overwrite storage\nmanager to a more conventional overwriting manager. That'd reduce or\neliminate the need for periodic VACUUMs. But currently, you can't\nreally run a Postgres installation without 'em.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 12:19:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> The advantage is that you can then index a bunch more of the system\n>> catalog tables, and on a bunch more attributes. That produced some\n>> surprising speedups.\n\n> We have indexes on all system tables that need it.\n\nThere isn't any fundamental reason why the planner can't be using an\nindex to scan pg_index; we just need to code it that way. Right now\nit's coded as a sequential scan.\n\nUnfortunately there is no index on pg_index's indrelid column in 7.0,\nso this is not fixable without an initdb. TODO item for 7.1, I guess.\n\nMore generally, someone should examine the other places where\nheap_getnext() loops occur, and see if any of them look like performance\nbottlenecks...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 12:39:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> The advantage is that you can then index a bunch more of the system\n> >> catalog tables, and on a bunch more attributes. That produced some\n> >> surprising speedups.\n> \n> > We have indexes on all system tables that need it.\n> \n> There isn't any fundamental reason why the planner can't be using an\n> index to scan pg_index; we just need to code it that way. Right now\n> it's coded as a sequential scan.\n> \n> Unfortunately there is no index on pg_index's indrelid column in 7.0,\n> so this is not fixable without an initdb. TODO item for 7.1, I guess.\n\nThe reason there is no index is because it is not a unique column, and\nat the time I was adding system indexes for 7.0, I was looking for\nindexes that could be used for system cache lookups. The index you are\ndescribing returns multiple tuples, so it would be an actual index call\nin the code. I will add this to the TODO.\n\n\n> \n> More generally, someone should examine the other places where\n> heap_getnext() loops occur, and see if any of them look like performance\n> bottlenecks...\n\nGood idea. Added to TODO.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 12:54:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "At 12:39 PM 5/19/00 -0400, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> \n> > We have indexes on all system tables that need it.\n> \n> There isn't any fundamental reason why the planner can't be using an\n> index to scan pg_index; we just need to code it that way. Right now\n> it's coded as a sequential scan.\n\nEliminating the hard-coded seqscans of catalogs in the bowels of the\nsystem was the hardest part of the project. As I said, it was good\nto do. It made parsing and planning queries much, much faster.\n\n\t\t\t\t\tmike\n\n", "msg_date": "Fri, 19 May 2000 09:54:11 -0700", "msg_from": "\"Michael A. Olson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server)) " }, { "msg_contents": "Hmmm.\n\n> can be done with a real RDBMS (and a real web application environment, but\n> that's a different story).\n\nDo you happen to know one?\n\n", "msg_date": "Fri, 19 May 2000 18:55:54 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The New Slashdot Setup (includes MySql server)" }, { "msg_contents": "> At 12:39 PM 5/19/00 -0400, Tom Lane wrote:\n> \n> > Bruce Momjian <[email protected]> writes:\n> > \n> > > We have indexes on all system tables that need it.\n> > \n> > There isn't any fundamental reason why the planner can't be using an\n> > index to scan pg_index; we just need to code it that way. Right now\n> > it's coded as a sequential scan.\n> \n> Eliminating the hard-coded seqscans of catalogs in the bowels of the\n> system was the hardest part of the project. As I said, it was good\n> to do. It made parsing and planning queries much, much faster.\n\nAll the sequential catalog scans that return one row are gone. What has\nnot been done is adding indexes for scans returning more than one row.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 13:14:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql\n server))" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> All the sequential catalog scans that return one row are gone. What has\n> not been done is adding indexes for scans returning more than one row.\n\nI've occasionally wondered whether we can't find a way to use the\ncatcaches for searches that can return multiple rows. It'd be easy\nenough to add an API for catcache that could return multiple rows given\na nonunique search key. The problem is how to keep the catcache up to\ndate with underlying reality for this kind of query. Deletions of rows\nwill be handled by the existing catcache invalidation mechanism, but\nhow can we know when some other backend has added a row that will match\na search condition? 
Haven't seen an answer short of scanning the table\nevery time, which makes the catcache no win at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 13:36:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > All the sequential catalog scans that return one row are gone. What has\n> > not been done is adding indexes for scans returning more than one row.\n> \n> I've occasionally wondered whether we can't find a way to use the\n> catcaches for searches that can return multiple rows. It'd be easy\n> enough to add an API for catcache that could return multiple rows given\n> a nonunique search key. The problem is how to keep the catcache up to\n> date with underlying reality for this kind of query. Deletions of rows\n> will be handled by the existing catcache invalidation mechanism, but\n> how can we know when some other backend has added a row that will match\n> a search condition? Haven't seen an answer short of scanning the table\n> every time, which makes the catcache no win at all.\n\nGood point. You can invalidate stuff, but how to find new stuff that\ndoesn't have a specific key?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 13:56:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "\"Michael A. Olson\" <[email protected]> writes:\n> Third, we had to abstract a lot of the hard-coded table scans in\n> the bowels of the system to call a routine that checked for the\n> existence of an index on the system table, and used it.\n\nThe way that we've been approaching this is by switching from hard-coded\nsequential scans (heap_getnext() calls) to hard-coded indexscans\n(index_getnext() calls) at places where performance dictates it.\n\nAn advantage of doing it that way is that you don't have the\nbootstrapping/circularity problems that Mike describes; the code doesn't\nneed to consult pg_index to know whether there is an index to use, it\njust has the necessary info hard-coded in. For the same reason it's\nvery quick.\n\nNonetheless it's also a pretty ugly answer. I'd rather the code wasn't\nso tightly tied to a particular set of indexes for system tables.\n\nI was thinking about doing something like what Mike describes: replace\nuses of heap_beginscan() with calls to a routine that would examine the\npassed ScanKey(s) to see if there is a relevant index, and then start\neither a heap or index scan as appropriate. The circularity issue could\nbe resolved by having that routine have hard-coded knowledge of some of\nthe system-table indexes (or even all of them, which is still better\nthan having that knowledge scattered throughout the code). But the\nperformance cost of identifying the right index based on ScanKeys gives\nme pause. 
It's hard to justify that per-search overhead when the\nhard-coded approach works well enough.\n\nThoughts anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 15:35:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> \"Michael A. Olson\" <[email protected]> writes:\n> > Third, we had to abstract a lot of the hard-coded table scans in\n> > the bowels of the system to call a routine that checked for the\n> > existence of an index on the system table, and used it.\n> \n> The way that we've been approaching this is by switching from hard-coded\n> sequential scans (heap_getnext() calls) to hard-coded indexscans\n> (index_getnext() calls) at places where performance dictates it.\n> \n> An advantage of doing it that way is that you don't have the\n> bootstrapping/circularity problems that Mike describes; the code doesn't\n> need to consult pg_index to know whether there is an index to use, it\n> just has the necessary info hard-coded in. For the same reason it's\n> very quick.\n\nI like hard-coded. There aren't many of them, last time I looked. \nMaybe 5-10 that need index scan. The rest are already done using the\ncatalog cache.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 15:52:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "\n> > > That being said, the standard MySQL benchmark _still_ is 30 times\n> > > faster for MySQL 3.23 than on PostgreSQL 7.0 (with fsync turned off,\n> > > _and_ nonstandard speed-up PostgreSQL features like VACUUM enabled,\n\nVACUUM is not a speed-up feature, it's a slow-down feature. It reclaims\nspace and that takes time. It does update system statistics which can\nhelp performance if done after a data load or perhaps once a day.\n\nBut \"sprinkling the code\" with vacuum sounds like a big performance\nkiller. Hope you are not counting vacuum as part of your 1000 read()\ncalls.\n", "msg_date": "Sat, 20 May 2000 05:54:20 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Matthias Urlichs wrote:\n\n> Nonono, the 1000 read() calls are triggered by a simple INSERT or UPDATE\n> call. They actually scan the pg_index table of the benchmark database.\n\nDoes this only happen on the first call to INSERT/UPDATE after\nconnecting to the database, or does it happen with all subsequent calls\ntoo?\n", "msg_date": "Sat, 20 May 2000 06:34:06 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Tom Lane\n> \n> Bruce Momjian <[email protected]> writes:\n> >> The advantage is that you can then index a bunch more of the system\n> >> catalog tables, and on a bunch more attributes. 
That produced some\n> >> surprising speedups.\n> \n> > We have indexes on all system tables that need it.\n> \n> There isn't any fundamental reason why the planner can't be using an\n> index to scan pg_index; we just need to code it that way. Right now\n> it's coded as a sequential scan.\n> \n> Unfortunately there is no index on pg_index's indrelid column in 7.0,\n> so this is not fixable without an initdb. TODO item for 7.1, I guess.\n>\n\nI've noticed the fact since before but haven't complained.\nAs far as I see, pg_index won't be so big. In fact Matthias's case has\nonly 1 page after running vacuum for pg_index. In such cases\nsequential scan is faster than index scan as you know.\nI don't agree with adding system indexes lightly.\nThough I implemented the REINDEX command to recover system\nindexes, that doesn't mean index corruption is welcome.\n\nI know another case. pg_attrdef has no index on (adrelid,attnum)\nthough it has an index on (adrelid).\n\n> More generally, someone should examine the other places where\n> heap_getnext() loops occur, and see if any of them look like performance\n> bottlenecks...\n\nPlease don't lose the sequential scan stuff even when a change to\nindex scan is needed, because the -P option of standalone postgres\nneeds sequential scan for system tables.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n", "msg_date": "Sat, 20 May 2000 07:01:15 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> More generally, someone should examine the other places where\n>> heap_getnext() loops occur, and see if any of them look like performance\n>> bottlenecks...\n\n> Please don't lose the sequential scan stuff even when a change to\n> index scan is needed, because the -P option of standalone postgres\n> needs sequential scan for system tables.\n\nGood point. I'd still like not to clutter the code with deciding\nwhich kind of scan to invoke, though. Maybe we could put the\nbegin_xxx routine in charge of ignoring a request for an indexscan\nwhen -P is used. (AFAIR there's no real difference for the calling\ncode, it sets up scankeys and so forth just the same either way, no?\nWe should just need a switching layer in front of heap_beginscan/\nindex_beginscan and heap_getnext/index_getnext...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 18:23:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>> Unfortunately there is no index on pg_index's indrelid column in 7.0,\n>> so this is not fixable without an initdb. TODO item for 7.1, I guess.\n\n> I've noticed the fact since before but haven't complained.\n> As far as I see, pg_index won't be so big. In fact Matthias's case has\n> only 1 page after running vacuum for pg_index. In such cases\n> sequential scan is faster than index scan as you know.\n\nTrue, but the differential isn't very big either when dealing with\na small table. I think I'd rather use an index and be assured that\nperformance doesn't degrade drastically when the database contains\nmany indexes.\n\nI've also been thinking about ways to implement the relcache-based\ncaching of index information that I mentioned before. 
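(Concretely, the information to be cached per relation amounts to the result of this lookup; SQL for illustration only, since the backend actually does it with a heap_getnext() loop over pg_index:\n\n\tselect i.* from pg_index i, pg_class c\n\twhere i.indrelid = c.oid and c.relname = 'crash_me';\n\nand with no index on indrelid, every such lookup reads all of pg_index.) 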
That doesn't\naddress the scanning problem in general but it should improve\nperformance for this part of the planner quite a bit. The trick is to\nensure that other backends update their cached info whenever an index\nis added or deleted. I thought of one way to do that: force an update\nof the owning relation's pg_class tuple during CREATE or DROP INDEX,\neven when we don't have any actual change to make in its contents ---\nthat'd force a relcache invalidate cycle at other backends. (Maybe\nwe don't even need to change the pg_class tuple, but just send out a\nshared-cache-invalidate message as if we had.)\n\n> I know another case. pg_attrdef has no index on (adrelid,attnum)\n> though it has an index on (adrelid).\n\nDoesn't look to me like we need an index on (adrelid,attnum), at\nleast not in any paths that are common enough to justify maintaining\nanother index. The (adrelid) index supports loading attrdef data\ninto the relcache, which is the only path I'm particularly concerned\nabout performance of...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 18:41:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "I wrote:\n> We should just need a switching layer in front of heap_beginscan/\n> index_beginscan and heap_getnext/index_getnext...)\n\nAfter refreshing my memory of how these are used, it seems that\nwe'd have to change the API of either the heap or index scan routines\nin order to unify them like that. Might be worth doing to maintain\ncode cleanliness, though. The places Hiroshi has fixed to support\nboth index and seq scan look really ugly to my eyes ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 18:44:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> I've noticed the fact since before but haven't complained.\n> As far as I see,pg_index won't so big. In fact Matthias's case has\n> only 1 page after running vacuum for pg_index. In such cases\n> sequential scan is faster than index scan as you know.\n> I don't agree with you to increase system indexes easily.\n> Though I implemented REINDEX command to recover system\n> indexes it doesn't mean index corruption is welcome.\n> \n> I know another case. pg_attrdef has no index on (adrelid,attnum)\n> though it has an index on (adrelid).\n> \n> > More generally, someone should examine the other places where\n> > heap_getnext() loops occur, and see if any of them look like performance\n> > bottlenecks...\n> \n> Please don't lose sequential scan stuff even when changes to\n> index scan is needed because -P option of standalone postgres\n> needs sequential scan for system tables.\n\nCertainly whatever we do will be discussed. I realize initdb is an\nissue. However, I am not sure sequential scan is faster than index scan\nfor finding only a few rows in the table.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 19:21:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> >> Unfortunately there is no index on pg_index's indrelid column in 7.0,\n> >> so this is not fixable without an initdb. TODO item for 7.1, I guess.\n> \n> > I've noticed the fact since before but haven't complained.\n> > As far as I see, pg_index won't be so big. In fact Matthias's case has\n> > only 1 page after running vacuum for pg_index. In such cases\n> > sequential scan is faster than index scan as you know.\n> \n> True, but the differential isn't very big either when dealing with\n> a small table. I think I'd rather use an index and be assured that\n> performance doesn't degrade drastically when the database contains\n> many indexes.\n\nAgreed.\n\n> \n> I've also been thinking about ways to implement the relcache-based\n> caching of index information that I mentioned before. That doesn't\n> address the scanning problem in general but it should improve\n> performance for this part of the planner quite a bit. The trick is to\n> ensure that other backends update their cached info whenever an index\n> is added or deleted. I thought of one way to do that: force an update\n> of the owning relation's pg_class tuple during CREATE or DROP INDEX,\n> even when we don't have any actual change to make in its contents ---\n> that'd force a relcache invalidate cycle at other backends. (Maybe\n> we don't even need to change the pg_class tuple, but just send out a\n> shared-cache-invalidate message as if we had.)\n\nOh, good idea. Just invalidate the relation so we reload.\n\nBTW, Hiroshi is the one who gave me the recursion-prevention fix for the\nsystem index additions for 7.0.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 19:24:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Matthias Urlichs wrote:\n\n> Another attitude of the PostgreSQL developers might be to answer \"run\n> VACUUM\" whenever somebody reports performance problems. That answer is\n> not helpful at all WRT this benchmark, because the user who caused the\n> problem (\"test\", in my case) isn't permitted to run VACUUM on the\n> pg_index table.\n\nSpeaking of which, why can't any user who can change meta-data, also\nVacuum meta-data ? It's not a threat to security, is it?\n", "msg_date": "Sat, 20 May 2000 10:19:00 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "I wrote:\n>>>>> Nonono, the 1000 read() calls are triggered by a simple INSERT or UPDATE\n>>>>> call. They actually scan the pg_index table of the benchmark database.\n>\n> Ohh ... pg_index is the culprit! OK, I know exactly where that's coming\n> from: the planner is looking around to see what indexes might be\n> interesting for planning the query. Several comments here:\n>\n> 1. Probably we ought to try to bypass most of the planning process for\n> a simple INSERT ... VALUES. 
(I thought I had fixed that, but apparently\n> it's not getting short-circuited soon enough, if the search for indexes\n> is still happening.)\n\n\nIt never pays to assume you know what's happening without having looked\n:-(. It turns out the planner is not the only culprit: the executor's\nExecOpenIndices() routine *also* does a sequential scan of pg_index.\nI did short-circuit the planner's search in the INSERT ... VALUES case,\nbut of course the executor still must find out whether the table has\nindexes.\n\nIn UPDATE, DELETE, or INSERT ... SELECT, pg_index is scanned *twice*,\nonce in the planner and once in the executor. (In fact it's worse\nthan that: the planner scans pg_index separately for each table named\nin the query. At least the executor only does it once since it only\nhas to worry about one target relation.)\n\nDefinitely need to cache the indexing information...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 20:32:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> I wrote:\n> > We should just need a switching layer in front of heap_beginscan/\n> > index_beginscan and heap_getnext/index_getnext...)\n> \n> After refreshing my memory of how these are used, it seems that\n> we'd have to change the API of either the heap or index scan routines\n> in order to unify them like that. Might be worth doing to maintain\n> code cleanliness, though. The places Hiroshi has fixed to support\n> both index and seq scan look really ugly to my eyes ...\n\nAgreed, and I think there are a few places that have them.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 21:46:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> The MySQL people probably didn't dig deeper into PostgreSQL's innards.\n> They don't seem to think it's their job to find out exactly why their\n> benchmark runs so slow on some other databases, and I don't particularly\n> fault them for that attitude.\n\nHmm. And then who's job is it to take someone else's work and make it\naccurate? If the shoe were on the other foot: if I generated a\nbenchmark suite and features list, and it contained major and numerous\ninaccuracies, who would you expect to be responsible (or at least feel\nresponsible) for correcting/updating/improving it? 'Twould be me imho.\n\nWe've tried, and failed (to date) to contribute information to the\n\"crashme\" travesty. My recollection was a ~30% error rate on\ninformation for Postgres, and I didn't look into the stats for other\ndatabases. Check the archives for details.\n\n> The PostgreSQL community has an attitude too, after all.\n\nYup ;)\n\n> One of these might be to answer \"you must have had fsync turned on\"\n> whenever somebody reports a way-too-slow benchmark. In this case,\n> that's definitely not true.\n\nI'm sorry that has been your experience. imho, that initial response\nmight be considered \"helpful advice\", not \"attitude\". 
pg_attrdef has no index on (adrelid,attnum)\n> > though it has an index on (adrelid).\n> \n> Doesn't look to me like we need an index on (adrelid,attnum), at\n> least not in any paths that are common enough to justify maintaining\n> another index. The (adrelid) index supports loading attrdef data\n> into the relcache, which is the only path I'm particularly concerned\n> about performance of...\n>\n\nIt seems to me that an index on (adrelid,adnum) should\nexist instead of the current index. It identifies pg_attrdef.\nI say *Oops* about it in my trial implementation of ALTER\nTABLE DROP COLUMN.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n\n", "msg_date": "Sat, 20 May 2000 14:25:26 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>>> I know another case. pg_attrdef has no index on (adrelid,attnum)\n>>>> though it has an index on (adrelid).\n>> \n>> Doesn't look to me like we need an index on (adrelid,attnum), at\n>> least not in any paths that are common enough to justify maintaining\n>> another index. The (adrelid) index supports loading attrdef data\n>> into the relcache, which is the only path I'm particularly concerned\n>> about performance of...\n\n> It seems to me that an index on (adrelid,adnum) should\n> exist instead of the current index. It identifies pg_attrdef.\n> I say *Oops* about it in my trial implementation of ALTER\n> TABLE DROP COLUMN.\n\nRight, I saw that. But it seems to be the only place where such an\nindex would be useful. The relcache-loading routines, which seem to\nbe the only performance-critical access to pg_attrdef, prefer an index\non adrelid only. Is it worth maintaining a 2-column index (which is\nbulkier and slower than a 1-column one) just to speed up ALTER TABLE\nDROP COLUMN?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 May 2000 01:30:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nTom Lane:\n> It never pays to assume you know what's happening without having looked\n> :-(. It turns out the planner is not the only culprit: the executor's\n> ExecOpenIndices() routine *also* does a sequential scan of pg_index.\n\nThat meshes with my observation that updates seem to do twice as many\nread() calls on pg_index than inserts.\n\nFor this test, anyway.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nStatistics: Nubers looking for an argument.\n", "msg_date": "Sat, 20 May 2000 10:09:00 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nThomas Lockhart:\n> \n> Hmm. And then who's job is it to take someone else's work and make it\n> accurate? If the shoe were on the other foot: if I generated a\n> benchmark suite and features list, and it contained major and numerous\n> inaccuracies, who would you expect to be responsible (or at least feel\n> responsible) for correcting/updating/improving it? 
'Twould be me imho.\n> \nUmm, there's still a difference between saying (a) \"it's broken, fix\nit\", (b) \"here's my analysis as to what exactly is broken, can you fix\nit\", and (c) \"here's a patch that fixes it\".\n\nI get the distinct impression that most of the communication between the\nPostgreSQL and MySQL people has been looking more like (a) in the\npast... if I can help both projects by doing some \"translation\" towards\n(b) and (c), if at all possible, then so much the better.\n\n> We've tried, and failed (to date) to contribute information to the\n> \"crashme\" travesty. My recollection was a ~30% error rate on\n> information for Postgres, and I didn't look into the stats for other\n> databases. Check the archives for details.\n> \nAttached is the current crashme output. \"crash_me_safe\" is off only\nbecause of the fact that some tests go beyond available memory.\nThere's no sense in testing how far you can push a \"SELECT a from b where\nc = 'xxx(several megabytes worth of Xes)'\" query when the size fo a TEXT\nfield is limited to 32k.\n\nLimits with '+' in front of the number say that this is the max value\ntested, without implying whether higher values are OK or not.\n\nIf you have any remarks, especially about the '=no' results (i.e. you\nthink PostgreSQL can do that, therefore the crashme test must be wrong\nsomehow), tell me. Otherwise I'll forward the results to the MySQL\npeople next week.\n\n\nThe crash-me test script, BTW, is included in MySQL's sql-bench\nsubdirectory.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nThe real character of a man is found out by his amusements.\n -- Joshua Reynolds", "msg_date": "Sat, 20 May 2000 12:14:38 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Matthias Urlichs wrote:\n> Attached is the current crashme output. \"crash_me_safe\" is off only\n> because of the fact that some tests go beyond available memory.\n> There's no sense in testing how far you can push a \n> \"SELECT a from b where c = 'xxx(several megabytes worth of Xes)'\"\n> query when the size fo a TEXT field is limited to 32k.\n> \n> Limits with '+' in front of the number say that this is the max value\n> tested, without implying whether higher values are OK or not.\n> \n> If you have any remarks, especially about the '=no' results (i.e. you\n> think PostgreSQL can do that, therefore the crashme test must be wrong\n> somehow), tell me. Otherwise I'll forward the results to the MySQL\n> people next week.\n\nHow about:\n\n1. alter_rename_table = no\n\nThe syntax in PostgreSQL is ALTER TABLE x RENAME TO y;\n\n2. atomic_updates = no\n\nHuh? Besides being paranoid about fsync()'ing transactions how is\na transaction based MVCC not atomic with respect to updates?\n\n3. automatic_rowid = no\n\nThe description simply says Automatic rowid. Does this apply to\nquery result sets or to the underlying relation? If the latter,\nPostgreSQL has, of course, an OID for every tuple in the\ndatabase.\n\n4. binary_items = no\n\nRead up on large objects...\n\n5. connections = 32\n\nThis, should, of course be +32, since PostgreSQL can easily\nhandle hundreds of simultaneous connections.\n\n6. create_table_select = no\n\nAgain. PostgreSQL supports CREATE TABLE AS SELECT (i.e. Oracle),\nand SELECT INTO syntax.\n\n7. 
except = no\n\nPostgreSQL has had both INTERSECT and EXCEPT since 6.5.0 (albeit\nthey're slow).\n\nI'm starting to get very tired of this. I don't see why\nPostgreSQL users are obligated to get MySQL tests correct. And\nI'm only 15% through the list...\n\nBottom line...either the test writers are ignorant or deceptive.\nEither way I won't trust my data with them...\n\nMike Mascari\n", "msg_date": "Sat, 20 May 2000 07:02:42 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> >>>> I know another case. pg_attrdef has no index on (adrelid,attnum)\n> >>>> though it has an index on (adrelid).\n> >> \n> >> Doesn't look to me like we need an index on (adrelid,attnum), at\n> >> least not in any paths that are common enough to justify maintaining\n> >> another index. The (adrelid) index supports loading attrdef data\n> >> into the relcache, which is the only path I'm particularly concerned\n> >> about performance of...\n> \n> > It seems to me that an index on (adrelid,adnum) should\n> > exist instead of the current index. It identifies pg_attrdef.\n> > I say *Oops* about it in my trial implementation of ALTER\n> > TABLE DROP COLUMN.\n> \n> Right, I saw that. But it seems to be the only place where such an\n> index would be useful. The relcache-loading routines, which seem to\n> be the only performance-critical access to pg_attrdef, prefer an index\n> on adrelid only. Is it worth maintaining a 2-column index (which is\n> bulkier and slower than a 1-column one) just to speed up ALTER TABLE\n> DROP COLUMN?\n>\n\nI don't mind so much about the performance in this case.\nThe difference would be little.\n\nIsn't it a fundamental principle to define primary(unique\nidentification) constraint for each table ?\nI had never thought that the only one index of pg_attrdef \nisn't an unique identification index until I came across the\nunexpcted result of my DROP COLUMN test case.\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Sun, 21 May 2000 01:28:31 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nMike Mascari:\n> \n> 1. alter_rename_table = no\n> \n> The syntax in PostgreSQL is ALTER TABLE x RENAME TO y;\n> \nThey say \"alter table crash_q rename crash_q1\".\n\nWhat does the official standard say (assuming any exists) -- is the \"to\"\noptional or not?\n\n> 2. atomic_updates = no\n> \n> Huh? Besides being paranoid about fsync()'ing transactions how is\n> a transaction based MVCC not atomic with respect to updates?\n> \nThat's a misnomer. They actually mean this:\n\n\tcreate table crash_q (a integer not null);\n\tcreate unique index crf on crash_q(a);\n\n\tinsert into crash_q values (2);\n\tinsert into crash_q values (3);\n\tinsert into crash_q values (1);\n\tupdate crash_q set a=a+1;\n\n> 3. automatic_rowid = no\n> \n> The description simply says Automatic rowid. Does this apply to\n> query result sets or to the underlying relation? If the latter,\n> PostgreSQL has, of course, an OID for every tuple in the\n> database.\n> \nI'll have them fix that. MySQL calls them \"_rowid\" and apparently tests\nonly for these.\n\n> 4. 
binary_items = no\n> \n> Read up on large objects...\n> \n... with an ... erm ... let's call it \"nonstandard\" ... interface.\n\n> 5. connections = 32\n> \n> This, should, of course be +32, since PostgreSQL can easily\n> handle hundreds of simultaneous connections.\n> \nThe testing code (Perl) looks like this, and it bombs after the 32nd\nconnection.\n\n for ($i=1; $i < $max_connections ; $i++)\n {\n if (!($dbh=DBI->connect($server->{'data_source'},$opt_user,$opt_password,\n { PrintError => 0})))\n {\n print \"Last connect error: $DBI::errstr\\n\" if ($opt_debug);\n last;\n }\n $dbh->{LongReadLen}= $longreadlen; # Set retrieval buffer\n print \".\" if ($opt_debug);\n push(@connect,$dbh);\n }\n print \"$i\\n\";\n\nI do not know where that limit comes from.\nIt might be the DBI interface to PostgreSQL, or a runtime limit.\n\nAnyway, $max_connections has the value to 1000.\n\n> 6. create_table_select = no\n> \n> Again. PostgreSQL supports CREATE TABLE AS SELECT (i.e. Oracle),\n> and SELECT INTO syntax.\n\nTest code:\n\tcreate table crash_q SELECT * from crash_me;\n\nAgain, is the \"AS\" optional or not?\n\n> 7. except = no\n> \n> PostgreSQL has had both INTERSECT and EXCEPT since 6.5.0 (albeit\n> they're slow).\n> \nLooking at the test, we see it doing this:\n\n\tcreate table crash_me (a integer not null,b char(10) not null);\n\tinsert into crash_me (a,b) values (1,'a');\n\tcreate table crash_me2 (a integer not null,b char(10) not null, c integer);\n\tinsert into crash_me2 (a,b,c) values (1,'b',1);\n\tselect * from crash_me except select * from crash_me2;\n\nFor what it's worth, there is at least one database which doesn't\nhave this restriction (i.e., that the number of columns must be\nidentical) (namely SOLID).\n\nSo this test needs to be split into two. I'll do that.\n\n> I'm starting to get very tired of this. I don't see why\n> PostgreSQL users are obligated to get MySQL tests correct. And\n> I'm only 15% through the list...\n> \n_Somebody_ has to get these things right. I'm not suggesting that it's\nany obligation of yours specifically, but somebody's gotta do it, and\n(IMHO) it can only be done by somebody who already knows _something_\nabout the databse to be tested.\n\n> Bottom line...either the test writers are ignorant or deceptive.\n\nOr the tests are just badly written. Or they're too old and suffer from\nsevere bit rot.\n\n\nFor what its worth, I do NOT think the people who wrote these tests\nare either ignorant or deceptive. Most, if not all, of these tests\nare OK when checked against at least one SQLish database.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nFreedom of opinion can only exist when\nthe government thinks itself secure.\n -- Bertrand Russell (1872-1967)\n", "msg_date": "Sat, 20 May 2000 20:17:10 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nI've found another one of these performance problems in the benchmark,\nrelated to another ignored index.\n\nThe whole thing works perfectly after a VACUUM ANALYZE on the\ntable.\n\nIMHO this is somewhat non-optimal. In the absence of information\nto the contrary, PostgreSQL should default to using an index if\nit might be appropriate, not ignore it.\n\nI am thus throwing away yet another benchmark run -- the query now runs\n300 times faster. 
*Sigh* \n\ntest=# vacuum bench1;\nVACUUM\ntest=# \\d bench1\n Table \"bench1\"\n Attribute | Type | Modifier \n-----------+----------+----------\n id | integer | not null\n id2 | integer | not null\n id3 | integer | not null\n dummy1 | char(30) | \nIndices: bench1_index_,\n bench1_index_1\n\ntest=# \\d bench1_index_\n\n\nIndex \"bench1_index_\"\n Attribute | Type \n-----------+---------\n id | integer\n id2 | integer\nunique btree\n\ntest=# \ntest=# \ntest=# \\d bench1_index_1\nIndex \"bench1_index_1\"\n Attribute | Type \n-----------+---------\n id3 | integer\nbtree\n\ntest=# explain update bench1 set dummy1='updated' where id=150;\nNOTICE: QUERY PLAN:\n\nSeq Scan on bench1 (cost=0.00..6843.00 rows=3000 width=18)\n\nEXPLAIN\ntest=# vacuum bench1;\nVACUUM\ntest=# explain update bench1 set dummy1='updated' where id=150;\nNOTICE: QUERY PLAN:\n\nSeq Scan on bench1 (cost=0.00..6843.00 rows=3000 width=18)\n\nEXPLAIN\ntest=# select count(*) from bench1;\n count \n--------\n 300000\n(1 row)\n\ntest=# select count(*) from bench1 where id = 150;\n count \n-------\n 1\n(1 row)\n\ntest=# explain select count(*) from bench1 where id = 150;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=6850.50..6850.50 rows=1 width=4)\n -> Seq Scan on bench1 (cost=0.00..6843.00 rows=3000 width=4)\n\nEXPLAIN\n\n\n***************************************************************\n\nRelated to this:\n\ntest=# explain select id from bench1 order by id;\nNOTICE: QUERY PLAN:\n\nSort (cost=38259.21..38259.21 rows=300000 width=4)\n -> Seq Scan on bench1 (cost=0.00..6093.00 rows=300000 width=4)\n\nEXPLAIN\n\nThe basic idea to speed this one up (a lot...) would be to walk the index.\n\nThis is _after_ ANALYZE, of course.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nTo be positive: To be mistaken at the top of one's voice.\n -- Ambrose Bierce, The Devil's Dictionary\n", "msg_date": "Sat, 20 May 2000 20:54:20 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More Performance" }, { "msg_contents": "I know I am going to regret believing that I will actually make any\ndifference, but I am going to shoot myself anyway.\n\nI am writing this more for the new PostgreSQL members who were not\naround last time than in any belief it will make a difference on the\nMySQL end.\n\n\n\n> Hi,\n> \n> Mike Mascari:\n> > \n> > 1. alter_rename_table = no\n> > \n> > The syntax in PostgreSQL is ALTER TABLE x RENAME TO y;\n> > \n> They say \"alter table crash_q rename crash_q1\".\n> \n> What does the official standard say (assuming any exists) -- is the \"to\"\n> optional or not?\n\nI don't see any RENAME in the SQL92 spec. Now, how hard is it to do a\n'man alter_table' and see what it says at the top of the screen?\n\n> \n> > 2. atomic_updates = no\n> > \n> > Huh? Besides being paranoid about fsync()'ing transactions how is\n> > a transaction based MVCC not atomic with respect to updates?\n> > \n> That's a misnomer. They actually mean this:\n> \n> \tcreate table crash_q (a integer not null);\n> \tcreate unique index crf on crash_q(a);\n> \n> \tinsert into crash_q values (2);\n> \tinsert into crash_q values (3);\n> \tinsert into crash_q values (1);\n> \tupdate crash_q set a=a+1;\n\nPoorly named, huh? How do you think it got such a name? This item was\non the crashme tests before TRANSACTION was on there? Can you explain\nhow a very exotic issue got on there year(s) before transactions. 
\nTransactions got on there only because I complained.\n\n> \n> > 3. automatic_rowid = no\n> > \n> > The description simply says Automatic rowid. Does this apply to\n> > query result sets or to the underlying relation? If the latter,\n> > PostgreSQL has, of course, an OID for every tuple in the\n> > database.\n> > \n> I'll have them fix that. MySQL calls them \"_rowid\" and apparently tests\n> only for these.\n\nWell, I don't see _rowid in the SQL spec either, so we are both\nnon-standard here, though I believe our OID is SQL3.\n\n\n> \n> > 4. binary_items = no\n> > \n> > Read up on large objects...\n> > \n> ... with an ... erm ... let's call it \"nonstandard\" ... interface.\n\nYes.\n\n> \n> > 5. connections = 32\n> > \n> > This, should, of course be +32, since PostgreSQL can easily\n> > handle hundreds of simultaneous connections.\n> > \n> The testing code (Perl) looks like this, and it bombs after the 32nd\n> connection.\n> \n> for ($i=1; $i < $max_connections ; $i++)\n> {\n> if (!($dbh=DBI->connect($server->{'data_source'},$opt_user,$opt_password,\n> { PrintError => 0})))\n> {\n> print \"Last connect error: $DBI::errstr\\n\" if ($opt_debug);\n> last;\n> }\n> $dbh->{LongReadLen}= $longreadlen; # Set retrieval buffer\n> print \".\" if ($opt_debug);\n> push(@connect,$dbh);\n> }\n> print \"$i\\n\";\n> \n> I do not know where that limit comes from.\n> It might be the DBI interface to PostgreSQL, or a runtime limit.\n> \n> Anyway, $max_connections has the value to 1000.\n\nYou have to recompile the backend to increase it. Not on the client\nend. See FAQ.\n\n> \n> > 6. create_table_select = no\n> > \n> > Again. PostgreSQL supports CREATE TABLE AS SELECT (i.e. Oracle),\n> > and SELECT INTO syntax.\n> \n> Test code:\n> \tcreate table crash_q SELECT * from crash_me;\n> \n> Again, is the \"AS\" optional or not?\n\nman create_table. That is all it takes. There is not standard for\nthis. It is from Oracle. Is their AS optional? Does it really matter?\n\n> \n> > 7. except = no\n> > \n> > PostgreSQL has had both INTERSECT and EXCEPT since 6.5.0 (albeit\n> > they're slow).\n> > \n> Looking at the test, we see it doing this:\n> \n> \tcreate table crash_me (a integer not null,b char(10) not null);\n> \tinsert into crash_me (a,b) values (1,'a');\n> \tcreate table crash_me2 (a integer not null,b char(10) not null, c integer);\n> \tinsert into crash_me2 (a,b,c) values (1,'b',1);\n> \tselect * from crash_me except select * from crash_me2;\n> \n> For what it's worth, there is at least one database which doesn't\n> have this restriction (i.e., that the number of columns must be\n> identical) (namely SOLID).\n> \n> So this test needs to be split into two. I'll do that.\n\nSo you test EXCEPT by having a different number of columns. I can see\nit now, \"Hey we don't have EXCEPT. PostgreSQL does it, but they can't\nhandle a different number of columns. Let's do only that test so we\nlook equal.\"\n\n> \n> > I'm starting to get very tired of this. I don't see why\n> > PostgreSQL users are obligated to get MySQL tests correct. And\n> > I'm only 15% through the list...\n> > \n> _Somebody_ has to get these things right. I'm not suggesting that it's\n> any obligation of yours specifically, but somebody's gotta do it, and\n> (IMHO) it can only be done by somebody who already knows _something_\n> about the database to be tested.\n> \n> > Bottom line...either the test writers are ignorant or deceptive.\n> \n> Or the tests are just badly written. 
Or they're too old and suffer from\n> severe bit rot.\n> \n> \n> For what its worth, I do NOT think the people who wrote these tests\n> are either ignorant or deceptive. Most, if not all, of these tests\n> are OK when checked against at least one SQLish database.\n\nIn looking at each of these items, it is impossible for me to believe\nthat the tests were not written by either very ignorant people (\"I can't\nrun 'man') or very deceptive people (\"Let's make ourselves look good.\").\n\nIf you view this from outside the MySQL crowd, can you see how we would\nfeel this way? This is just a small example of the volumes of reasons\nwe have in believing this.\n\nIf you are going to publish things about other databases on your web\nsite, you had better do a reasonable job to see that is it accurate and\nfair. If it can't be done, take it off. Don't leave it up and have it\nbe wrong, and ignore people in the past who tell you it is wrong.\n\nIt never has been fair, and I suspect never will be, because this is\nhashed around every year with little change or acknowledgement.\n\nSo, yea, we have an attitude. We are usually nice folks, so if the\nmajority of us have a bad attitude, there must be some basis for that\nfeeling, and I can tell you, the majority of us do have a bad attitude\non the topic.\n\nI know the MySQL folks don't have a bad attitude about us, and you know,\nthey don't because we never did anything like this Crashme to them. But\nactually, we are tired of being pushed by an ignorant/deceptive crashme\ntest, and we are starting to push back. But, you can be sure we will\nnever stoop to the level of the crashme test.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 May 2000 14:56:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "> Hi,\n> \n> I've found another one of these performance problems in the benchmark,\n> related to another ignored index.\n> \n> The whole thing works perfectly after a VACUUM ANALYZE on the\n> table.\n> \n> IMHO this is somewhat non-optimal. In the absence of information\n> to the contrary, PostgreSQL should default to using an index if\n> it might be appropriate, not ignore it.\n\nThis is an interesting idea. So you are saying that if a column has no\nvacuum analyze statistics, assume it is unique? Or are you talking\nabout a table that has never been vacuumed? Then we assume it is a\nlarge table. Interesting. It would help some queries, but hurt others.\nWe have gone around and around on what the default stats should be.\nTom Lane can comment on this better than I can.\n\n> \n> Related to this:\n> \n> test=# explain select id from bench1 order by id;\n> NOTICE: QUERY PLAN:\n> \n> Sort (cost=38259.21..38259.21 rows=300000 width=4)\n> -> Seq Scan on bench1 (cost=0.00..6093.00 rows=300000 width=4)\n> \n> EXPLAIN\n> \n> The basic idea to speed this one up (a lot...) would be to walk the index.\n> \n> This is _after_ ANALYZE, of course.\n\nBut you are grabbing the whole table. Our indexes are separate files. \nThe heap is unordered, meaning a sequential scan and order by is usually\nfaster than an index walk unless there is a restrictive WHERE clause.\n\nThanks for the tip about needing an index on pg_index. That will be in\n7.1. 
I remember previous crashme rounds did bring up some good info for\nus, like the fact older releases couldn't handle trailing comments from\nperl.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 May 2000 15:17:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More Performance" }, { "msg_contents": "\"Matthias Urlichs\" <[email protected]> writes:\n>> Hmm. And then who's job is it to take someone else's work and make it\n>> accurate? If the shoe were on the other foot: if I generated a\n>> benchmark suite and features list, and it contained major and numerous\n>> inaccuracies, who would you expect to be responsible (or at least feel\n>> responsible) for correcting/updating/improving it? 'Twould be me imho.\n>> \n> Umm, there's still a difference between saying (a) \"it's broken, fix\n> it\", (b) \"here's my analysis as to what exactly is broken, can you fix\n> it\", and (c) \"here's a patch that fixes it\".\n\nGood luck. Close analysis of the crashme test leaves an extremely bad\ntaste in the mouth: there are just too many cases where it's clearly\ndesigned as a pro-MySQL advertising tool and not an honest attempt to\ndescribe reality. Shall we consider details?\n\n> Attached is the current crashme output. \"crash_me_safe\" is off only\n> because of the fact that some tests go beyond available memory.\n> There's no sense in testing how far you can push a \"SELECT a from b where\n> c = 'xxx(several megabytes worth of Xes)'\" query when the size fo a TEXT\n> field is limited to 32k.\n\nI would not like to see us labeled \"crashme unsafe\" merely because\nsomeone is too impatient to let the test run to conclusion. But there's\na more interesting problem here: using stock crashme and Postgres 7.0,\non my system it's crashme that crashes and not Postgres! The crashme\nPerl script is a huge memory hog and runs into the kernel's process-size\nlimit long before the connected backend does. To get it to run to\ncompletion, I have to reduce the thing's limit on the longest query it\nwill try:\n\n*** crash-me~\tSat May 20 12:28:11 2000\n--- crash-me\tSat May 20 13:21:11 2000\n***************\n*** 104,110 ****\n #\n \n $max_connections=\"+1000\"; # Number of simultaneous connections\n! $max_buffer_size=\"+16000000\"; # size of communication buffer.\n $max_string_size=\"+8000000\"; # Enough for this test\n $max_name_length=\"+512\"; # Actually 256, but ...\n $max_keys=\"+64\"; # Probably too big.\n--- 104,110 ----\n #\n \n $max_connections=\"+1000\"; # Number of simultaneous connections\n! $max_buffer_size=\"+1000000\"; # size of communication buffer.\n $max_string_size=\"+8000000\"; # Enough for this test\n $max_name_length=\"+512\"; # Actually 256, but ...\n $max_keys=\"+64\"; # Probably too big.\n\nA few months ago I was able to use max_buffer_size = +2000000, but\ncrashme 1.43 seems to be an even worse memory hog than its predecessors.\nAt this setting, the Perl process tops out at about 114Mb while the\nconnected backend grows to 66Mb. (I run with a process limit of 128Mb.)\nTo be fair, this could be Perl's fault more than crashme's. I'm using\nPerl 5.005_03 ... 
anyone know if more recent versions use less memory?\n\n\nNow, on to some specific complaints:\n\n> alter_drop_col=no\t\t\t# Alter table drop column\n\nWhile our ALTER TABLE support is certainly pretty weak, it should be\nnoted that this test will continue to fail even when we have ALTER TABLE\nDROP COLUMN, because crashme is testing for a non-SQL-compliant syntax.\n\n> alter_rename_table=no\t\t\t# Alter table rename table\n\nWe have ALTER TABLE RENAME ... but not under the syntax crashme is\ntesting. Since SQL92 doesn't specify a syntax for RENAME, there's no\nabsolute authority for this --- but a quick check of the comparative\ncrashme results at http://www.mysql.com/crash-me-choose.htmy shows that\n*none* of the major commercial DBMSs \"pass\" this test. Rather curious\nthat crashme uses a MySQL-only syntax for this test, no?\n\n> atomic_updates=no\t\t\t# atomic updates\n\nWhat's actually being tested here is whether the DBMS will let you do\n\"update crash_q set a=a+1\" in a table with a unique index on \"a\" and\nconsecutive pre-existing values. In other words, is the uniqueness\nconstraint checked on a per-tuple-update basis, or deferred to end of\ntransaction? It's fair to blame Postgres for not supporting a deferred\nuniqueness check, but this test is extremely misleadingly labeled.\nA person who hadn't examined the guts of crashme would probably think\nit tests whether concurrent transactions see each others' results\natomically.\n\n> automatic_rowid=no\t\t\t# Automatic rowid\n\nTest is actually looking for a system column named \"_rowid\". Our OIDs\nserve the same purpose, and I believe there are equivalent features in\nmany other DBMSes. Again, MySQL is the only \"passer\" of this test,\nwhich says more about their level of standardization than other\npeople's.\n\n> binary_items=no\t\t\t\t# binary items (0x41)\n\nWe have binary literals (per the test name) and hex literals (what\nit actually appears to be testing). Unfortunately for us, ours are\nSQL92-compliant syntax, and what crashme is looking for isn't.\n\n> comment_#=no\t\t\t\t# # as comment\n> comment_--=yes\t\t\t# -- as comment\n> comment_/**/=yes\t\t\t# /* */ as comment\n> comment_//=no\t\t\t\t# // as comment\n\nIt'd be helpful to the reader if they indicated which two of these\nconventions are SQL-compliant ... of course, that might expose the\nfact that MySQL isn't ...\n\n> connections=32\t\t\t\t# Simultaneous connections\n\nShould probably be noted that this is just the default limit (chosen to\navoid creating problems on small systems) and can easily be raised at\npostmaster start time.\n\n> crash_me_safe=no\t\t\t# crash me safe\n\nI get \"yes\", and I'd *really* appreciate it if you not submit this\nmisleading statement.\n\n> create_table_select=no\t\t\t# create table from select\n\nThis is looking for \"create table crash_q SELECT * from crash_me\",\nwhich again appears to be a MySQL-only syntax. We have the same feature\nbut we want \"AS\" in front of the \"SELECT\". Dunno how other DBMSs do it.\n\n> date_zero=no\t\t\t\t# Supports 0000-00-00 dates\n\nNote this is not checking to see if the date format yyyy-mm-dd is\naccepted, it's checking to see if the specific value '0000-00-00'\nis accepted. Haven't these people heard of NULL? 
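(What one would actually write for an unknown date, using a hypothetical date column d:\n\n\tinsert into crash_q (d) values (NULL);\t\t-- accepted anywhere\n\tinsert into crash_q (d) values ('0000-00-00');\t-- rejected, no such day\n\nrather than inventing a zero date.) 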
Another test that\nonly MySQL \"passes\".\n\n> except=no\t\t\t\t# except\n\nThis test is checking:\ncreate table crash_me (a integer not null,b char(10) not null);\ncreate table crash_me2 (a integer not null,b char(10) not null, c integer);\nselect * from crash_me except select * from crash_me2;\nPostgres rejects it with\nERROR: Each UNION | EXCEPT | INTERSECT query must have the same number of columns.\nUnsurprisingly, hardly anyone else accepts it either.\n\n> except_all=no\t\t\t\t# except all\n\nWhile we do not have \"except all\", when we do this test will still fail\nfor the same reason as above.\n\n> func_extra_not=no\t\t\t# Function NOT in SELECT\n\nWhat they are looking for here is \"SELECT NOT 0\", which Postgres rejects\nas a type violation. SQL-compliant \"NOT FALSE\" would work.\n\nBTW, while I haven't got the patience to go through the function list in\ndetail, quite a few functions that we actually have are shown as \"not\nthere\" because of type resolution issues. For example they test exp()\nwith \"select exp(1)\" which fails because of ambiguity about whether\nexp(float8) or exp(numeric) is wanted. This will get cleaned up soon,\nbut it's not really a big problem in practice...\n\n> having_with_alias=no\t\t\t# Having on alias\n\nAgain, how curious that MySQL is the only DBMS shown as passing this\ntest. Couldn't be because it violates SQL92, could it?\n\n> insert_select=no\t\t\t# insert INTO ... SELECT ...\n\nWe would pass this test if the crashme script weren't buggy: it fails\nto clean up after a prior test that creates a crash_q table with\ndifferent column names. The prior test is testing \"drop table if\nexists\", which means the only way to be shown as having this\nSQL-standard feature is to implement the not-standard \"if exists\".\n\n> intersect=no\t\t\t\t# intersect\n> intersect_all=no\t\t\t# intersect all\n\nSee above comments for EXCEPT.\n\n> logical_value=1\t\t\t# Value of logical operation (1=1)\n\nA rather odd result, considering that what Postgres actually returns for\n\"SELECT (1=1)\" is 't'. But showing the correct answer isn't one of\ncrashme's highest priorities...\n\n> minus_neg=no\t\t\t\t# Calculate 1--1\n\nAnother case where \"passing\" the test means accepting MySQL's version of\nreality instead of SQL92's. All the SQL-compliant DBMSs think -- is a\ncomment introducer, so \"select a--1 from crash_me\" produces an error ...\nbut not in MySQL ...\n\n> quote_ident_with_\"=no\t\t\t# \" as identifier quote (ANSI SQL)\n> quote_ident_with_[=no\t\t\t# [] as identifier quote\n> quote_ident_with_`=no\t\t\t# ` as identifier quote\n\nHere at least they admit which variant is ANSI ;-). Postgres doesn't\npass because we think 'select \"A\" from crash_me' should look for a\ncolumn named upper-case-A, but the column is actually named\nlower-case-a. We are not conforming to the letter of the SQL standard\nhere --- SQL says an unquoted name should be mapped to all upper case,\nnot all lower case as we do it, which is how the column got to be named\nthat way. We're closer than MySQL though...\n\n> select_string_size=+16208\t\t# constant string size in SELECT\n\nI got 1048567 here, roughly corresponding to where I set max_buffer_size.\nNot sure why you get a smaller answer.\n\n> select_table_update=no\t\t\t# Update with sub select\n\nWe certainly have update with sub select. 
What they're looking for is\nthe non-SQL-compliant syntax\n\tupdate crash_q set crash_q.b=\n\t\t(select b from crash_me where crash_q.a = crash_me.a);\nIt works in Postgres if you remove the illegal table specification:\n\tupdate crash_q set b=\n\t\t(select b from crash_me where crash_q.a = crash_me.a);\n\n> type_sql_bit=yes\t\t\t# Type bit\n> type_sql_bit(1_arg)=yes\t\t\t# Type bit(1 arg)\n> type_sql_bit_varying(1_arg)=yes\t\t# Type bit varying(1 arg)\n\nIt should probably be noted that we only have syntax-level support for\nBIT types in 7.0; they don't actually work. The test is not deep enough\nto notice that, however.\n\n\nGeneral comments:\n\nIt appears that they've cleaned up their act a little bit. The last\ntime I examined crashme in any detail, there was an even longer list\nof tests that checked for standard features but were careful to use a\nnonstandard variant so they could claim that other people failed to\nhave the feature at all.\n\nMore generally, it's difficult to take seriously a test method and\npresentation method that puts more weight on how many variant spellings\nof \"log()\" you accept than on whether you have subselects. (I count\nfive entries versus two.)\n\nOne could also complain about the very large number of tests that are\nchecking features that are non-SQL if not downright SQL-contradictory,\nbut are listed simply as bullet points with no pro or con. A naive\nreader would think that green stars are always good; they are not,\nbut how are you to tell without a copy of the SQL spec in hand?\n\nFinally, the test coverage seems to have been designed with an eye\ntowards giving MySQL as many green stars as possible, not towards\nexercising the most important features of SQL. It would be interesting\nto see considerably more coverage of subselects, for example, and I\nexpect that'd turn up shortcomings in a number of products including\nPostgres. But it won't happen as long as crashme is a tool of, by, and\nfor MySQL partisans (at least not till MySQL has subselects, whereupon\nthe test coverage will no doubt change).\n\n\nJust FYI, I attach a diff between what you presented and what I get from\nrunning the current crashme. 
I don't understand exactly what's causing\nthe small differences in the values of some of the size limits.\nPerhaps it is a side effect of using a different max_buffer_size, but\nit seems really weird.\n\n\t\t\tregards, tom lane\n\n\n37c37\n< crash_me_safe=no\t\t\t# crash me safe\n---\n> crash_me_safe=yes\t\t\t# crash me safe\n309c309\n< max_char_size=8104\t\t\t# max char() size\n---\n> max_char_size=8088\t\t\t# max char() size\n315c315\n< max_index_length=2704\t\t\t# index length\n---\n> max_index_length=2700\t\t\t# index length\n317c317\n< max_index_part_length=2704\t\t# max index part length\n---\n> max_index_part_length=2700\t\t# max index part length\n319,321c319,321\n< max_index_varchar_part_length=2704\t# index varchar part length\n< max_row_length=7949\t\t\t# max table row length (without blobs)\n< max_row_length_with_null=7949\t\t# table row length with nulls (without blobs)\n---\n> max_index_varchar_part_length=2700\t# index varchar part length\n> max_row_length=7937\t\t\t# max table row length (without blobs)\n> max_row_length_with_null=7937\t\t# table row length with nulls (without blobs)\n326c326\n< max_text_size=8104\t\t\t# max text or blob size\n---\n> max_text_size=8092\t\t\t# max text or blob size\n328c328\n< max_varchar_size=8104\t\t\t# max varchar() size\n---\n> max_varchar_size=8088\t\t\t# max varchar() size\n344c344\n< operating_system=Linux 2.3.99s-noris-pre9-2 i686\t# crash-me tested on\n---\n> operating_system=HP-UX B.10.20 9000/780\t# crash-me tested on\n355c355\n< query_size=16777216\t\t\t# query size\n---\n> query_size=1048576\t\t\t# query size\n369c369\n< select_string_size=+16208\t\t# constant string size in SELECT\n---\n> select_string_size=1048567\t\t# constant string size in SELECT\n490c490\n< where_string_size=+16208\t\t# constant string size in where\n---\n> where_string_size=1048542\t\t# constant string size in where", "msg_date": "Sat, 20 May 2000 16:26:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "MySQL's \"crashme\" (was Re: Performance)" }, { "msg_contents": "Hi,\n\n[ Sorry if this reply is much too long. I know that...]\n\nBruce Momjian:\n> I know I am going to regret believing that I will actually make any\n> difference, but I am going to shoot myself anyway.\n> \nI sincerely hope/believe you're wrong.\n\n> > What does the official standard say (assuming any exists) -- is the \"to\"\n> > optional or not?\n> \n> I don't see any RENAME in the SQL92 spec. Now, how hard is it to do a\n> 'man alter_table' and see what it says at the top of the screen?\n> \nIt's not a question of your manpage vs. their manpage. I can read your\nmanpage just fine. It's a question of whether there is somethign that\ncan be regarded as a standard on it or not. \"Official\" is a poor wording\nin this case -- sorry.\n\nIf yes, then the test will be changed to do it the standard way.\nIf no, then I might have to test for both syntaxes, which is a PITA.\n\n\nWhile I'm at it, I note that the last sentence of that manpage says\n\n\tThe clauses to rename columns and tables are Postgres extensions\n\tfrom SQL92.\n\nCorrect me when I'm wrong, but is that really _your_ extension, or\ndid some other database vendor (who?) come up with it?\n\n> > Anyway, $max_connections has the value to 1000.\n> \n> You have to recompile the backend to increase it. Not on the client\n> end. See FAQ.\n\nI was compiling and running the backend with default options(*). That\nmeans that the tests will show the default limits. 
It does this for all\nthe other databases in the crash-me test result suite. (Oracle:40,\nmysql:100, solid:248, empress:10, interbase:10)\n\nAnyway, the max value for PostgreSQL, without recompiling the backend,\nis 1024 according to the FAQ; but there's no way an automated test can\nfind out _that_.\n\nI'll add a comment (\"installation default\") to that test column.\n\n(*) except for fsync, of course, in the interest of fairness.\n\n> man create_table. That is all it takes. There is not standard for\n> this. It is from Oracle. Is their AS optional? Does it really matter?\n> \nNo.\n\nWhat matters is that your opinion is that they are responsible for making\nthe test 100% accurate. Their reply to that is that many database\nvendors actually provided fixes for this test instead of bitching\nabout how inaccurate it is, thus they feel the obligation is on your\nside.\n\nNow I am of neither side. I am, IMHO, thus in a position to ask you\nabout your opinion of these inaccuracies, I am going to change \nthe crashme test to be a whole lot more accurate WRT PostgreSQL,\nI will feed these changes back to the MySQL people, and they'll\nincorporate these changes into their next release. (Their head honcho\n(Monty) has said so on their mailing list. I _am_ going to take him up\non it, and I can be quite obnoxious if somebody reneges on a promise.\n*EVIL*GRIN* )\n\nIf your opinion is that you have a right to be annoyed about all of this\nbecause you went through the whole thing last year, and the year before\nthat, and ..., ... well, I can understand your point of view.\n\nBut I honestly think that the problem is not one of either malice or\nstupidity. \"Different sets of priorities\" and \"different project\nstructure\" are equally-valid assumptions. At least for me. Until I'm\nproven wrong (IF I am).\n\n> So you test EXCEPT by having a different number of columns. I can see\n> it now, \"Hey we don't have EXCEPT. PostgreSQL does it, but they can't\n> handle a different number of columns. Let's do only that test so we\n> look equal.\"\n> \nThey might as well have written that test while checking their crash-me\nscript against SOLID and noting a few features MySQL doesn't have yet.\nOr they may have gotten it from them in the first place.\n\n\nI might add that their test lists 52 features of PostgreSQL which\nMySQL doesn't have (13 functions). It also lists 122 features of MySQL\nwhich PostgreSQL doesn't have; 78 of those are extra functions (40 of\nthese, just for M$-ODBC compatibility).\n\nSo it seems that overall, that crash-me test result is reasonably\nbalanced (39 vs. 44 non-function differences -- let's face it, adding\nanother function for compatibility with SQL variant FOO is one of the\neasier exercises here, whatever the current value of FOO is).\n\nThe result is going to be even more balanced when I'm through with it,\nbut I cannot do that on my own, as I do not have enough experience with\neither PostgreSQL or the various SQL standards. Thus, I'm asking.\n\nIs that really a problem?\n\n> If you view this from outside the MySQL crowd, can you see how we would\n> feel this way? This is just a small example of the volumes of reasons\n> we have in believing this.\n> \nI would like not to view this from any outside, inside, or whatever\nviewpoint. My goal is to get at least some part of the petty arguments\nout of the way because, in MY book at least, the _real_ \"battle\", such\nas there is, isn't PostgreSQL against MySQL! 
It's more-or-less-open-source databases on one side and closed-source products, some of\nwhich are the equivalent of MS Word in the database world (you know who\nI'm talking about ;-) on the other side.\n\n> It never has been fair, and I suspect never will be, because this is\n> hashed around every year with little change or acknowledgement.\n> \nIt is about as fair as a certain comparison chart on your site has been.\nIt's gone now, thus as far as I'm concerned it's water under the bridge.\nBesides, I'm not interested. Some of the members of this list seem\nto be pretty much burned out on the whole issue -- I can live with that;\nbut I'm trying to do something about the problem. Don't shoot the\nmessenger. ;-)\n\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nLady Luck brings added income today.\nLady friend takes it away tonight.\n", "msg_date": "Sat, 20 May 2000 22:26:40 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nBruce Momjian:\n> > \n> > > test=# explain select id from bench1 order by id;\n> > > Sort (cost=38259.21..38259.21 rows=300000 width=4)\n> > > -> Seq Scan on bench1 (cost=0.00..6093.00 rows=300000 width=4)\n> > > \n> > The heap is unordered, meaning a sequential scan and order by is usually\n> > faster than an index walk unless there is a restrictive WHERE clause.\n> > \nWhat heap? The index is a b-tree in this case. Thus you should be able\nto walk it and get the sorted result without ever touching the data\nfile.\n\nWhether that makes sense with the current structure of the PostgreSQL\nbackend is a different question, of course. Certain other databases\n(no, not just MySQL ;-) are capable of doing that optimization, however.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nThe difference between a rich man and a poor man is this -- the former\neats when he pleases, the latter when he can get it.\n -- Sir Walter Raleigh\n", "msg_date": "Sat, 20 May 2000 22:30:21 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More Performance" }, { "msg_contents": "Hi,\n\nBruce Momjian:\n> > IMHO this is somewhat non-optimal. In the absence of information\n> > to the contrary, PostgreSQL should default to using an index if\n> > it might be appropriate, not ignore it.\n> \n> This is an interesting idea. So you are saying that if a column has no\n> vacuum analyze statistics, assume it is unique?\n\nNope. But why should vacuum analyze be the one and only part of\nPostgreSQL where statistics are ever updated?\n\nWhen you have no statistics and a \"column_name=CONSTANT\" query for an\nindexed column yields exactly one result (actually, \"significantly fewer\nresults than there are 8-kbyte records in the table\" would do), you\nmight want to record the fact that using the index might, in hindsight,\nhave been a good idea after all.\n\nThen, when the next query like that comes in, you use the index.\n\nMaybe I'm too naive ;-) but I fail to see how this approach could\nbe either hard to implement or detrimental to performance.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. 
| http://smurf.noris.de/\n-- \nAn Army travels on her stomach.\n", "msg_date": "Sat, 20 May 2000 22:40:43 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More Performance" }, { "msg_contents": "> Hi,\n> \n> Bruce Momjian:\n> > > \n> > > test=# explain select id from bench1 order by id;\n> > > Sort (cost=38259.21..38259.21 rows=300000 width=4)\n> > > -> Seq Scan on bench1 (cost=0.00..6093.00 rows=300000 width=4)\n> > > \n> > The heap is unordered, meaning a sequential scan and order by is usually\n> > faster than an index walk unless there is a restrictive WHERE clause.\n> > \n> What heap? The index is a b-tree in this case. Thus you should be able\n> to walk it and get the sorted result without ever touching the data\n> file.\n> \n> Whether that makes sense with the current structure of the PostgreSQL\n> backend is a different question, of course. Certain other databases\n> (no, not just MySQL ;-) are capable of doing that optimization, however.\n\nWe can't read data from the index. It would be nice if we could, but we\ncan't. I think we believe that there are very few cases where this\nwould be a win. Usually you need non-indexed data too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 May 2000 16:43:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More Performance" }, { "msg_contents": "\"Matthias Urlichs\" <[email protected]> writes:\n>> So you test EXCEPT by having a different number of columns. I can see\n>> it now, \"Hey we don't have EXCEPT. PostgreSQL does it, but they can't\n>> handle a different number of columns. Let's do only that test so we\n>> look equal.\"\n>> \n> They might as well have written that test while checking their crash-me\n> script against SOLID and noting a few features MySQL doesn't have yet.\n> Or they may have gotten it from them in the first place.\n\nOur gripe is not that they're testing an extension we haven't got.\nIt's that the test result is misleadingly labeled. It doesn't say\n\"EXCEPT with incompatible select lists\", it says \"EXCEPT\", full stop.\nThat's deceptive. And no, we do not think it's an honest mistake.\nIt's part of a consistent pattern of misstatements that's been going on\nfor a long time. Sure, any one might be an honest mistake, but when you\nsee the same sort of thing over and over again, your credulity drops to\na low level. crashme is designed to make MySQL look good and everyone\nelse (not just Postgres) look bad.\n\nI'm glad to hear your optimism about cleaning this up. Perhaps you\ncan actually accomplish something, but most of us decided long ago\nthat crashme is not meant as a fair comparison. We have other things\nto do than ride herd on crashme and try to keep them to the straight\nand narrow, when they clearly have no desire to make it an unbiased\ntest and will not do so without constant prodding.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 May 2000 17:44:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> What heap? The index is a b-tree in this case. 
Thus you should be able\n>> to walk it and get the sorted result without ever touching the data\n>> file.\n\n> We can't read data from the index. It would be nice if we could, but we\n> can't.\n\nThe reason we can't is that we don't store tuple validity data in\nindexes. The index entry has the key value and a pointer to the tuple\nin the main heap file, but we have to visit the tuple to find out\nwhether it's committed or dead. If we did otherwise, then committing or\nkilling tuples would be lots slower than it is, because we'd have to\nfind and mark all the index entries pointing at the tuple, not just the\ntuple itself. It's a tradeoff... but we think it's a good one.\n\n> I think we believe that there are very few cases where this\n> would be a win. Usually you need non-indexed data too.\n\nRight, non-toy examples usually read additional data columns anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 May 2000 21:13:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More Performance " }, { "msg_contents": "> Our gripe is not that they're testing an extension we haven't got.\n> It's that the test result is misleadingly labeled. It doesn't say\n> \"EXCEPT with incompatible select lists\", it says \"EXCEPT\", full stop.\n> That's deceptive. And no, we do not think it's an honest mistake.\n> It's part of a consistent pattern of misstatements that's been going on\n> for a long time. Sure, any one might be an honest mistake, but when you\n> see the same sort of thing over and over again, your credulity drops to\n> a low level. crashme is designed to make MySQL look good and everyone\n> else (not just Postgres) look bad.\n> \n> I'm glad to hear your optimism about cleaning this up. Perhaps you\n> can actually accomplish something, but most of us decided long ago\n> that crashme is not meant as a fair comparison. We have other things\n> to do than ride herd on crashme and try to keep them to the straight\n> and narrow, when they clearly have no desire to make it an unbiased\n> test and will not do so without constant prodding.\n\nThe basic issue is that you can tell us that this big crashme mess\nhappened by mistake, and that there was no deceptive intent.\n\nHowever, really, we are not stupid enough to believe it.\n\nWhy don't you find out who wrote this thing, and ask them what they were\nthinking when they wrote it? I bet you will find out our perception is\ncorrect. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 May 2000 21:57:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nBruce Momjian:\n> > \n> > > 2. atomic_updates = no\n> > That's a misnomer. They actually mean this:\n> > \n> > \tcreate table crash_q (a integer not null);\n> > \tcreate unique index crf on crash_q(a);\n> > \n> > \tinsert into crash_q values (2);\n> > \tinsert into crash_q values (3);\n> > \tinsert into crash_q values (1);\n> > \tupdate crash_q set a=a+1;\n> \n> Poorly named, huh? How do you think it got such a name? 
This item was\n> on the crashme tests before TRANSACTION was on there?\n\nIt probably got that name because nobody thought about people\nassociating atomicity with transactions.\n\nAnyway, the issue isn't all that exotic. ms-sql, mimer, db2, solid and\nsybase are listed as supporting this kind of update.\n\n\nIf you can think of an understandable five-word-or-so description for\nit, I'll happily rename the test. I've been thinking about it for the\nlast ten minutes or so, but couldn't come up with one. :-/\n\n\nA different question is whether the database bungles the update when the\nfirst few rows can be updated and THEN you run into a conflict.\n\nPostgreSQL handles this case correctly, MySQL doesn't => I'll add a\ntest for it.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nThe only way to be good at everything you do is to only do the things\nyou are good at.\n", "msg_date": "Sun, 21 May 2000 05:06:10 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "[CC: to general list.]\n\n> > > What does the official standard say (assuming any exists) -- is the \"to\"\n> > > optional or not?\n> > \n> > I don't see any RENAME in the SQL92 spec. Now, how hard is it to do a\n> > 'man alter_table' and see what it says at the top of the screen?\n> > \n> It's not a question of your manpage vs. their manpage. I can read your\n> manpage just fine. It's a question of whether there is something that\n> can be regarded as a standard on it or not. \"Official\" is a poor wording\n> in this case -- sorry.\n> \n> If yes, then the test will be changed to do it the standard way.\n> If no, then I might have to test for both syntaxes, which is a PITA.\n> \n\nYou know, you are asking what syntax is SQL standard. It is actually\nnot our job to report it to you. If you are responsible for the test,\nyou should know what the standard says, and test against that. If you\nare not responsible for the test, then it shows that the person who is\nresponsible for the test doesn't care enough to test for SQL standard\nsyntax, only for MySQL syntax.\n\nYou know, there is a saying, \"Do it right, or don't do it at all.\" That\nis pretty much the PostgreSQL style. And if you are going to criticize\nsomeone, you better be sure you are right.\n\nWe didn't write the crashme test, we don't host it on our web site, we\ndidn't ask to be in it. Someone has to be responsible for the test, and\nknowing standard SQL syntax, and that must be whoever put it on the\nMySQL site. We really don't want to hear that it dropped from the sky\nand landed on the MySQL site, and no one there is responsible for it.\n\nIf we put something on our site, we are responsible for it. If we don't\nlike something or can't take ownership of it, we remove it.\n\nNow, I am not picking on you. You may have the best of intentions. But\nbasically someone has decided to put it on the MySQL site, and has not\nconsidered it worth their while to learn the SQL standard. They would\nrather make other people tell them about the SQL standard, and maybe,\njust maybe, we will fix the test someday. Well, I will tell you, we\nhave better things to do than fix the MySQL crashme test.\n\n> What matters is that your opinion is that they are responsible for making\n> the test 100% accurate. 
Their reply to that is that many database\n> vendors actually provided fixes for this test instead of bitching\n> about how inaccurate it is, thus they feel the obligation is on your\n> side.\n\nBINGO! You know, if other database vendors are stupid enough to do\nMySQL's work for them and read the SQL standard for them, well...\n\nYou can't just point fingers and say no one at MySQL is responsible.\nThe MySQL bias is written all through that test.\n\n> Now I am of neither side. I am, IMHO, thus in a position to ask you\n> about your opinion of these inaccuracies, I am going to change \n> the crashme test to be a whole lot more accurate WRT PostgreSQL,\n> I will feed these changes back to the MySQL people, and they'll\n> incorporate these changes into their next release. (Their head honcho\n> (Monty) has said so on their mailing list. I _am_ going to take him up\n> on it, and I can be quite obnoxious if somebody reneges on a promise.\n> *EVIL*GRIN* )\n\nYou know, how do we know he is not just saying that hoping no one will\nactually take him up on it?\n\nYou know, Monty was on this list last year, and he asked why we had a\nbad attitude about MySQL, and we told him about the crashme test, and\nyou know, nothing happened. So I don't think it is very important to\nMonty to be fair, or more accurately, he would rather keep a test that\nmakes MySQL look good, than to spend time making the test fair. He made\nhis choice. I can tell you our reaction would be totally different.\n\n> I might add that their test lists 52 features of PostgreSQL which\n> MySQL doesn't have (13 functions). It also lists 122 features of MySQL\n> which PostgreSQL doesn't have; 78 of those are extra functions (40 of\n> these, just for M$-ODBC compatibility).\n\n\n> \n> So it seems that overall, that crash-me test result is reasonably\n> balanced (39 vs. 44 non-function differences -- let's face it, adding\n> another function for compatibility with SQL variant FOO is one of the\n> easier exercises here, whatever the current value of FOO is).\n\nYou have to make the test deceptive to get MySQL to be on par with\nPostgreSQL. Period. Doesn't MySQL admit they have fewer features than\nPostgreSQL? How did MySQL get an equal score on features? Answer me\nthat one.\n\nWe have given enough of our time to this, and have pointed out many\nproblems. Why don't you go and get those fixed, to show that the MySQL\ngroup is working in good faith on this, and then, go and get a copy of\nthe standard, or a book about standard SQL, and start actually doing\nsomething about the test. \n\nAnd if it is not worth your time, and it is not worth anyone else's\ntime at MySQL, then you folks have to admit you want to criticize\nPostgreSQL without spending time to be fair about it.\n\nI am going to suggest that no one else in the PostgreSQL group send any\nmore problem reports about the crashme tests until some changes appear\non the MySQL end. Tom Lane has already done a great job of illustrating\nthe issues involved. Pointing to actual SQL items is not the real\nproblem. The MySQL attitude about crashme is the problem.\n\nAlso, I have heard about the hit squads attacking MySQL. I never\ncondone inaccuracy or attacks, but I can understand why it is happening.\n\nFor years, I believe the deceptiveness of the MySQL crashme test has\nhampered acceptance of PostgreSQL. And our response was to just reply\nwith our opinion when asked about it. We didn't create a web page to\nattack MySQL and make them look bad. 
We believed that in the end, truth\nalways wins. So we kept going, and you know, in the end, truth does\nwin. We have a $25 million company forming around PostgreSQL,\nwith maybe more to come. We are on our way up, even though the MySQL\ncrashme test delayed us.\n\nAnd there is a saying \"If you are not nice to people on your way up,\nthey will not be nice to you on the way down.\" I bet the hit squads are\nfrustrated people who have seen unfair things said about PostgreSQL for\nyears, with nothing they could do about it. Now they can do something,\nand they are talking. But instead of one web page with deceptive\nresults, you have 100 people all over the net slamming MySQL. There is\na certain poetic justice in that. The saying goes, \"Oh what a tangled\nweb we weave, When first we practice to deceive\".\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 May 2000 23:13:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "MySQL crashme test and PostgreSQL" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> [CC: to general list.]\n> \n> > I might add that their test lists 52 features of PostgreSQL which\n> > MySQL doesn't have (13 functions). It also lists 122 features of \n> > MySQL which PostgreSQL doesn't have; 78 of those are extra \n> > functions (40 of these, just for M$-ODBC compatibility).\n> \n> >\n> > So it seems that overall, that crash-me test result is reasonably\n> > balanced (39 vs. 44 non-function differences -- let's face it,\n> > adding another function for compatibility with SQL variant FOO is\n> > one of the easier exercises here, whatever the current value of \n> > FOO is).\n> \n> You have to make the test deceptive to get MySQL to be on par with\n> PostgreSQL. Period. Doesn't MySQL admit they have fewer features\n> than PostgreSQL? How did MySQL get an equal score on features?\n> Answer me that one.\n\nThat's easy:\n\nMySQL has type mediumint \nPostgreSQL has transactions\n\nMySQL allows 'and' as string markers\nPostgreSQL has views\n\nMySQL has case insensitive compare\nPostgreSQL has referential integrity\n\nMySQL has support for 0000-00-00 dates\nPostgreSQL has subqueries\n\nMySQL has 'drop table if exists'\nPostgreSQL has multiversion concurrency control\n\netc.\n\nSee? Equal. I hope my sarcasm is not too overstated.\n\nMike Mascari\n", "msg_date": "Sat, 20 May 2000 23:37:17 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL crashme test and PostgreSQL" }, { "msg_contents": "Hi,\n\nBruce Momjian:\n> Why don't you find out who wrote this thing, and ask them what they were\n> thinking when they wrote it? I bet you will find out our perception is\n> correct. \n> \nI'm working on it.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nIt's not against any religion to want to dispose of a pigeon.\n --Tom Lehrer\n", "msg_date": "Sun, 21 May 2000 06:04:30 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nBruce Momjian:\n> Also, I have heard about the hit squads attacking MySQL. 
I never\n> condone inaccuracy or attacks, but I can understand why it is happening.\n> \nYou _are_ doing your side of the story a disservice, you know that?\n\n> > For years, I believe the deceptiveness of the MySQL crashme test has\n> > hampered acceptance of PostgreSQL. And our response was to just reply\n> > with our opinion when asked about it.\n\nYeah, I can see that.\n\nLet me tell you up front that your opinion is not at all helpful to\neither the cause of PostgreSQL or to the problems between you and the\nMySQL people, especially when stated like this.\n\n\nThis is the Internet. The right thing to do if somebody spreads bad\ninformation (a biased, inaccurate, wrong, deceptive, what-have-you)\ncrash-me test would be to write your own test which either prefers\nPostgreSQL, or is reasonably neutral.\n\n\nI'll shut up now, until the first of my patches is in the crash-me\nsuite. Perhaps that will have _some_ impact here.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nQuestion: \"Do you consider $10 a week enough for a longshoreman with\na family to support?\"\n \nAnswer: \"If that's all he can get, and he takes it, I should say it's enough.\"\n -- J. P. Morgan (1837-1913)\n", "msg_date": "Sun, 21 May 2000 06:26:46 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL crashme test and PostgreSQL" }, { "msg_contents": "Hi,\n\nMike Mascari:\n> MySQL has type mediumint \n> PostgreSQL has transactions\n> \n> MySQL allows 'and' as string markers\n> PostgreSQL has views\n\nLook, we all know that transaction support is more important than type\nmediumint or the function ODBC LENGTH or cosine or whatever.\n\nBut what should I, or anybody else, do about it, in your opinion? Take\nthe \"unimportant\" tests out? Invent a couple of inconsequential things\nPostgreSQL can do to balance the book? Repeat the \"transactions=no\"\nentry in the crash-me results file ten times?\n\n> See? Equal. I hope my sarcasm is not too overstated.\n\nSarcasm hasn't helped in the past, and it's unlikely to help in the future.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nLemme through! I'm a necrophiliac!\n\t\t-- overheard at a traffic accident\n", "msg_date": "Sun, 21 May 2000 06:34:13 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL crashme test and PostgreSQL" }, { "msg_contents": "\"Matthias Urlichs\" <[email protected]> writes:\n> I've found another one of these performance problems in the benchmark,\n> related to another ignored index.\n> The whole thing works perfectly after a VACUUM ANALYZE on the\n> table.\n> IMHO this is somewhat non-optimal. In the absence of information\n> to the contrary, PostgreSQL should default to using an index if\n> it might be appropriate, not ignore it.\n\nJust FYI: Postgres absolutely does not \"ignore\" an index in the absence\nof VACUUM ANALYZE stats. However, the default assumptions about\nselectivity stats create cost estimates that are not too far different\nfor index and sequential scans. On a never-vacuumed table you will\nget an indexscan for \"WHERE col = foo\". 
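(To make that concrete, a minimal sketch -- the table, column, and index\nnames here are invented for illustration, and the plan shown is just what\nthe above describes for a never-vacuumed table:\n\n\tcreate table t (col integer);\n\tcreate index t_col on t (col);\n\texplain select * from t where col = 42;\n\t-- expected: Index Scan using t_col on t\n\n) 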
If the table has been vacuumed\nbut never vacuum analyzed, it turns out that you get varying results\ndepending on the size of the table and the average tuple size (since the\nplanner now has non-default info about the table size, but still nothing\nabout the actual selectivity of the WHERE condition).\n\nThe cost estimation code is under active development, and if you check\nthe pgsql list archives you will find lively discussions about its\ndeficiencies ;-). But I'm having a hard time mustering much concern\nabout whether it behaves optimally in the vacuum-but-no-vacuum-analyze\ncase.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 May 2000 00:45:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More Performance " }, { "msg_contents": "Hi,\n\nTom Lane:\n> I would not like to see us labeled \"crashme unsafe\" merely because\n> someone is too impatient to let the test run to conclusion.\n\nThere's not much anybody can do about a backend which dies because of a\n\"hard\" out-of-memory error which the OS only notices when all it can do\nis segfault the client.\n\nAnyway, I'll go through your list of problems and create an appropriate\npatch for the beast.\n\n\nNot to forget: THANK YOU for taking the time to go through this.\n\n\n> > alter_drop_col=no\t\t\t# Alter table drop column\n> \n> While our ALTER TABLE support is certainly pretty weak, it should be\n> noted that this test will continue to fail even when we have ALTER TABLE\n> DROP COLUMN, because crashme is testing for a non-SQL-compliant syntax.\n> \nYou mean because the COLUMN keyword is missing? Added.\n\n> > alter_rename_table=no\t\t\t# Alter table rename table\n> \n> We have ALTER TABLE RENAME ... but not under the syntax crashme is\n> testing. \n\nTO keyword added.\n\n> > atomic_updates=no\t\t\t# atomic updates\n> \nTest clarified and two new tests added. The result now is:\n\natomic_updates=no\t\t\t# update constraint checks are deferred\natomic_updates_ok=yes\t\t\t# failed atomic update\n\nMySQL has \"no\" and \"no\".\n\n> > automatic_rowid=no\t\t\t# Automatic rowid\n> \n> Test is actually looking for a system column named \"_rowid\". Our OIDs\n> serve the same purpose\n\nI'll add a test for OIDs.\n\n> > binary_items=no\t\t\t\t# binary items (0x41)\n> \n> We have binary literals (per the test name) and hex literals (what\n> it actually appears to be testing). Unfortunately for us, ours are\n> SQL92-compliant syntax, and what crashme is looking for isn't.\n> \nI'll tell them to fix that.\n\n> > comment_#=no\t\t\t\t# # as comment\n> \n> It'd be helpful to the reader if they indicated which two of these\n> conventions are SQL-compliant ... of course, that might expose the\n> fact that MySQL isn't ...\n> \nAre there any problems with using '#' as a comment character, given that\nMySQL doesn't support user-defined operators?\n\n> > connections=32\t\t\t\t# Simultaneous connections\n> \n> Should probably be noted that this is just the default limit\n\nNoted.\n\n\n> > crash_me_safe=no\t\t\t# crash me safe\n> \n> I get \"yes\", and I'd *really* appreciate it if you not submit this\n> misleading statement.\n> \nI won't submit test results (not until the thing is cleaned up to\neverybody's satisfaction), but I'll change the documentation of the\nthing to state that \n\n>>> Some of the tests you are about to execute require a lot of memory.\n>>> Your tests _will_ adversely affect system performance. 
Either this\n>>> crash-me test program, or the actual database back-end, _will_ die with\n>>> an out-of-memory error. So might any other program on your system if it\n>>> requests more memory at the wrong time.\n\nGood enough?\n\n\n> > date_zero=no\t\t\t\t# Supports 0000-00-00 dates\n> Another test that only MySQL \"passes\".\n... and SOLID.\n\n> > except=no\t\t\t\t# except\n> Unsurprisingly, hardly anyone else accepts it either.\nSOLID again.\n\nI'll mark the features that are necessary for SQL compliancy (as well as\nthose that actually are detrimental to it). _After_ the actual test\nresults are cleaned up.\n\n> What they are looking for here is \"SELECT NOT 0\", which Postgres rejects\n> as a type violation. SQL-compliant \"NOT FALSE\" would work.\n> \n... not with MySQL, which doesn't have FALSE in the first place. :-(\n\nI added another test for TRUE/FALSE, and fixed the NOT-0 thing.\n\n> > having_with_alias=no\t\t\t# Having on alias\n> \n> Again, how curious that MySQL is the only DBMS shown as passing this\n> test. Couldn't be because it violates SQL92, could it?\n> \nNo, but it's an extremely nice feature to have, IMHO.\n\nI will not do anything about tests for extensions that won't hurt one\nway or another. Classifying them, as noted above, should be sufficient.\n\n> > insert_select=no\t\t\t# insert INTO ... SELECT ...\n> \n> We would pass this test if the crashme script weren't buggy: it fails\n> to clean up after a prior test that creates a crash_q table with\n> different column names.\n\nFixed.\n\n> > logical_value=1\t\t\t# Value of logical operation (1=1)\n> \n> A rather odd result, considering that what Postgres actually returns for\n> \"SELECT (1=1)\" is 't'. But showing the correct answer isn't one of\n> crashme's highest priorities...\n> \n> > minus_neg=no\t\t\t\t# Calculate 1--1\n> \n> Another case where \"passing\" the test means accepting MySQL's version of\n> reality instead of SQL92's. All the SQL-compliant DBMSs think -- is a\n> comment introducer\n\nSo does MySQL -- when the '--' is followed by a space.\n\nThey do explain that in their documentation. I have to agree with them\n -- you can turn a perfectly legal SQL statement into dangerous nonsense\nwith this kind of comment.\n\n>>> $dbh->do(\"update foo set bar = baz-$adjust where something-or-other\").\n\nThat kind of mistake can be rather expensive.\n\n> > select_string_size=+16208\t\t# constant string size in SELECT\n> \n> I got 1048567 here, roughly corresponding to where I set max_buffer_size.\n> Not sure why you get a smaller answer.\n> \nNote the '+'. I have changed the test to 2*max_row_size since anything\nbigger will return an empty answer anyway.\n\n> > select_table_update=no\t\t\t# Update with sub select\n> \n> We certainly have update with sub select. What they're looking for is\n> the non-SQL-compliant syntax\n> \tupdate crash_q set crash_q.b=\n> \t\t(select b from crash_me where crash_q.a = crash_me.a);\n\nGah. Thanks; fixed. \n\n> One could also complain about the very large number of tests that are\n> checking features that are non-SQL if not downright SQL-contradictory,\n> but are listed simply as bullet points with no pro or con. 
A naive\n> reader would think that green stars are always good; they are not,\n> but how are you to tell without a copy of the SQL spec in hand?\n> \nI'll adapt the comments, but that is quite time consuming (and the\nchanges are extensive) and thus will have to wait until after the first\nround is in one of their next alpha-test releases.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \n\"What did you do when the ship sank?\"\n\"I grabbed a cake of soap and washed myself ashore.\"\n", "msg_date": "Sun, 21 May 2000 08:25:32 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL's \"crashme\" (was Re: Performance)" }, { "msg_contents": "\n> > minus_neg=no\t\t\t\t# Calculate 1--1\n\nMinus_neg expressed as select 1- -1; works. \n\n", "msg_date": "Sun, 21 May 2000 11:34:00 +0200", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL's \"crashme\" (was Re: Performance)" }, { "msg_contents": "on 5/21/00 12:34 AM, Matthias Urlichs at [email protected] wrote:\n\n> But what should I, or anybody else, do about it, in your opinion? Take\n> the \"unimportant\" tests out? Invent a couple of inconsequential stuff\n> PostgreSQL can do to balance the book? Repeat the \"transactions=no\"\n> entry in the crash-me results file ten times?\n\nTake the unimportant tests out. Absolutely. Explain why the important tests\nare important. The MySQL team is responsible for teaching a generation of\nhackers that \"transactions aren't important, they're just for lazy coders.\"\n\nThe solution here looks extremely simple. The only risk, of course, is that\nit makes MySQL look bad, which I understand could be an unwanted outcome on\nyour end.\n\n-Ben\n\n", "msg_date": "Sun, 21 May 2000 10:31:45 -0400", "msg_from": "Benjamin Adida <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL crashme test and PostgreSQL" }, { "msg_contents": "\"Matthias Urlichs\" <[email protected]> writes:\n> Tom Lane:\n>> I would not like to see us labeled \"crashme unsafe\" merely because\n>> someone is too impatient to let the test run to conclusion.\n\n> There's not much anybody can do about a backend which dies because of a\n> \"hard\" out-of-memory error which the OS only notices when all it can do\n> is segfault the client.\n\nI'm just saying that it's unfair to downrate us when the problem is\ndemonstrably in crashme itself and not in Postgres.\n\n>>>> alter_drop_col=no\t\t\t# Alter table drop column\n>> \n>> While our ALTER TABLE support is certainly pretty weak, it should be\n>> noted that this test will continue to fail even when we have ALTER TABLE\n>> DROP COLUMN, because crashme is testing for a non-SQL-compliant syntax.\n>> \n> You mean because the COLUMN keyword is missing? Added.\n\nNo, the COLUMN keyword is optional according to the SQL92 specification:\n\n <alter table statement> ::=\n ALTER TABLE <table name> <alter table action>\n\n <alter table action> ::=\n <add column definition>\n | <alter column definition>\n | <drop column definition>\n | <add table constraint definition>\n | <drop table constraint definition>\n\n <drop column definition> ::=\n DROP [ COLUMN ] <column name> <drop behavior>\n\n <drop behavior> ::= CASCADE | RESTRICT\n\nWhat is *not* optional is a <drop behavior> keyword. 
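For illustration, the spec-compliant statement would read (a sketch only,\nborrowing crashme's table names; RESTRICT and CASCADE are the two <drop\nbehavior> spellings):\n\n\tALTER TABLE crash_q DROP COLUMN b RESTRICT;\n\tALTER TABLE crash_q DROP COLUMN b CASCADE;\n\n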
Although we don't\nyet implement DROP COLUMN, our parser already has this statement in it\n--- and it follows the SQL92 grammar.\n\n>>>> comment_#=no\t\t\t\t# # as comment\n>> \n>> It'd be helpful to the reader if they indicated which two of these\n>> conventions are SQL-compliant ... of course, that might expose the\n>> fact that MySQL isn't ...\n>> \n> Are there any problems with using '#' as a comment character, given that\n> MySQL doesn't support user-defined operators?\n\nOnly in that your scripts don't port to spec-compliant DBMSes ...\n\n>>>> Some of the tests you are about to execute require a lot of memory.\n>>>> Your tests _will_ adversely affect system performance. Either this\n>>>> crash-me test program, or the actual database back-end, _will_ die with\n>>>> an out-of-memory error. So might any other program on your system if it\n>>>> requests more memory at the wrong time.\n\n> Good enough?\n\nNo, pretty misleading I'd say. Since the crashme script does have a\nlimit on max_buffer_size, it *will* run to completion if run on a\nmachine with a sufficiently large per-process memory limit (and enough\nswap of course). I may just be old-fashioned in running with a\nnot-so-large memory limit. It'd probably be more helpful if you\ndocument the behavior seen when crashme runs out of memory (for me,\nthe script just stops cold with no notice) and what to do about it\n(reduce max_buffer_size until it runs to completion).\n\n>>>> date_zero=no\t\t\t\t# Supports 0000-00-00 dates\n>> Another test that only MySQL \"passes\".\n> ... and SOLID.\n\nStill doesn't mean it's a good idea ;-)\n\n>>>> except=no\t\t\t\t# except\n>> Unsurprisingly, hardly anyone else accepts it either.\n> SOLID again.\n\nIt'd be appropriate to divide this into two tests, or at least relabel\nit.\n\n>>>> minus_neg=no\t\t\t\t# Calculate 1--1\n>> \n>> Another case where \"passing\" the test means accepting MySQL's version of\n>> reality instead of SQL92's. All the SQL-compliant DBMSs think -- is a\n>> comment introducer\n\n> So does MySQL -- when the '--' is followed by a space.\n\nConsidering how much we got ragged on for not being perfectly compliant\nwith SQL-spec handling of comments (up till 7.0 our parser didn't\nrecognize \"--\" as a comment if it was embedded in a multicharacter\noperator --- but we knew that was a bug), I don't have a lot of sympathy\nfor MySQL unilaterally redefining the spec here. And I have even less\nfor them devising a test that can only be \"passed\" by non-spec-compliant\nparsers, and then deliberately mislabeling it to give the impression\nthat the spec-compliant systems are seriously broken. How about\nlabeling the results \"Fails to recognize -- comment introducer unless\nsurrounded by whitespace\"?\n\n\nAnyway, I am pleased to see you trying to clean up the mess.\nGood luck!\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 May 2000 13:10:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL's \"crashme\" (was Re: Performance) " }, { "msg_contents": "> That's easy:\n> \n> MySQL has type mediumint \n> PostgreSQL has transactions\n> \n> MySQL allows 'and' as string markers\n> PostgreSQL has views\n> \n> MySQL has case insensitive compare\n> PostgreSQL has referential integrity\n> \n> MySQL has support for 0000-00-00 dates\n> PostgreSQL has subqueries\n> \n> MySQL has 'drop table if exists'\n> PostgreSQL has multiversion concurrency control\n> \n> etc.\n> \n> See? Equal. 
I hope my sarcasm is not too overstated.\n\nIt took me a minute to figure this out. Wow, that was funny. I am\nstill laughing.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 May 2000 13:44:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: MySQL crashme test and PostgreSQL" }, { "msg_contents": "> Hi,\n> \n> Bruce Momjian:\n> > Also, I have heard about the hit squads attacking MySQL. I never\n> > condone inaccuracy or attacks, but I can understand why it is happening.\n> > \n> You _are_ doing your side of the story a disservice, you know that?\n\nHey, I am not saying I like it happening. All I am saying is that I can\nunderstand why it is happening. Certainly MSSQL and Oracle are the real\nproducts we need to compete against.\n\n> \n> > For years, I believe the deceptiveness of the MySQL crashme test has\n> > hampered acceptance of PostgreSQL. And our response was to just reply\n> > with our opinion when asked about it.\n> \n> Yeah, I can see that.\n> \n> Let me tell you up front that your opinion is not at all helpful to\n> either the cause of PostgreSQL or to the problems between you and the\n> MySQL people, especially when stated like this.\n> \n> \n> This is the Internet. The right thing to do if somebody spreads bad\n> information (a biased, inaccurate, wrong, deceptive, what-have-you)\n> crash-me test would be to write your own test which either prefers\n> PostgreSQL, or is reasonably neutral.\n\nWe have better things to do than compete against deceptive tests. We\njust work to make our product better and better. Making another\ncrashme test is not going to make PostgreSQL a better piece of software.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 21 May 2000 13:49:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: MySQL crashme test and PostgreSQL" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Isn't it a fundamental principle to define primary(unique\n> identification) constraint for each table ?\n> I had never thought that the only one index of pg_attrdef \n> isn't an unique identification index until I came across the\n> unexpcted result of my DROP COLUMN test case.\n\nGood point --- I was only thinking about the performance aspect, but\nif we're going to have unique indexes to prevent errors in other\nsystem tables then pg_attrdef deserves one too.\n\nActually, I have a more radical proposal: why not get rid of pg_attrdef\nentirely, and add its two useful columns (adsrc, adbin) to pg_attribute?\nIf we allow them to be NULL for attributes with no default, then there's\nno space overhead where they're not being used, and we eliminate any\nneed for the second table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 May 2000 14:45:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance (was: The New Slashdot Setup (includes MySql server))" }, { "msg_contents": "Hi,\n\nTom Lane:\n> I'm just saying that it's unfair to downrate us when the problem is\n> demonstrably in crashme itself and not in Postgres.\n> \nRight.\n\n> <drop behavior> ::= CASCADE | RESTRICT\n> \n> What is *not* optional is a <drop behavior> keyword. Although we don't\n> yet implement DROP COLUMN, our parser already has this statement in it\n> --- and it follows the SQL92 grammar.\n> \nAh, sorry, I apparently misparsed the BNF. (It was kind of late at\nnight...)\n\n> No, pretty misleading I'd say. Since the crashme script does have a\n> limit on max_buffer_size, it *will* run to completion if run on a\n> machine with a sufficiently large per-process memory limit (and enough\n> swap of course).\n\nHmm, I could add an explicit option to limit memory usage instead.\n(Right now it's hardcoded in the test script.)\n\n> >>>> date_zero=no\t\t\t\t# Supports 0000-00-00 dates\n> >> Another test that only MySQL \"passes\".\n> > ... and SOLID.\n> \n> Still doesn't mean it's a good idea ;-)\n> \nNo argument from me...\n\n> >>>> except=no\t\t\t\t# except\n> It'd be appropriate to divide this into two tests, or at least relabel\n> it.\n> \nAlready done.\n\n> Considering how much we got ragged on for not being perfectly compliant\n> with SQL-spec handling of comments (up till 7.0 our parser didn't\n> recognize \"--\" as a comment if it was embedded in a multicharacter\n> operator --- but we knew that was a bug), I don't have a lot of sympathy\n> for MySQL unilaterally redefining the spec here.\n\nThey do note this noncompliance with the SQL spec in their documentation,\nalong with a few others.\n\nI'll clean this one up (adding a note about the noncompliance) a bit\nmore, after they incorporate my patch into the next version.\n\n> Anyway, I am pleased to see you trying to clean up the mess.\n> Good luck!\n> \nThanks.\n\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. 
| http://smurf.noris.de/\n-- \nMathematicians do it symmetrically.\n", "msg_date": "Sun, 21 May 2000 22:37:21 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL's \"crashme\" (was Re: Performance)" }, { "msg_contents": "Matthias, let me add that I wish you luck in updating the MySQL crashme\ntest. You certainly seem to be on top of the issues, and I hope they\ncan be resolved.\n\nI know a lot of people on this side are hoping you can make it happen. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 22 May 2000 21:56:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "MySQL crashme" }, { "msg_contents": "agreed, a while back i actually contacted rob malda and offered\nto convert slashdot to postgres.. he asked why i would want to do this,\nsaid postgres's features yada yada.. his reply\n\n.. that's dandy but we don't need those features.\n\nsad to say but mysql has a niche and slashdot fills it.\n\njeff\n\nOn Thu, 18 May 2000, The Hermit Hacker wrote:\n\n> On Thu, 18 May 2000, Bruce Momjian wrote:\n> \n> > > Info on the new slashdot.org setup\n> > > \n> > > <http://slashdot.org/article.pl?sid=00/05/18/1427203&mode=nocomment>\n> > > \n> > > interesting because of the plans (meaning $$$) they have to improve\n> > > MySql, and because they are the flagship MySql site/application. \n> > > \n> > > In the comment page, replying to the usual \"Why not PostgreSql?\" thread\n> > > someone pointed out an extract from the MySql docs that seems to me\n> > > blatantly false\n> > > (http://slashdot.org/comments.pl?sid=00/05/18/1427203&cid=131).\n> > \n> > Just finished reading the thread. I am surprised how many people\n> > slammed them on their MySQL over PostgreSQL decision. People are\n> > slamming MySQL all over the place. :-)\n> > \n> > Seems like inertia was the reason to stay with MySQL. What that means\n> > to me is that for their application space, PostgreSQL already has\n> > superior technology, and people realize it. This means we are on our\n> > way up, and MySQL is, well, ....\n> \n> In SlashDot's defence here ... I doubt there is much they do that would\n> require half of what we offer ... it's *very* little INSERT/UPDATE/DELETE\n> and *a lot* of SELECT ...\n> \n> \n\n", "msg_date": "Mon, 29 May 2000 11:32:33 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The New Slashdot Setup (includes MySql server)" } ]
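A short SQL recap of the syntax points settled in the thread above -- a
sketch only, reusing the crash_me/crash_q tables from the test; these are
the spec-conformant spellings that were worked out in the discussion:

	-- EXCEPT, with union-compatible select lists:
	select a, b from crash_me except select a, b from crash_me2;

	-- update with a subselect, minus the illegal table qualifier:
	update crash_q set b =
		(select b from crash_me where crash_q.a = crash_me.a);

	-- rename, with the TO keyword:
	alter table crash_q rename to crash_q2;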
[ { "msg_contents": "\nTom reminded me tonight that after moving everything else over to a\ndedicated drive, I forgot all about our cvs repository ... that is now\nmoved. Instead of the old /usr/local/cvsroot directory, you now need to\nset your CVSROOT to use /home/projects/pgsql/cvsroot instead ...\n\nI just tested it and it appears to work okay ... please let me know if\nthere are any problems ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 18 May 2000 21:57:41 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "HEADS UP: CVSROOT location changed ..." }, { "msg_contents": "The Hermit Hacker writes:\n\n> Instead of the old /usr/local/cvsroot directory, you now need to set\n> your CVSROOT to use /home/projects/pgsql/cvsroot instead ...\n\nSo that means I have to do a new checkout (or manual surgery on\nCVS/Repository et al.)?\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 19 May 2000 19:39:26 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HEADS UP: CVSROOT location changed ..." } ]
[ { "msg_contents": "Chris Bitmead writes:\n\n> > That also goes for the various ALTER TABLE [ONLY]\n> > syntax additions. If I add a row to A only then B is no longer a subtable\n> > of A. \n> \n> I agree that the alter table only is crazy, but the functionality was\n> there before and I didn't want to be the one to take it out. But if\n> someone does I can't imagine I'd object.\n\nOkay, I think I see what you're getting at. The \"ONLY\" syntax on DELETE,\nUPDATE, and ALTER TABLE would provide an entry point for the current,\nbroken behaviour, for those who need it (though it's not really backwards\ncompatibility per se). We might want to flag these with warnings \"don't do\nthat\" and reserve the option to remove them at a later date, to save\npeople from attempting stupid things.\n\nI guess what I might have alluded to with \"design document\" is that you\nwould have explained that connection, because I did look at the old\nthread(s) and didn't have any clue what was decided upon. What I was also\nwondering about were these things such as the \"virtual\" IDENTITY field\nthat was proposed, the `SELECT **' syntax (bad idea, IMO), and the notion\nthat a query could return different types of rows when reading from an\ninheritance structure (a worse idea, IMO). I didn't know whether the patch\ntouched that. (I think now that it doesn't.)\n\nI'll tell you what, I have some time next week, and I'll read up on SQL3.\nPerhaps I'll survive it. ;-)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Fri, 19 May 2000 05:43:02 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I guess what I might have alluded to with \"design document\" is that you\n> would have explained that connection, because I did look at the old\n> thread(s) and didn't have any clue what was decided upon.\n\nAFAIR, nothing was decided on ;-) ... the list has gone 'round on this\ntopic a few times without achieving anything you could call consensus.\n\nI think Robert Easter might have his hands on the right idea: there\nis more than one concept here, and more than one set of applications\nto be addressed. We need to break things down into component concepts\nrather than trying for a one-size-fits-all solution.\n\n> I'll tell you what, I have some time next week, and I'll read up on SQL3.\n> Perhaps I'll survive it. ;-)\n\nDaniel enters the lions' den ... good luck ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 00:14:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch " }, { "msg_contents": "Tom Lane wrote:\n> \n> Peter Eisentraut <[email protected]> writes:\n> > I guess what I might have alluded to with \"design document\" is that you\n> > would have explained that connection, because I did look at the old\n> > thread(s) and didn't have any clue what was decided upon.\n> \n> AFAIR, nothing was decided on ;-) ... the list has gone 'round on this\n> topic a few times without achieving anything you could call consensus.\n\nOh dear. I thought we had progressed further than that. I hope we're not\nback to square one here.\n\n> I think Robert Easter might have his hands on the right idea: there\n> is more than one concept here, and more than one set of applications\n> to be addressed. 
We need to break things down into component concepts\n> rather than trying for a one-size-fits-all solution.\n\nI can't see that anything I've proposed could be construed as\none-size-fits-all.\n\n1) DELETE and UPDATE on inheritance hierarchies. You actually suggested\nit Tom, it used to work in postgres (if you look at the V7.0 doco very\ncarefully, it still says it works!! though it probably hasn't since the\nV4.2 days). It's really a rather obvious inclusion.\n\n2) Imaginary classoid field. This is a very stand-alone feature, that I\ndidn't hear any objections to.\n\n3) Returning of sub-class fields. Any ODBMS *must* do this by\ndefinition. If it doesn't, it isn't an ODBMS. The only question is what\nsyntax to activate it, and I'm not much fussed about that.\n", "msg_date": "Fri, 19 May 2000 14:38:44 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> 3) Returning of sub-class fields. Any ODBMS *must* do this by\n> definition. If it doesn't, it isn't an ODBMS.\n\nChris, you have a bad habit of defining away the problem. Not\neveryone is convinced upon this point, and your assertions that\nthere was consensus don't help your cause.\n\nPossibly more to the point: your patch doesn't implement the\nabove behavior AFAICS. (Certainly libpq is unprepared to support\nmultiple tuple types returned in one SELECT --- and there are no\nfrontend changes in your patch.) So it might help if you'd clarify\nexactly what the proposed patch does and doesn't do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 01:09:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch " }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > 3) Returning of sub-class fields. Any ODBMS *must* do this by\n> > definition. If it doesn't, it isn't an ODBMS.\n> \n> Chris, you have a bad habit of defining away the problem. Not\n> everyone is convinced upon this point, \n\nYou claimed to be convinced in the previous discussions. Who exactly\nwasn't?\n\n> and your assertions that\n> there was consensus don't help your cause.\n\nI must admit to frustration here. Will I be issued with a certificate or\nsomething when an arbitrator declares \"consensus\". I can't fathom how\ndecisions are made around here, but you seem to be as close to a leader\nas I'll find. On the sub-class returning issue you declared that you\nunderstood that it was \"good for a certain class of problems\" or some\nsuch. My take on the previous discussions were that a great number of\nobjections were resolved. Am I supposed to just sit on my bum waiting\nfor people who havn't even used an ODBMS to argue for a few years? I'm\nquite willing to talk this all through again but it needs to reach\nclosure at some point.\n\n> Possibly more to the point: your patch doesn't implement the\n> above behavior AFAICS. \n\nI know, it only implements the first point. But this is useful in\nitself.\n\n> (Certainly libpq is unprepared to support\n> multiple tuple types returned in one SELECT --- and there are no\n> frontend changes in your patch.) So it might help if you'd clarify\n> exactly what the proposed patch does and doesn't do.\n\nThis is the third time I've submitted the patch and you examined it in\ndetail last two times. 
This is just a post-7.0 merge and I was expecting\nit put in CVS now that 7.0 is done.\n\nTo repeat - it implements DELETE and UPDATE on inheritance hierarchies\nto correct old bit-rot, and it implements ONLY as relates inheritance\nhierarchies to exclude sub-classes. Oh, and the emacs pgsql code style\nlisp implementation is done right in the FAQ.\n", "msg_date": "Fri, 19 May 2000 15:29:12 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > 3) Returning of sub-class fields. Any ODBMS *must* do this by\n> > definition. If it doesn't, it isn't an ODBMS.\n> \n> Chris, you have a bad habit of defining away the problem. Not\n> everyone is convinced upon this point.\n\nOr to put things another way, my goal is to implement the ODMG\n(http://www.odmg.org/) interface on postgresql. Nobody has said\n*anything* like that this is a bad goal to aim for, or that there is a\nbetter way of doing it.\n", "msg_date": "Fri, 19 May 2000 16:30:26 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Stuff" }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > 3) Returning of sub-class fields. Any ODBMS *must* do this by\n> > definition. If it doesn't, it isn't an ODBMS.\n> \n> Chris, you have a bad habit of defining away the problem. Not\n> everyone is convinced upon this point, and your assertions that\n> there was consensus don't help your cause.\n\nI am convinced ;). \n\nThere should be no consensus that \"there should be no way to \nretrieve sub-fields\" ;)\n\nI agree that the default may well be to retrieve only fields of \nbase class.\n\n> \n> Possibly more to the point: your patch doesn't implement the\n> above behavior AFAICS. (Certainly libpq is unprepared to support\n> multiple tuple types returned in one SELECT \n\nIIRC Bruce removed that feature in Pg95 days claiming that it would \nnot be needed. If backend starts to support it again it would be \nrelatively easy to put back in.\n\n> --- and there are no\n> frontend changes in your patch.) So it might help if you'd clarify\n> exactly what the proposed patch does and doesn't do.\n> \n> regards, tom lane\n", "msg_date": "Fri, 19 May 2000 09:33:35 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "On Fri, 19 May 2000, Chris Bitmead wrote:\n\n> Tom Lane wrote:\n> > \n> > Chris Bitmead <[email protected]> writes:\n> > > 3) Returning of sub-class fields. Any ODBMS *must* do this by\n> > > definition. If it doesn't, it isn't an ODBMS.\n> > \n> > Chris, you have a bad habit of defining away the problem. Not\n> > everyone is convinced upon this point, \n> \n> You claimed to be convinced in the previous discussions. Who exactly\n> wasn't?\n> \n> > and your assertions that\n> > there was consensus don't help your cause.\n> \n> I must admit to frustration here. Will I be issued with a certificate or\n> something when an arbitrator declares \"consensus\". I can't fathom how\n> decisions are made around here, but you seem to be as close to a leader\n> as I'll find. On the sub-class returning issue you declared that you\n> understood that it was \"good for a certain class of problems\" or some\n> such. \n\n\tWe have a list archive ... just to try and help out here, you\nmight want to try posting URLs to show quotes ...
to back things up ...\n\n> My take on the previous discussions were that a great number of\n> objections were resolved. Am I supposed to just sit on my bum waiting\n> for people who havn't even used an ODBMS to argue for a few years? I'm\n> quite willing to talk this all through again but it needs to reach\n> closure at some point.\n\nNope, my take on things is that your patch does things that would break\nexisting functionality, which won't be permitted without one helluva good\nexplanation ...\n\n> This is the third time I've submitted the patch and you examined it in\n> detail last two times. This is just a post-7.0 merge and I was expecting\n> it put in CVS now that 7.0 is done.\n\nThat won't happen ... v7.1, if you can get agreement, but not in the\ncurrent CVS tree ...\n\n\n", "msg_date": "Fri, 19 May 2000 09:38:12 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Fri, 19 May 2000, Chris Bitmead wrote:\n> \n> \n> > My take on the previous discussions were that a great number of\n> > objections were resolved. Am I supposed to just sit on my bum waiting\n> > for people who havn't even used an ODBMS to argue for a few years? I'm\n> > quite willing to talk this all through again but it needs to reach\n> > closure at some point.\n> \n> Nope, my take on things is that your patch does things that would break\n> existing functionality,\n\nIMHO it actually _fixes_ existing broken functionality .\n\n> which won't be permitted without one helluva good explanation ...\n\nYes, that was The Hermit Hacker I fearfully referred to as misusing even \nthe current \"OO\" functionality when I warned people not to promote using \nany half-baked OO features developers have forgot into PostgreSQL when they \nconverted a cool ORDBMS into a generlly usable (non-O)RDBMS.\n\nIt may be time to fork the tree into OO and beancounting editions ?\nEspecially so if the main tree will migrate to BDB ;-p\n\nOOPostgreSQL sounds quite nice ;)\n\n> > This is the third time I've submitted the patch and you examined it in\n> > detail last two times. This is just a post-7.0 merge and I was expecting\n> > it put in CVS now that 7.0 is done.\n> \n> That won't happen ... v7.1, if you can get agreement, but not in the\n> current CVS tree ...\n\n From where must he get that agreement ?\n\n---------------\nHannu\n", "msg_date": "Fri, 19 May 2000 15:58:51 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "On Fri, 19 May 2000, Hannu Krosing wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > On Fri, 19 May 2000, Chris Bitmead wrote:\n> > \n> > \n> > > My take on the previous discussions were that a great number of\n> > > objections were resolved. Am I supposed to just sit on my bum waiting\n> > > for people who havn't even used an ODBMS to argue for a few years? I'm\n> > > quite willing to talk this all through again but it needs to reach\n> > > closure at some point.\n> > \n> > Nope, my take on things is that your patch does things that would break\n> > existing functionality,\n> \n> IMHO it actually _fixes_ existing broken functionality .\n\nOops, sorry, mis-spell ... 
would should be could ...\n\n\n> \n> > which won't be permitted without one helluva good explanation ...\n> \n> Yes, that was The Hermit Hacker I fearfully referred to as misusing even \n> the current \"OO\" functionality when I warned people not to promote using \n> any half-baked OO features developers have forgot into PostgreSQL when they \n> converted a cool ORDBMS into a generlly usable (non-O)RDBMS.\n> \n> It may be time to fork the tree into OO and beancounting editions ?\n> Especially so if the main tree will migrate to BDB ;-p\n> \n> OOPostgreSQL sounds quite nice ;)\n> \n> > > This is the third time I've submitted the patch and you examined it in\n> > > detail last two times. This is just a post-7.0 merge and I was expecting\n> > > it put in CVS now that 7.0 is done.\n> > \n> > That won't happen ... v7.1, if you can get agreement, but not in the\n> > current CVS tree ...\n> \n> From where must he get that agreement ?\n\n From more then two ppl? Actually, IMHO, it looks like alot of the problem\nis not that we should improve our OO, but how to go about it. It appears\nto me that the past thread that Chris started ended in a fashion that bred\nmisunderstanding ... Chris thought it was resolved, others thought it got\nleft hanging ...\n\nWhat *I'd* like to see is that past thread re-picked up again ... I'm\ngoing to take some time tonight to go through the archives and see if I\ncan pull out \"the start of the thread\", will post it, and see if we can\nget some discussions going ...\n\nv7.0 hasn't been BRANCHED yet, so it can't go into the tree yet, but if we\ncan take the next bit of time before it is BRANCHED to discuss it out and\nreach some sort of consensus here ...\n\nChris, one quick question ... the last email I read from you stated a\nbunch of things that you wanted to accomplish, but your patch only\naddressed the first one. Can we focus on that and ignore the others? Do\nit through step'ng stones? Or does each step only make sense in view of\nthe whole picture?\n\n\n\n", "msg_date": "Fri, 19 May 2000 11:16:34 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Fri, 19 May 2000, Hannu Krosing wrote:\n> \n> > The Hermit Hacker wrote:\n> > >\n> > > On Fri, 19 May 2000, Chris Bitmead wrote:\n> > >\n> > >\n> > > > My take on the previous discussions were that a great number of\n> > > > objections were resolved. Am I supposed to just sit on my bum waiting\n> > > > for people who havn't even used an ODBMS to argue for a few years? I'm\n> > > > quite willing to talk this all through again but it needs to reach\n> > > > closure at some point.\n> > >\n> > > Nope, my take on things is that your patch does things that would break\n> > > existing functionality,\n> >\n> > IMHO it actually _fixes_ existing broken functionality .\n> \n> Oops, sorry, mis-spell ... would should be could ...\n\n;)\n\n> >\n> > From where must he get that agreement ?\n> \n> >From more then two ppl? Actually, IMHO, it looks like alot of the problem\n> is not that we should improve our OO, but how to go about it. It appears\n> to me that the past thread that Chris started ended in a fashion that bred\n> misunderstanding ... Chris thought it was resolved, others thought it got\n> left hanging ...\n> \n> What *I'd* like to see is that past thread re-picked up again ... 
I'm\n> going to take some time tonight to go through the archives and see if I\n> can pull out \"the start of the thread\", will post it, and see if we can\n> get some discussions going ...\n> \n> v7.0 hasn't been BRANCHED yet, so it can't go into the tree yet, but if we\n> can take the next bit of time before it is BRANCHED to discuss it out and\n> reach some sort of consensus here ...\n\nSome sort of mission statement - what we want to accomplish and steps \nto get there ?\n\n> Chris, one quick question ... the last email I read from you stated a\n> bunch of things that you wanted to accomplish, but your patch only\n> addressed the first one. Can we focus on that and ignore the others? Do\n> it through step'ng stones? Or does each step only make sense in view of\n> the whole picture?\n\nI guess the first step implemented in the patch is a useful fix in \nits own right.\n\nAlter table ONLY should be discouraged (maybe even forbidden in future)\n\nMaking Alter table to work efficiently on subtables would need some redesign \nof tuple storage anyway, but this can probably postponed to when other things \nare working. The same redesign would also give us efficient \nALTER TABLE DROP COLUMN.\n\nFuture things like having a unique index over all inherited tables require \nmore technical discussion as there are several vays to implement them, each \nefficient for different use pattern.\n\nbtw. I'll be away from computer from now to monday, but I'm very much \ninterested in this topic and will surely followup then - it's a pain to do \nall the OO in the frontend.\n\n-------------\nHannu\n", "msg_date": "Fri, 19 May 2000 17:16:40 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "On Sat, May 20, 2000 at 09:42:45AM +1000, Chris wrote:\n> The Hermit Hacker wrote:\n> \n> > We have a list archive ... just to try and help out here, you\n> > might want to try posting URLs to show quotes ... to back things up ...\n> \n> I don't have much success with the archive. (Search for \"Proposed\n> Changes\" - the name of the thread. It yields zero results). The links\n> to the result urls are coloured the same whether you have visited them\n> or not (not a bright idea), and in general I'm skeptical the searching\n> works properly. I certainly can't lay my hands on quite a few important\n> postings.\n\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/2000-02/msg00050.html\n\nSeems to be the start of it. The web server had an unfortunate hard drive\ncrash, from what I understand, and they've been rebuilding the indices\nfor the search engine. (I found this by greping my local 'all postgresql\nlist I subscribe to' archive, to find the date, then going to that page\non postgresql.org. One problem is that the 'by month' links in the mailing\nlist archives only give you _part_ of the month: you have to hit the\n'next page' link at the top)\n\n> \n> We're post v7.0 now, so presumably we are in pre-7.1 land right? Surely\n> any minor patches now can be done in a branch? I can understand\n> reluctance to branch with heavy development in progress pre-7.0 but once\n> you've released it's time to move on.\n\nNope - the standard release process for postgresql is tag at release date,\nbranch after the inital flurry of bug reports/patches settles down. This\navoids a lot of double patching for the bugs that the beta testers don't\nfind, but the general user community does.\n\nRoss\n-- \nRoss J. 
Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Fri, 19 May 2000 12:05:26 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> I must admit to frustration here. Will I be issued with a certificate or\n> something when an arbitrator declares \"consensus\". I can't fathom how\n> decisions are made around here,\n\nIf necessary, hard decisions are made by agreement of the core committee\n--- but core prefers not to impose answers on the community. If\npossible we wait until we think we see a consensus on the mailing list.\n(I say \"we\" since I was recently appointed to core, but being the junior\nmember of core I'm hardly the man in charge ;-). Perhaps I should also\npoint out that in sitting here and debating the technical issues with\nyou, I'm not speaking for core; I'm just speaking as another member of\nthe community. My opinion doesn't count any more than yours does,\nunless it comes to a point of having to be settled by a core vote ...\nwhich we'd rather avoid.)\n\n> On the sub-class returning issue you declared that you understood that\n> it was \"good for a certain class of problems\" or some such.\n\nSo I did, and I think there wasn't too much debate about that once you'd\nexhibited some sample problems. As I recall it, the remaining debate\nwas mostly about whether we wanted to change the system's default\nbehavior (ie the results of SQL92-compatible syntax) to cater to that\nclass of problems. There was also concern about whether we shouldn't\nlook first at SQL3 and try to follow its lead. If I recall correctly,\nyou are pursuing some other document than SQL3?\n\n> To repeat - it implements DELETE and UPDATE on inheritance hierarchies\n> to correct old bit-rot, and it implements ONLY as relates inheritance\n> hierarchies to exclude sub-classes. Oh, and the emacs pgsql code style\n> lisp implementation is done right in the FAQ.\n\nFixing DELETE* and UPDATE* is clearly not going to raise any hackles,\nsince that won't hurt any working applications. Swapping the behavior\nof SELECT and SELECT* (which is what you really mean by \"ONLY\", no?)\n*will* break some extant applications, so the threshold for deciding\nthat that's a good thing to do is a lot higher. That's the point at\nwhich we start wanting to be convinced that there's a community\nconsensus in favor of the idea, and also that we're not choosing the\nwrong standard to follow. If we do break existing apps, we want to\nbreak them once, not several times until we get it right...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 17:16:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch " }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n>> Certainly libpq is unprepared to support\n>> multiple tuple types returned in one SELECT \n\n> IIRC Bruce removed that feature in Pg95 days claiming that it would \n> not be needed. If backend starts to support it again it would be \n> relatively easy to put back in.\n\nWould it? libpq's internals might not care much, but it seems to me\nthat a rather significant API change would be needed, thus risking\nbreaking client applications. 
I'd want to see how the libpq API\nchanges before deciding how easy or hard this is ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 17:20:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch " }, { "msg_contents": "> So is the \"community\" the hacking community?\n\nIt's kinda fuzzy, but in practice I'd say the readers of pgsql-hackers\nand maybe pgsql-general.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 18:47:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch " }, { "msg_contents": "> Hannu Krosing <[email protected]> writes:\n> >> Certainly libpq is unprepared to support\n> >> multiple tuple types returned in one SELECT \n> \n> > IIRC Bruce removed that feature in Pg95 days claiming that it would \n> > not be needed. If backend starts to support it again it would be \n> > relatively easy to put back in.\n> \n> Would it? libpq's internals might not care much, but it seems to me\n> that a rather significant API change would be needed, thus risking\n> breaking client applications. I'd want to see how the libpq API\n> changes before deciding how easy or hard this is ...\n\nSince this came up, I don't remember removing any of this. I may have\ngiven the OK to do it, though.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 19:19:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "> The current API would not change. New APIs would be added. One option is\n> just add PQnfieldsv(result, tuple_number) to find the number of fields\n> in a particular tuple.\n> \n> But then we started discussing postgres' lack of streaming result sets\n> and how we might rectify that at the same time.\n> \n> And then it was discussed that PQ will be thrown out in favour of Corba\n> anyway.\n> \n> And then I couldn't figure out where the project is heading, so I didn't\n> know what to work on, so I didn't. I want to know up front if PQ is\n> disappearing in favour of Corba or not.\n\nOK, there are no plans to change PQ anytime soon. What someone may do\nis to implement a CORBA network service that interacts with PostgreSQL.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 19:25:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Chris <[email protected]> writes:\n> And then I couldn't figure out where the project is heading, so I didn't\n> know what to work on, so I didn't. I want to know up front if PQ is\n> disappearing in favour of Corba or not.\n\nAt this point, I'd say no one knows that (although if Alex's opinion\nof Corba is correct, I'd bet we won't be going to Corba after all...)\nYou can wait and see, or you can make a guess and expend effort on the\nbasis of a guess.\n\nMy guess is that libpq won't be going away for a very long time. 
Even\nif we adopted Corba or some other new protocol, we'd have a lot of\nlegacy clients that we'd want to support for the foreseeable future.\nSo it's probably worth improving libpq even if you think we will/should\nadopt something else in the long run.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 19:35:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch " }, { "msg_contents": "The Hermit Hacker wrote:\n\n> We have a list archive ... just to try and help out here, you\n> might want to try posting URLs to show quotes ... to back things up ...\n\nI don't have much success with the archive. (Search for \"Proposed\nChanges\" - the name of the thread. It yields zero results). The links\nto the result urls are coloured the same whether you have visited them\nor not (not a bright idea), and in general I'm skeptical the searching\nworks properly. I certainly can't lay my hands on quite a few important\npostings.\n\n> Nope, my take on things is that your patch does things that would break\n> existing functionality, which won't be permitted without one helluva \n> good explanation ...\n\nThat is true that the ONLY aspect had controversy up front, but it\nseemed to me to peter out as it was discussed and the patch was\nsubmitted. The arguments in favour of ONLY seemed to be (a) It's what\nSQL3 says, (b) It's what Informix does (c) Experience in usage suggests\nthat it significantly reduced programming errors. (d) The other\nimportant point being that the patch includes a SET compatibility mode\nso that old code needs only a 1 line change.\n\n> This is just a post-7.0 merge and I was expecting\n> > it put in CVS now that 7.0 is done.\n> \n> That won't happen ... v7.1, if you can get agreement, but not in the\n> current CVS tree ...\n\nWe're post v7.0 now, so presumably we are in pre-7.1 land right? Surely\nany minor patches now can be done in a branch? I can understand\nreluctance to branch with heavy development in progress pre-7.0 but once\nyou've released it's time to move on.\n", "msg_date": "Sat, 20 May 2000 09:42:45 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Hannu Krosing wrote:\n\n> It may be time to fork the tree into OO and beancounting editions ?\n> Especially so if the main tree will migrate to BDB ;-p\n> \n> OOPostgreSQL sounds quite nice ;)\n\nI hope we don't have to go there. A better relational engine and a\nproper OO engine are completely complementry. That was the whole premise\nof the Stonebraker research.\n\nI should also remind people again I guess of my original design proposal\nI wrote a few years ago. You can find it here\nhttp://www.tech.com.au/postgres/\n\nThese issues have been on my mind ever since Berkeley released R4.2.\n", "msg_date": "Sat, 20 May 2000 10:15:58 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> Chris, one quick question ... the last email I read from you stated a\n> bunch of things that you wanted to accomplish, but your patch only\n> addressed the first one. Can we focus on that and ignore the others? \n> Do it through step'ng stones? Or does each step only make sense in \n> view of the whole picture?\n\nEach of the 3 is independant and useful in and of itself, although all 3\nare needed to achieve the goal - an ODMG interface.\n\nWe can discuss one by one. 
It might be useful to start off with a\nmeta-discussion. Does everyone understand the significance of ODMG, and\nthe benefits of supporting it?\n", "msg_date": "Sat, 20 May 2000 10:26:35 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> >> Certainly libpq is unprepared to support\n> >> multiple tuple types returned in one SELECT\n> \n> > IIRC Bruce removed that feature in Pg95 days claiming that it would\n> > not be needed. If backend starts to support it again it would be\n> > relatively easy to put back in.\n> \n> Would it? libpq's internals might not care much, but it seems to me\n> that a rather significant API change would be needed, thus risking\n> breaking client applications. I'd want to see how the libpq API\n> changes before deciding how easy or hard this is ...\n\nThe current API would not change. New APIs would be added. One option is\njust add PQnfieldsv(result, tuple_number) to find the number of fields\nin a particular tuple.\n\nBut then we started discussing postgres' lack of streaming result sets\nand how we might rectify that at the same time.\n\nAnd then it was discussed that PQ will be thrown out in favour of Corba\nanyway.\n\nAnd then I couldn't figure out where the project is heading, so I didn't\nknow what to work on, so I didn't. I want to know up front if PQ is\ndisappearing in favour of Corba or not.\n", "msg_date": "Sat, 20 May 2000 18:29:12 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Tom Lane wrote:\n\n> --- but core prefers not to impose answers on the community. If\n> possible we wait until we think we see a consensus on the mailing list.\n\nSo is the \"community\" the hacking community?\n\nOk then, hands up now anyone with concerns about the compatibility\naspect of this patch (taking into account the backwards compatibility SET\nmode), and let's talk about it.\n", "msg_date": "Sat, 20 May 2000 18:41:28 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "Tom Lane wrote:\n\n> It's kinda fuzzy, but in practice I'd say the readers of pgsql-hackers\n> and maybe pgsql-general.\n\nOne more time for the <general> mailing list...\n\nHands up if you have objections to the patch I recently submitted for\npostgresql. It fixes the long-standing bit-rot / bug that DELETE and\nUPDATE don't work on inheritance hierarchies, and it adds the ONLY\nsyntax as mentioned in SQL3 and as implemented by Informix. The downside\nis it breaks compatibility with the old inheritance syntax. But there is\na backward compatibility mode. I.e.
\"SELECT * FROM foobar*\" becomes\n\"SELECT * FROM foobar\", and \"SELECT * from foobar\" becomes \"SELECT *\nFROM ONLY foobar\".\n\nBenefits:\n*) SQL3 says it.\n*) Informix does it.\n*) If you never used inheritance it doesn't affect you.\n*) Performance is unaffected.\n*) There is a backwards compatibility mode via SET.\n*) My own experience says strongly that this will greatly reduce\nprogrammer bugs because the default is much more common (laziness\nusually leads us to discard the \"*\" to the detriment of future\ninheritance data model changes.)\n*) It is more OO since by default a <subclass> IS A <baseclass>.\n\nDisadvantage:\n*) You need to make a one line change to any programs that use\ninheritance to include the back-compatibility SET mode.\n", "msg_date": "Sat, 20 May 2000 19:17:06 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Tom Lane wrote:\n> \n> Chris <[email protected]> writes:\n> > And then I couldn't figure out where the project is heading, so I didn't\n> > know what to work on, so I didn't. I want to know up front if PQ is\n> > disappearing in favour of Corba or not.\n> \n> At this point, I'd say no one knows that (although if Alex's opinion\n> of Corba is correct, I'd bet we won't be going to Corba after all...)\n\nWhat is Alex's opinion?\n", "msg_date": "Sat, 20 May 2000 21:26:29 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "> Tom Lane wrote:\n> \n> > It's kinda fuzzy, but in practice I'd say the readers of pgsql-hackers\n> > and maybe pgsql-general.\n> \n> One more time for the <general> mailing list...\n> \n> Hands up if you have objections to the patch I recently submitted for\n> postgresql. It fixes the long standing bit-rot / bug that DELETE and\n> UPDATE don't work on inheritance hierarchies, and it adds the ONLY\n> syntax as mentioned in SQL3 and as implemented by Informix. The downside\n> is it breaks compatibility with the old inheritance syntax. But there is\n> a backward compatibility mode. I.e. \"SELECT * FROM foobar*\" becomes\n> \"SELECT * FROM foobar\", and \"SELECT * from foobar\" becomes \"SELECT *\n> FROM ONLY foobar\".\n> \n> Benefits:\n> *) SQL3 says it.\n> *) Informix does it.\n> *) If you never used inheritance it doesn't affect you.\n> *) Performance is unaffected.\n> *) There is a backwards compatibility mode via SET.\n> *) My own experience says strongly that this will greatly reduce\n> programmer bugs because the default is much more common (laziness\n> usually leads us to discard the \"*\" to the detriment of future\n> inheritance data model changes.)\n> *) It is more OO since by default a <subclass> IS A <baseclass>.\n> \n> Disadvantage:\n> *) You need to make a one line change to any programs that use\n> inheritance to include the back-compatibility SET mode.\n\nWell, it seems many of us forgot the valid arguments for the change. \nMatching SQL3 and Informix's behavior is a good thing. Considering how\nbroken our current inheritance implementation is, backward compatibility\nis not a must, and you have a SET option for that too. Great.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 20 May 2000 08:24:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Chris wrote:\n> \n> Tom Lane wrote:\n> \n> > It's kinda fuzzy, but in practice I'd say the readers of pgsql-hackers\n> > and maybe pgsql-general.\n--snip--\n\nSo it's not just me, I was using examples from Oracle 8 and was having\ntrouble. I started thinking, I was just missing something or maybe\njust too new to SQL.\n\nRichard\n", "msg_date": "Sat, 20 May 2000 10:32:02 -0400", "msg_from": "Richard Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Chris writes:\n\n> I.e. \"SELECT * FROM foobar*\" becomes \"SELECT * FROM foobar\", and\n> \"SELECT * from foobar\" becomes \"SELECT * FROM ONLY foobar\".\n\nThis aspect of the patch I wholeheartedly agree on. The rest I'm not sure\nabout -- yet. :)\n\n> Benefits:\n> *) SQL3 says it.\n\nThat is unfortunately false for the patch in general.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Sun, 21 May 2000 18:45:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Chris writes:\n> \n> > I.e. \"SELECT * FROM foobar*\" becomes \"SELECT * FROM foobar\", and\n> > \"SELECT * from foobar\" becomes \"SELECT * FROM ONLY foobar\".\n> \n> This aspect of the patch I wholeheartedly agree on. The rest I'm not sure\n> about -- yet. :)\n> \n> > Benefits:\n> > *) SQL3 says it.\n> \n> That is unfortunately false for the patch in general.\n\nHuh?\n", "msg_date": "Mon, 22 May 2000 10:15:24 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql OO Patch" }, { "msg_contents": "On Sun, 21 May 2000, Chris Bitmead wrote:\n> Peter Eisentraut wrote:\n> > \n> > Chris writes:\n> > \n> > > I.e. \"SELECT * FROM foobar*\" becomes \"SELECT * FROM foobar\", and\n> > > \"SELECT * from foobar\" becomes \"SELECT * FROM ONLY foobar\".\n> > \n> > This aspect of the patch I wholeheartedly agree on. The rest I'm not sure\n> > about -- yet. :)\n> > \n> > > Benefits:\n> > > *) SQL3 says it.\n> > \n\nI also agree about the usage of ONLY, as long as it follows the\nofficial standardized SQL3 spec.\n\nAbout returning multiple types of rows again: I don't see that in SQL3 so far\n(difficult and time consuming to read). If it were allowed, you might have to\nspecify the level to dig to in the tree. The rows are shared among supertable\nand subtables. One row in a leaf table has subrows in all its supertables up\nthe tree. If you do a \"SELECT * FROM supertable*\" (for example, if you were to\nredefine table* to mean select heterogeneous rows), what row will you get for a\nrow that exists in a leaf? The same row is in all tables between supertable\nand the leaf. I suppose it would be necessary to have the query check each row\nand see how far down the tree it goes, or the system keeps track of that and\nreturns the row-type from the table that inserted it. OR, there could be some\nextra specifier like \"SELECT * FROM supertable DIGGING TO LEVEL 3\". In this\ncase, it would only look down into the tree to 3 levels below supertable and\nyou'd never get row-types that are down lower than level 3.
Anyhow, I still\ndon't think returning multiple row-types is going to happen, not that I have any\nauthority one way or the other! :-)\n\n-- \nRobert B. Easter\[email protected]\n", "msg_date": "Mon, 22 May 2000 02:52:41 -0400", "msg_from": "\"Robert B. Easter\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> > In this\n> > case, it would only look down into the tree to 3 levels below supertable and\n> > you'd never get row-types that are down lower than level 3. Anyhow, I still\n> > don't think returning multiple row-types is going to happen,\n\nOTOH, I'm pretty sure that original Postgres did allow for it.\n\n> > not that I have any authority one way or the other! :-)\n> >\n-------------\nHannu\n", "msg_date": "Mon, 22 May 2000 12:03:11 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> While SQL3 talks about trees and leaf rows, it's not implemented like\n> that, so all this worrying about digging down trees and leafs is all a\n> bit mute.\n\nMoot. ;-)\n\nAt a minimum, it seems to me, the backend must support the\nconcept of multiple tuples with different attributes at the\nrelation level since concurrency and rollback-ability of ALTER\nTABLE ADD COLUMN will cause two concurrent transactions to see a\nsingle relation with different attributes. It doesn't seem a\nlarge leap to support this concept for OO purposes from \"leaf\" to\n\"base\". For \"base\" to \"leaf\" type queries, wouldn't it be\nacceptable to return the base attributes only, as long as the\nequivalent of run-time type information could be had from the\nOID?\n\nJust curious, \n\nMike Mascari\n", "msg_date": "Mon, 22 May 2000 06:12:40 -0400", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Mike Mascari wrote:\n\n> At a minimum, it seems to me, the backend must support the\n> concept of multiple tuples with different attributes at the\n> relation level since concurrency and rollback-ability of ALTER\n> TABLE ADD COLUMN will cause two concurrent transactions to see a\n> single relation with different attributes. It doesn't seem a\n> large leap to support this concept for OO purposes from \"leaf\" to\n> \"base\". For \"base\" to \"leaf\" type queries, wouldn't it be\n> acceptable to return the base attributes only, as long as the\n> equivalent of run-time type information could be had from the\n> OID?\n\nHow are you going to be able to go shape.display() and have it work for\na triangle, if the triangle's apexes weren't retrieved?\n", "msg_date": "Mon, 22 May 2000 21:25:18 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Chris wrote:\n\n> I.e. \"SELECT * FROM foobar*\" becomes\n> \"SELECT * FROM foobar\", and \"SELECT * from foobar\" becomes \"SELECT *\n> FROM ONLY foobar\".\n\nAs a user, is this all I need to know?\n\nI'd just ask that the documentation be updated simultaneously. I don't\nknow SQL3 or any other vendor's implementation. I'm pretty dependent on\nthe docs to know what I can & can't do, and how to do it.
I'm easily\nconfused.\n\n-Ron-\n", "msg_date": "Mon, 22 May 2000 11:57:03 -0400", "msg_from": "Ron Peterson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "\nWhile SQL3 talks about trees and leaf rows, it's not implemented like\nthat, so all this worrying about digging down trees and leafs is all a\nbit mute.\n\n\"Robert B. Easter\" wrote:\n\n> If it were allowed, you might have to\n> specify the level to dig to in the tree. The rows are shared among supertable\n> and subtables. One row in a leaf table has subrows in all its supertables up\n> the tree. If you do a \"SELECT * FROM supertable*\" (for example, if you were to\n> redefine table* to mean select heterogeneous rows), what row will you get for a\n> row that exists in a leaf? The same row is in all tables between supertable\n> and the leaf. I suppose it would be necessary to have the query check each row\n> and see how far down the tree it goes, or the system keeps track of that and\n> returns the row-type from the table that inserted it. OR, there could be some\n> extra specifier like \"SELECT * FROM supertable DIGGING TO LEVEL 3\". In this\n> case, it would only look down into the tree to 3 levels below supertable and\n> you'd never get row-types that are down lower than level 3. Anyhow, I still\n> don't think returning multiple row-types is going to happen, not that I have any\n> authority one way or the other! :-)\n> \n> --\n> Robert B. Easter\n> [email protected]\n\n-- \nChris Bitmead\nmailto:[email protected]\nhttp://www.techphoto.org - Photography News, Stuff that Matters\n", "msg_date": "Tue, 23 May 2000 05:18:54 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "On Sat, 20 May 2000, Chris wrote:\n\n> And then I couldn't figure out where the project is heading, so I didn't\n> know what to work on, so I didn't. I want to know up front if PQ is\n> disappearing in favour of Corba or not.\n\nEventually ... maybe. But, I agree with Tom on this, it will be awhile\nbefore libpq can/will disappear, as there is too much code out there that\nrelies on it. Figuring our release cycles being 4-6mos, and figuring that\nit would be *at least* 2 full releases after Corba was fully implemented\nbefore we could phase out libpq, figure, oh, 2 years at least before libpq\n*could* disappear :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 22 May 2000 22:41:49 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "The Hermit Hacker wrote:\n> > And then I couldn't figure out where the project is heading, so I didn't\n> > know what to work on, so I didn't. I want to know up front if PQ is\n> > disappearing in favour of Corba or not.\n> \n> Eventually ... maybe. But, I agree with Tom on this, it will be awhile\n> before libpq can/will disappear, as there is too much code out there that\n> relies on it. Figuring our release cycles being 4-6mos, and figuring that\n> it would be *at least* 2 full releases after Corba was fully implemented\n> before we could phase out libpq, figure, oh, 2 years at least before libpq\n> *could* disappear :)\n\nWhen you say \"libpq\", do you mean the API or the protocol?
The API can\nstay forever if it is implemented in terms of a Corba API.\n\nI've been looking into it. The thing I've come up against now is\npostgres' advanced types. Does every postgres type, user-defined or not\nnow need a Corba IDL definition if we go to Corba? If so, how do people\nfeel about it? If we go to a binary representation protocol (which I\nbelieve is the right thing BTW), there has to be something which can\nmarshal etc, and using IDL to achieve it may as well be it.\n\nBut when I started to realise this aspect and the amount of work, Corba\nstarted to get pushed down my TODO list in favour of a quick fix to the\ncurrent protocol to do my OO stuff.\n", "msg_date": "Tue, 23 May 2000 13:23:59 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO Patch" }, { "msg_contents": "> the tree. If you do a \"SELECT * FROM supertable*\" (for example, if you were to\n> redefine table* to mean select heterogeneous rows), what row will you get for a\n> row that exists in a leaf? The same row is in all tables between supertable\n> and the leaf. I suppose it would be necessary to have the query check each row\n> and see how far down the tree it goes, or the system keeps track of that and\n> returns the row-type from the table that inserted it. OR, there could be some\n> extra specifier like \"SELECT * FROM supertable DIGGING TO LEVEL 3\". In this\n> case, it would only look down into the tree to 3 levels below supertable and\n> you'd never get row-types that are down lower than level 3. Anyhow, I still\n> don't think returning multiple row-types is going to happen, not that I have any\n> authority one way or the other! :-)\n> \n> -- \n> Robert B. Easter\n> [email protected]\n> \n\n Your example is a very good example, that shows, why multiple result\nsets are needed to get a very good object-oriented system !\n\n Everyone here on this list should think about: \"What do we expect\nfrom an object-oriented extension and how can it help me to improve my\nsystem\".\n\n As an example: My software background is Smalltalk and relational-\nand object-oriented databases. Now I use relational databases and from\nthis technology I use only a small part to do my mapping.\n\n After reading all the postings here on the lists I looked at my\nwrapper and asked myself: how would it benefit from an oo-extension.\n\n And the result was pretty much frustrating:\n\n - the OID (SEQUENCE's) are useless (ok, I say it again and again). Give\n PostgreSQL the OID and ask PostgreSQL to return the attributes of this\n object. Perhaps even with class informations !\n\n PostgreSQL is not able to do that ! Think about this and you see\n the usage of the OID in perhaps a different way :-)\n\n Therefore: for object system you need complete other types of object\n identification numbers.\n\n - query over a hierarchy of classes ! See the example above ! Until\n you're not able to return multiple sets you get too much garbage or\n you need to many queries or you need much more disc-space, depending\n of the way you wrap classes to tables. This feature is a CRITICAL\n one ! This may push the performance, depending how it is done.\n\n - for associations (m:n) I still need additional help tables, but \n that is ok :-)\n\n - no support for tree structures !\n\n - more powerful statements DDL to change the structure of a database !\n\n - no support to inform the client about changes inthe database !\n\n And that's it !
All the other stuff mentioned here are syntactical\nsugar for people doing object-oriented database queries over pgsql\nor hoping to structure their work - but I do not see, that it's\na real win.\n\n Very frustrating !\n\n\n Marten Feldtmann\n\n", "msg_date": "Tue, 23 May 2000 23:02:50 +0200 (CEST)", "msg_from": "Marten Feldtmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "\n> - the OID (SEQUENCE's) are useless (ok, I say it again and again). Give\n> PostgreSQL the OID and ask PostgreSQL to return the attributes of this\n> object. Perhaps even with class informations !\n> \n> PostgreSQL is not able to do that ! Think about this and you see\n> the usage of the OID in perhaps a different way :-)\n> \n> Therefore: for object system you need complete other types of object\n> identification numbers.\n\nI agree, that's why I have suggested an implied super-class \"Object\" for\nall postgresql objects. Then you could do \"SELECT ** FROM object WHERE\noid=?\". The ability to place an index over sub-class hierarchies (in\nthis case oid for all objects) would get the good performance.\n\n> - query over a hierarchy of classes ! See the example above ! Until\n> you're not able to return multiple sets you get too much garbage or\n> you need to many queries or you need much more disc-space, depending\n> of the way you wrap classes to tables. This feature is a CRITICAL\n> one ! This may push the performance, depending how it is done.\n\nYep.\n\n> - for associations (m:n) I still need additional help tables, but\n> that is ok :-)\n\nActually, postgres can have arrays of oids which is the ODBMS way of\nhandling associations. Last I looked there are some contrib functions\nfor doing things like ...\n\nCREATE TABLE foo( bar [] );\nCREATE TABLE bar( ... etc);\nSELECT bar.** from bar, foo where array_in(bar.oid, foo.bar) and\nfoo.oid=?\". In other words, to retrieve all the objects in a list.\n(forget the actual function name).\n\n> - no support for tree structures !\n\nAGAIN AGREE! Original postgres had a syntax \"SELECT* from foo\" to get a\ntransitive closure on a tree! Why this was removed (argh!) I can only\nguess.\n\n> - more powerful statements DDL to change the structure of a database !\n\nYep, important.\n\n> - no support to inform the client about changes inthe database !\n\nHavn't even looked at that.\n", "msg_date": "Wed, 24 May 2000 09:55:10 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: Postgresql OO Patch" }, { "msg_contents": "> > \n> > Therefore: for object system you need complete other types of object\n> > identification numbers.\n> \n> I agree, that's why I have suggested an implied super-class \"Object\" for\n> all postgresql objects. Then you could do \"SELECT ** FROM object WHERE\n> oid=?\". The ability to place an index over sub-class hierarchies (in\n> this case oid for all objects) would get the good performance.\n\n I can not believe, that this will result in a good performance. This\ncolumn (object identifier) would need an index to cover ALL objects\n... and this index will be growing and now image a system with about\n1.000.000 objects and now try to insert a new object. Indices on such\nlarge mount of value maybe a problem.\n\n On the other hand: the solution you mentioned can be done without an\nimplied table - which would be a special solution. 
The application can\ncreate the \"super\"-table and should be responsible for it.\n\n> \n> Actually, postgres can have arrays of oids which is the ODBMS way of\n> handling associations. Last I looked there are some contrib functions\n> for doing things like ...\n> \n> CREATE TABLE foo( bar [] );\n> CREATE TABLE bar( ... etc);\n> SELECT bar.** from bar, foo where array_in(bar.oid, foo.bar) and\n> foo.oid=?\". In other words, to retrieve all the objects in a list.\n> (forget the actual function name).\n\n Have you ever create a 1:n association with about 800 entries ?\nActually I do not know, how many entries such an array may\nhave. Unlimited ? How do I remove an entry, how do I delete an \nentry. I may have a closer look at that.\n \n> > - no support to inform the client about changes inthe database !\n> \n> Havn't even looked at that.\n> \n\n But here again an active system may be build on top of the system we\nalready have:\n\n - update, insert, deletes are catched via triggers (on commit)\n these trigger functions do retrieve the object-id of the objects\n changed and write the result into a special table.\n\n - another software has notification on this special table and managed\n the ip-commuication to the clients.\n\n\n Marten\n\n", "msg_date": "Wed, 24 May 2000 06:25:34 +0200 (CEST)", "msg_from": "Marten Feldtmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Marten Feldtmann wrote:\n> \n> > >\n> > > Therefore: for object system you need complete other types of object\n> > > identification numbers.\n> >\n> > I agree, that's why I have suggested an implied super-class \"Object\" for\n> > all postgresql objects. Then you could do \"SELECT ** FROM object WHERE\n> > oid=?\". The ability to place an index over sub-class hierarchies (in\n> > this case oid for all objects) would get the good performance.\n> \n> I can not believe, that this will result in a good performance. This\n> column (object identifier) would need an index to cover ALL objects\n> ... and this index will be growing and now image a system with about\n> 1.000.000 objects and now try to insert a new object. Indices on such\n> large mount of value maybe a problem.\n> \n> On the other hand: the solution you mentioned can be done without an\n> implied table - which would be a special solution. The application can\n> create the \"super\"-table and should be responsible for it.\n\nThe implied table doesn't do anything to performance. Having an index on\nthat table obviously needs to be maintained and the decision to create\nsuch an index would be by the user. So the user can make use of such an\nimplied super-table or not as they please. But having such a global\nindex is necessary for an ODBMS, and I can tell you that for the Versant\nODBMS it is lightning fast even with gigabytes of data (I have seen\nVersant grown to 100 Gig). Versant does use an indexing mechanism.\n\n> Have you ever create a 1:n association with about 800 entries ?\n\nIn postgres, no. In other ODBMS, yes easily.\n\n> Actually I do not know, how many entries such an array may\n> have. Unlimited ?\n\nTo work properly we do need TOAST so that tuples can grow bigger.\n\n> How do I remove an entry, how do I delete an\n> entry. I may have a closer look at that.\n\nAdding and deleting entries would be done in memory and then the\nattribute updated in one go. 
Of course with an ODBMS you can create more\nsophisticated data structures if you need really huge arrays, like roll\nyour own btree, or whatever thing you can find in Knuth.\n", "msg_date": "Wed, 24 May 2000 16:27:18 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: Postgresql OO Patch" }, { "msg_contents": "Marten Feldtmann wrote:\n> \n> But here again an active system may be built on top of the system we\n> already have:\n> \n> - update, insert, deletes are caught via triggers (on commit);\n> these trigger functions do retrieve the object-id of the objects\n> changed and write the result into a special table.\n> \n> - another program has notification on this special table and manages\n> the IP communication to the clients.\n\nExtending NOTIFY to take at least ONE string argument or OID would go a \nlong long way. Even better would be for it to take an \"Object\", in the \none-supertable sense.\n\nSo triggers or whatever can just notify interested parties about changes.\n\nThis has been on my personal todo for several years already ;)\n\n--------------\nHannu\n", "msg_date": "Wed, 24 May 2000 17:17:48 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Postgresql OO Patch" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> \n> > - no support for tree structures !\n> \n> AGAIN AGREE! Original postgres had a syntax \"SELECT* from foo\" to get a\n> transitive closure on a tree! Why this was removed (argh!) I can only\n> guess.\n> \n\nThis is what I got sneaked into TODO (or at least I think it must be it ;):\n\nEXOTIC FEATURES\n\n* Add sql3 recursive unions\n\nFrom my reading of the SQL3 draft a few years ago I concluded that this was what it\ndescribed. \n\nNow they seem to have RECURSIVE VIEWs that are used as follows:\n\nCREATE RECURSIVE VIEW APPLICABLE_ROLES ( GRANTEE, ROLE_NAME, IS_GRANTABLE ) AS\n ( ( SELECT GRANTEE, ROLE_NAME, IS_GRANTABLE\n FROM DEFINITION_SCHEMA.ROLE_AUTHORIZATION_DESCRIPTORS\n WHERE GRANTEE IN ( CURRENT_USER, 'PUBLIC' ) )\n UNION\n ( SELECT RAD.GRANTEE, RAD.ROLE_NAME, RAD.IS_GRANTABLE\n FROM DEFINITION_SCHEMA.ROLE_AUTHORIZATION_DESCRIPTORS RAD\n JOIN\n APPLICABLE_ROLES R\n ON\n RAD.GRANTEE = R.ROLE_NAME ) );\n\nThe definition of the meaning of RECURSIVE is something I should read in the\nmorning ;~]\n\n---------------------\nHannu\n", "msg_date": "Wed, 24 May 2000 17:34:01 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Postgresql OO Patch" } ]
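The trigger-and-log-table scheme Marten sketches above can already be roughed out in 7.0. A minimal sketch, assuming PL/pgSQL is installed in the database; the table, function and trigger names are invented for illustration, and no NOTIFY is wired in yet:

    CREATE TABLE object_log (
        tablename  name,       -- which class was touched
        operation  text,       -- 'INSERT', 'UPDATE' or 'DELETE'
        logged_at  timestamp   -- when it happened
    );

    CREATE FUNCTION log_change() RETURNS opaque AS '
    BEGIN
        -- TG_RELNAME and TG_OP are the standard plpgsql trigger variables
        INSERT INTO object_log VALUES (TG_RELNAME, TG_OP, now());
        RETURN NULL;   -- return value is ignored for AFTER triggers
    END;' LANGUAGE 'plpgsql';

    CREATE TRIGGER person_log AFTER INSERT OR UPDATE OR DELETE ON person
        FOR EACH ROW EXECUTE PROCEDURE log_change();

The watcher process Marten describes would then poll object_log and push the changes out to its clients; Hannu's NOTIFY-with-an-argument extension would replace the polling.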
[ { "msg_contents": "Peter Eisentraut wrote:\n>\n> Chris Bitmead writes:\n>\n> > > That also goes for the various ALTER TABLE [ONLY]\n> > > syntax additions. If I add a row to A only then B is no longer a subtable\n> > > of A.\n> >\n> > I agree that the alter table only is crazy, but the functionality was\n> > there before and I didn't want to be the one to take it out. But if\n> > someone does I can't imagine I'd object.\n>\n> Okay, I think I see what you're getting at. The \"ONLY\" syntax on DELETE,\n> UPDATE, and ALTER TABLE would provide an entry point for the current,\n> broken behaviour, for those who need it (though it's not really backwards\n> compatibility per se).\n\nThat is absolutely NOT what I'm saying. In the ALTER TABLE case, yes it\nis brain-dead. For UPDATE and DELETE it is absolutely correct, and\nuseful, not to mention absolutely essential.\n\n> What I was also\n> wondering about were these things such as the \"virtual\" IDENTITY field\n> that was proposed, the `SELECT **' syntax (bad idea, IMO),\n\nOk, I can see we're going to rehash this yet again. Why is it a bad idea\n(considering that every ODBMS on the planet does this)?\n\n> and the notion\n> that a query could return different types of rows when reading from an\n> inheritance structure (a worse idea, IMO).\n\nPlease go out and use an ODBMS. I'm happy to discuss this, but even\nTom, who was at first against this understood after the last round.\n\n> I didn't know whether the patch\n> touched that. (I think now that it doesn't.)\n \nIt doesn't but a future patch hopefully will.\nPeter Eisentraut wrote:\n> \n> Chris Bitmead writes:\n> \n> > > That also goes for the various ALTER TABLE [ONLY]\n> > > syntax additions. If I add a row to A only then B is no longer a subtable\n> > > of A.\n> >\n> > I agree that the alter table only is crazy, but the functionality was\n> > there before and I didn't want to be the one to take it out. But if\n> > someone does I can't imagine I'd object.\n> \n> Okay, I think I see what you're getting at. The \"ONLY\" syntax on DELETE,\n> UPDATE, and ALTER TABLE would provide an entry point for the current,\n> broken behaviour, for those who need it (though it's not really backwards\n> compatibility per se). \n\nThat is absolutely NOT what I'm saying. In the ALTER TABLE case, yes it\nis brain-dead. For UPDATE and DELETE it is absolutely correct, and\nuseful, not to mention absolutely essential.\n\n> What I was also\n> wondering about were these things such as the \"virtual\" IDENTITY field\n> that was proposed, the `SELECT **' syntax (bad idea, IMO), \n\nOk, I can see we're going to rehash this yet again. Why is it a bad idea\n(considering that every ODBMS on the planet does this)?\n\n> and the notion\n> that a query could return different types of rows when reading from an\n> inheritance structure (a worse idea, IMO).\n\nPlease go out and use an ODBMS. I'm happy to answer questions, but even\nTom, who was at first against this understood after the last round.\n\n> I didn't know whether the patch\n> touched that. (I think now that it doesn't.)\n\nIt doesn't but a future patch hopefully will.", "msg_date": "Fri, 19 May 2000 14:40:35 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: OO Patch]" } ]
[ { "msg_contents": "[Forgive me if you got this already. I don't _think_ it got out last\ntime]..\n\nCasting your minds back again to the discussion a few months ago. I was\ntalking about making changes to the fe/be protocol to accomodate the OO\nextensions I was talking about. At the time I mentioned interest in\nfixing some other things while I was there such as adding a streaming\ninterface, and perhaps fixing a few other things while I was at it.\n\nThen someone said all the code was going to be discarded anyway and the\nprotocol moved to Corba. That threw a spanner in the works and I havn't\ndone anything since because I couldn't get any more details.\n \nSo here's the question again. Is Corba really a good thing for a\ndatabase, seeing as a db is concerned with transferring massive chunks\nof\nsimply formatted data. I'm no Corba guru, but I would have thought (a)\nCorba would be not very efficient at that sort of thing, probably adding\nbig overhead in bytes, and possibly a lot more protocol back and forth,\nand (b) isn't the protocol simple enough anyway that Corba is overkill.\n \nIf you guys convince me that you really are going to move to Corba that\nwill influence how I approach this. I might even do work to implement\nthe Corba stuff.\n\nChris\nOk, I'll broach the subject again, and see where we get....\n\nCasting your minds back again to the discussion a few months ago. I was\ntalking about making changes to the fe/be protocol to accomodate the OO\nextensions I was talking about. At the time I mentioned interest in\nfixing some other things while I was there such as adding a streaming\ninterface, and perhaps fixing a few other things while I was at it.\n\nThen someone said all the code was going to be discarded anyway and the\nprotocol moved to Corba. That threw a spanner in the works and I havn't\ndone anything since because I couldn't get any more details.\n\nSo here's the question again. Is Corba really a good thing for a\ndatabase, seeing as it is concerned with transferring massive chunks of\nbasic formatted data. I'm no Corba guru, but I would have thought (a)\nCorba would be not very efficient at that sort of thing, probably adding\nbig overhead in bytes, and possibly a lot more protocol back and forth,\nand (b) isn't the protocol simple enough anyway that Corba is overkill.\n\nIf you guys convince me that you really are going to move to Corba that\nwill influence how I approach this. I might even do work to implement\nthe Corba stuff.\n\nChris.", "msg_date": "Fri, 19 May 2000 15:01:59 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "OO / fe-be protocol" }, { "msg_contents": "Chris Bitmead wrote:\n> \n> [Forgive me if you got this already. I don't _think_ it got out last\n> time]..\n> \n> Casting your minds back again to the discussion a few months ago. I was\n> talking about making changes to the fe/be protocol to accomodate the OO\n> extensions I was talking about. At the time I mentioned interest in\n> fixing some other things while I was there such as adding a streaming\n> interface, and perhaps fixing a few other things while I was at it.\n\nI did an clien in pure python for v6.2 an I sure found some things to fix ;)\n\nI also have several other ideas for enchancing it.\n\nSo please contact me on this list when you start doing the actual work. 
\n\n> Then someone said all the code was going to be discarded anyway and the\n> protocol moved to Corba.\n\nSomeone was contemplating (maybe even doing _some_ work on) Corba; sorry,\nbut I don't remember who it was.\n\n> So here's the question again. Is Corba really a good thing for a\n> database, seeing as a db is concerned with transferring massive chunks\n> of simply formatted data.\n\nWhile it may be a good thing to have a Corba interface to PostgreSQL, \nI don't think it will ever be the main interface.\n\n> I'm no Corba guru, but I would have thought (a)\n> Corba would be not very efficient at that sort of thing, probably adding\n> big overhead in bytes, and possibly a lot more protocol back and forth,\n> and (b) isn't the protocol simple enough anyway that Corba is overkill.\n\nDefinitely.\n\n----------------\nHannu\n", "msg_date": "Fri, 19 May 2000 10:38:59 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO / fe-be protocol" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Then someone said all the code was going to be discarded anyway and the\n> protocol moved to Corba. That threw a spanner in the works and I haven't\n> done anything since because I couldn't get any more details.\n\nThere's been some talk of using Corba, but it's certainly not a done\ndeal; in fact I don't think anyone's actively working on it right now.\n \n> So here's the question again. Is Corba really a good thing for a\n> database, seeing as a db is concerned with transferring massive chunks\n> of simply formatted data. I'm no Corba guru, but I would have thought\n> (a) Corba would be not very efficient at that sort of thing, probably\n> adding big overhead in bytes, and possibly a lot more protocol back\n> and forth, and (b) isn't the protocol simple enough anyway that Corba\n> is overkill.\n\nThe attraction of Corba to my mind is that it might save us from the\nconvert-everything-to-text bottleneck of the current protocol (by\nproviding cross-platform byte order translation and so forth). That\nshould give us a performance boost, hopefully more than enough to cancel\nout any added overhead. I won't be very excited about switching to\nCorba if it turns out to be a performance dog compared to what we have.\n\nI'm not a Corba guru (at the moment anyway...) so someone else might be\nable to offer a more-informed opinion here.\n\nThe alternative is to stick with the present protocol and perhaps try\nto sandpaper off some of its uglier corners. It'd probably be worth\ndiscussing what we might want in that direction, if only so we can get\na feel for how much work would be involved if we go that route rather\nthan the Corba route.\n\n(Or we could do neither, instead inventing a brand-new protocol that's\nstill Postgres-only, but that seems like it has no particular\nattraction... there's a lot of work invested in the current frontends\nand if we're going to throw it away we probably ought to adopt a\nstandards-based protocol. 
IMHO anyway.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 16:50:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO / fe-be protocol " }, { "msg_contents": "> Ok, I'll go back to reading about Corba and see if I can figure out if\n> it can do the job.\n\nIt can, and it is appropriate.\n\nThe devil is in the details, which include concerns on portability of\nthe ORB among our > 20 platforms, additional levels of complexity for\nthe minimum, small installation (Naming Service, etc etc), and general\nunfamiliarity with CORBA. I'm sure there are other concerns too.\n\nI've got some experience with C++ ORBs (TAO and Mico), but am not\nfamiliar with the C mapping and how clean it may or may not be.\n\nThe \"transform only if necessary\" philosophy of CORBA (that is,\nrecipients are responsible for changing byte order if required, but do\nnot if not) should minimize overhead. And the support for dynamic data\ndefinition and data handling should be a real winner, at least for\ncommunications to outside the server. Inside the server it could help\nus clean up our interfaces, and start thinking about distributing\nportions onto multiple platforms. Should be fun :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 20 May 2000 04:08:28 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO / fe-be protocol" }, { "msg_contents": "Tom Lane wrote:\n\n> I'm not a Corba guru (at the moment anyway...) so someone else might be\n> able to offer a more-informed opinion here.\n\nOk, I'll go back to reading about Corba and see if I can figure out if\nit can do the job.\n\n> (Or we could do neither, instead inventing a brand-new protocol that's\n> still Postgres-only, but that seems like it has no particular\n> attraction... there's a lot of work invested in the current frontends\n> and if we're going to throw it away we probably ought to adopt a\n> standards-based protocol. IMHO anyway.)\n\nBut if Corba is not appropriate, what else is there?\n", "msg_date": "Sat, 20 May 2000 18:48:47 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO / fe-be protocol" }, { "msg_contents": "As an alternative you might consider XDR (RFC 1832, RFC 1014).\n\nhttp://landfield.com/rfcs/rfc1832.html\nhttp://landfield.com/rfcs/rfc1014.html\n\nWe have been using this for many years. It is quite effective. XDR is available\non a lot of platforms (on QNX too).\n\nIn short:\nThe data structures/unions to be transferred between different platforms must\nbe defined in a normal C header. This header has to be processed by the tool\nrpcgen. This rpcgen tool generates encoding/decoding functions for these\nstructures. These functions can then be called in the code before send and\nafter receive to encode/decode message buffers.\n\nNo things like ORBs etc. are required.\n\nRegards,\nAndreas Kardos\n\n-----Ursprüngliche Nachricht-----\nVon: Tom Lane <[email protected]>\nAn: Chris Bitmead <[email protected]>\nCc: <[email protected]>\nGesendet: Freitag, 19. Mai 2000 22:50\nBetreff: Re: [HACKERS] OO / fe-be protocol\n\n\n> (Or we could do neither, instead inventing a brand-new protocol that's\n> still Postgres-only, but that seems like it has no particular\n> attraction... there's a lot of work invested in the current frontends\n> and if we're going to throw it away we probably ought to adopt a\n> standards-based protocol. 
IMHO anyway.)\n>\n> regards, tom lane\n>\n\n", "msg_date": "Mon, 22 May 2000 11:47:07 +0200", "msg_from": "\"Kardos, Dr. Andreas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO / fe-be protocol " }, { "msg_contents": "> I've been doing a little reading on Corba. It seems like a tuple would\n> have to be returned as something like a sequence of any. But I believe\n> that any might not be as efficient as one would like. Have you had any\n> thoughts about what the interface should look like?\n\nSomeone put some CORBA stuff into src/corba/, including two files from\nOMG which define a query interface afaict. At one level we would\ncertainly want to implement that, and then perhaps also implement a\nlower-level interface which is more specific to Postgres.\n\nOne problem/feature with CORBA is that long responses are usually\nhandled with an iterator interface, which requires more handshaking\nthan our current \"streaming\" interface. otoh, it is more robust, in the\nsense that clients and servers have some control over their local\nresources such as memory.\n\n - Thomas\n", "msg_date": "Wed, 31 May 2000 15:41:34 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OO / fe-be protocol" } ]
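To make the XDR suggestion concrete, a hand-written sketch of encoding one (id, name) pair with the Sun RPC library that rpcgen builds on — in practice rpcgen would generate equivalent xdr_*() routines from a .x specification. The function name, buffer handling and 64-byte string limit here are invented for illustration:

    /* See RFC 1832 for the wire format; link against the system RPC library. */
    #include <rpc/rpc.h>        /* XDR, xdrmem_create(), xdr_int(), xdr_string() */

    int
    encode_field(char *buf, unsigned int buflen, int id, char *name)
    {
        XDR xdrs;

        xdrmem_create(&xdrs, buf, buflen, XDR_ENCODE);
        if (!xdr_int(&xdrs, &id) ||         /* emitted in network byte order */
            !xdr_string(&xdrs, &name, 64))  /* length-prefixed, padded to 4 bytes */
            return -1;
        return xdr_getpos(&xdrs);           /* bytes actually produced */
    }

The receiver runs the same routines with XDR_DECODE. Note that, unlike the receiver-makes-right scheme Thomas describes for CORBA, XDR always canonicalizes to big-endian on the wire.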
[ { "msg_contents": "Hi *,\n\nI am trying to find out why the MySQL benchmarks result in such _lousy_\nperformance for postgreSQL.\n\nTracing a simple update loop, I see the postmaster hogging 95% CPU,\ndoing this:\n\n$ strace -c -p 10174\n[ wait a few seconds ] ^C\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 88.84 1.813381 60 30247 read\n 8.82 0.180123 6 30552 lseek\n 1.76 0.035868 1087 33 recv\n 0.45 0.009101 54 170 write\n 0.07 0.001528 45 34 send\n 0.03 0.000619 19 33 time\n 0.03 0.000618 19 33 33 ioctl\n------ ----------- ----------- --------- --------- ----------------\n100.00 2.041238 61102 33 total\n\n... which translates to \"the query\n\n\tupdate bench1 set dummy1='updated' where id=1747\n\nis processed by calling the read() system call _one_thousand_times_\".\n\nThat's a bit excessive. What the hell could possibly cause this?\nFailure to use the index?? EXPLAIN says it's using the index.\nThis query, unsurprisingly, takes 0.08 seconds to execute, which is way\ntoo much.\n\n\nThe table has been created (via Perl) thusly:\n\n$server->create(\"bench1\",\n [\"id int NOT NULL\",\n \"id2 int NOT NULL\",\n \"id3 int NOT NULL\",\n \"dummy1 char(30)\"],\n [\"primary key (id,id2)\",\n \"index index_id3 (id3)\"]));\n\nwhich translates to\n\ncreate table bench1 (id int NOT NULL,id2 int NOT NULL,id3 int NOT NULL,dummy1 char(30));\ncreate unique index bench1_index_ on bench1 using btree (id,id2);\ncreate index bench1_index_1 on bench1 using btree (id3);\n\n\nI don't have a clue what the reason for this might be. \n\n\nThis is postgreSQL 7.0, compiled with i686-pc-linux-gnu/2.95, using\nno special options to compile or setup, except that fsync was turned off,\nas verified by the above sytem call summary. Auto-commit was on during\nthis test (what's the SQL command to turn it off, anyway? I couldn't\nfind it).\n\n\nNB: The same benchmark revealed that CREATE TABLE (or maybe it's CREATE\nINDEX) leaks about 2k of memory.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nJustice: A commodity which (in a more or less adulterated condition)\n the State sells to the citizen as a reward for his allegiance, \n taxes, and personal service.\n -- Ambrose Bierce, \"The Devil's Dictionary\" \n", "msg_date": "Fri, 19 May 2000 07:55:10 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "Hi,\n\nshort add-on:\n\n> The table has been created (via Perl) thusly:\n> [...]\n\nand then 300000 records have been inserted, with unique values of 'id'\nof course, before doing the updates I mentioned in my previous post. 
The\nbackend does 500 read() calls for each INSERT, too, which is equally dog-slow. \nTrace of testing program's send() calls:\n\n08:59:20.367593 send(3, \"Qinsert into bench1 (id,id2,id3,dummy1) values\n(2730,2730,2730,\\'ABCDEFGHIJ\\')\\0\", 77, 0) = 77\n08:59:20.416391 send(3, \"Qinsert into bench1 (id,id2,id3,dummy1) values\n(2731,2731,2731,\\'ABCDEFGHIJ\\')\\0\", 77, 0) = 77\n08:59:20.457082 send(3, \"Qinsert into bench1 (id,id2,id3,dummy1) values\n(2732,2732,2732,\\'ABCDEFGHIJ\\')\\0\", 77, 0) = 77\n08:59:20.497766 send(3, \"Qinsert into bench1 (id,id2,id3,dummy1) values\n(2733,2733,2733,\\'ABCDEFGHIJ\\')\\0\", 77, 0) = 77\n08:59:20.538928 send(3, \"Qinsert into bench1 (id,id2,id3,dummy1) values\n(2734,2734,2734,\\'ABCDEFGHIJ\\')\\0\", 77, 0) = 77\n\nTrace summary of the server while doing this:\n\n$ sudo strace -c -p 27264\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 84.77 1.224299 60 20301 read\n 10.90 0.157477 8 20573 lseek\n 3.30 0.047625 1058 45 recv\n 0.69 0.009914 54 182 write\n 0.22 0.003168 70 45 send\n 0.07 0.000955 21 45 45 ioctl\n 0.06 0.000899 20 45 time\n------ ----------- ----------- --------- --------- ----------------\n100.00 1.444337 41236 45 total\n\ni.e., 450 or so read() calls per request, apparently serially scanning\nfor something or other:\n\n$ strace -e lseek -p -pid-of-postmaster\nlseek(13, 0, SEEK_SET) = 0\nlseek(13, 8192, SEEK_SET) = 8192\nlseek(13, 16384, SEEK_SET) = 16384\nlseek(13, 24576, SEEK_SET) = 24576\nlseek(13, 32768, SEEK_SET) = 32768\nlseek(13, 40960, SEEK_SET) = 40960\nlseek(13, 49152, SEEK_SET) = 49152\nlseek(13, 57344, SEEK_SET) = 57344\nlseek(13, 65536, SEEK_SET) = 65536\nlseek(13, 73728, SEEK_SET) = 73728\nlseek(13, 81920, SEEK_SET) = 81920\nlseek(13, 90112, SEEK_SET) = 90112\nlseek(13, 98304, SEEK_SET) = 98304\nlseek(13, 106496, SEEK_SET) = 106496\nlseek(13, 114688, SEEK_SET) = 114688\n\nI think you'll agree that this kind of query is supposed to use an index.\ntest=> explain select * from bench1 where id = 123;\nNOTICE: QUERY PLAN:\n\nIndex Scan using bench1_index_ on bench1 (cost=0.00..8.14 rows=10 width=24)\n\nEXPLAIN\ntest=> explain insert into bench1 (id,id2,id3,dummy1) values\ntest-> (2730,2730,2730,'ABCDEFGHIJ');\nNOTICE: QUERY PLAN:\n\nResult (cost=0.00..0.00 rows=0 width=0)\n\nEXPLAIN\ntest=> \n\nHmmm... what, no query plan at all???\n\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. 
| http://smurf.noris.de/\n-- \nMany changes of mind and mood; do not hesitate too long.\n", "msg_date": "Fri, 19 May 2000 09:10:54 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On\n> Behalf Of Matthias Urlichs\n> \n> Hi,\n> \n> short add-on:\n> \n> > The table has been created (via Perl) thusly:\n> > [...]\n> \n> i.e., 450 or so read() calls per request, apparently serially scanning\n> for something or other:\n> \n> $ strace -e lseek -p -pid-of-postmaster\n> lseek(13, 0, SEEK_SET) = 0\n> lseek(13, 8192, SEEK_SET) = 8192\n> lseek(13, 16384, SEEK_SET) = 16384\n> lseek(13, 24576, SEEK_SET) = 24576\n> lseek(13, 32768, SEEK_SET) = 32768\n> lseek(13, 40960, SEEK_SET) = 40960\n> lseek(13, 49152, SEEK_SET) = 49152\n> lseek(13, 57344, SEEK_SET) = 57344\n> lseek(13, 65536, SEEK_SET) = 65536\n> lseek(13, 73728, SEEK_SET) = 73728\n> lseek(13, 81920, SEEK_SET) = 81920\n> lseek(13, 90112, SEEK_SET) = 90112\n> lseek(13, 98304, SEEK_SET) = 98304\n> lseek(13, 106496, SEEK_SET) = 106496\n> lseek(13, 114688, SEEK_SET) = 114688\n> \n\nThis seems to scan a system table (pg_index?).\nThere are some (many?) places where indexes aren't\n(or couldn't be) used to scan system tables.\nMust we avoid sequential scans of system tables as much as possible?\n\nComments ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 19 May 2000 19:04:29 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "Hi,\n\nHiroshi Inoue:\n> \n> This seems to scan a system table (pg_index?).\n\nYou're right. It's the database's test/pg_attribute file,\nwhich is a whopping 41 MBytes.\n\nIt would probably be very interesting to figure out (a) why it's so big,\nand (b) why it's not accessed using an index. Any pointers on how I\nshould proceed?\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nA man can sleep around, no questions asked, but if a woman makes nineteen or\ntwenty mistakes she's a tramp.\n\t-- Joan Rivers\n", "msg_date": "Fri, 19 May 2000 12:10:22 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "Matthias Urlichs wrote:\n> \n> Hi,\n> \n> Hiroshi Inoue:\n> >\n> > This seems to scan a system table (pg_index?).\n> \n> You're right. It's the database's test/pg_attribute file,\n> which is a whopping 41 MBytes.\n\nDo we shrink system tables on vacuum ?\n\nIt's possible that running some benchmark that creates/drops tables\nrepeatedly will blow up the size of system tables incl. pg_attribute.\n\n----------------\nHannu\n", "msg_date": "Fri, 19 May 2000 13:13:46 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "Hi,\n\nChris:\n> Matthias Urlichs wrote:\n> \n> > You're right. It's the database's test/pg_attribute file,\n> > which is a whopping 41 MBytes.\n> \n> I suppose the obvious question is whether you copy the database to a new\n> database, if the new database's pg_attribute is 41MB.\n\n? I don't understand.\n\nThe database was created by a simple initdb/createdb. 
The test user was\ncreated and given access, and the benchmark was started. The person\nwatching the benchmark subsequently fell asleep. ;-)\n\nThe reason for the huge size of this file might be the fact that the\nfull benchmark first creates a whole damn lot of tables, which it then\ndeletes. Apparently this process results in a rather suboptimal\npg_attribute/pg_index file. It also leaks memory (about 2k per table)\nin the backend.\n\nI'll restart the whole thing later today. We'll see if the problem comes\nback. Hopefully not.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nGuitar players had their licks.\n", "msg_date": "Fri, 19 May 2000 13:50:28 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "Hi,\n\nHannu Krosing:\n> Do we shrink system tables on vacuum ?\n> \nIf the user calling the VACUUM has access rights to them, yes.\n\n> It's possible that running some benchmark that creates/drops tables\n> repeatedly will blow up the size of system tables incl. pg_attribute.\n> \nNow that this test has run once, I hope that the fixups done by that\nVACUUM call will be persistent. I'll let you know.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nIt's hard to keep a good girl down -- but lots of fun trying.\n", "msg_date": "Fri, 19 May 2000 13:52:31 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "On Fri, 19 May 2000, Matthias Urlichs wrote:\n\n> This is postgreSQL 7.0, compiled with i686-pc-linux-gnu/2.95, using no\n> special options to compile or setup, except that fsync was turned off,\n> as verified by the above system call summary. Auto-commit was on during\n> this test (what's the SQL command to turn it off, anyway? I couldn't\n> find it).\n\nIn Perl, I do:\n\nmy $dbh = DBI->connect(\"dbi:Pg:dbname=$dbname;host=$dbhost;port=$dbport\",\"$dbuser\");\n$dbh->{AutoCommit} = 0;\n\nAnd then make sure you do a $dbh->commit(); whenever you want to end the\ntransaction ...\n\n> \n> \n> NB: The same benchmark revealed that CREATE TABLE (or maybe it's CREATE\n> INDEX) leaks about 2k of memory.\n> \n> -- \n> Matthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\n> The quote was selected randomly. Really. | http://smurf.noris.de/\n> -- \n> Justice: A commodity which (in a more or less adulterated condition)\n> the State sells to the citizen as a reward for his allegiance, \n> taxes, and personal service.\n> -- Ambrose Bierce, \"The Devil's Dictionary\" \n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 19 May 2000 09:48:02 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "\"Matthias Urlichs\" <[email protected]> writes:\n>> Do we shrink system tables on vacuum ?\n>> \n> If the user calling the VACUUM has access rights to them, yes.\n\nBut the indexes don't shrink (same problem as for user indexes).\n\nVACUUM doesn't really make any distinction between system tables and\nuser tables; they're all handled the same way. IIRC, the only\nspecial-case in 7.0 is that it doesn't try to compute pg_statistic\nentries for pg_statistic ;-)\n\n>> It's possible that running some benchmark that creates/drops tables\n>> repeatedly will blow up the size of system tables incl. pg_attribute.\n\nYes, if you don't vacuum them every so often...\n\nBut what I don't understand is why a simple INSERT is doing a sequential\nscan of pg_attribute. Presumably the parser needs to find out what the\ntable's columns are ... but why isn't the catcache providing the data?\nNeeds to be looked at.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 10:10:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster " }, { "msg_contents": "Hi,\n\nTom Lane:\n> But what I don't understand is why a simple INSERT is doing a sequential\n> scan of pg_attribute.\n\nIt might have been pg_index. I will look into this later today.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nA raccoon tangled with a 23,000 volt line today. The results blacked\nout 1400 homes and, of course, one raccoon.\n -- Steel City News\n", "msg_date": "Fri, 19 May 2000 16:16:37 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "If we create a unique index on a column, seems we could update\npg_statistic automatically to mark it as unique.\n\n\n> \"Matthias Urlichs\" <[email protected]> writes:\n> >> Do we shrink system tables on vacuum ?\n> >> \n> > If the user calling the VACUUM has access rights to them, yes.\n> \n> But the indexes don't shrink (same problem as for user indexes).\n> \n> VACUUM doesn't really make any distinction between system tables and\n> user tables; they're all handled the same way. IIRC, the only\n> special-case in 7.0 is that it doesn't try to compute pg_statistic\n> entries for pg_statistic ;-)\n> \n> >> It's possible that running some benchmark that creates/drops tables\n> >> repeatedly will blow up the size of system tables incl. pg_attribute.\n> \n> Yes, if you don't vacuum them every so often...\n> \n> But what I don't understand is why a simple INSERT is doing a sequential\n> scan of pg_attribute. Presumably the parser needs to find out what the\n> table's columns are ... but why isn't the catcache providing the data?\n> Needs to be looked at.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 11:20:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "\"Matthias Urlichs\" <[email protected]> writes:\n> NB: The same benchmark revealed that CREATE TABLE (or maybe it's CREATE\n> INDEX) leaks about 2k of memory.\n\nFollowing up on this other point: this could simply be the new table's\nrelcache entry (2K seems high though). Currently the relcache doesn't\nhave any procedure for discarding uninteresting entries, so once a\ntable is referenced by a backend that relcache entry will be there until\nthe backend quits or has some specific reason for flushing the entry.\n\nI wouldn't expect a CREATE TABLE / DELETE TABLE cycle to show any memory\nleak, since the DELETE would flush the relcache entry. But creating a\nfew thousand tables in a row would start to eat up memory a little bit.\nWhat is the benchmark doing exactly?\n\nWe could add a mechanism for aging relcache entries out of the cache\nwhen they haven't been touched recently, but so far it hasn't seemed\nworth the trouble...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 14:46:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster " }, { "msg_contents": "I attach a perl script which I run to benchmark postgres.\nNOTICE: you should create a 'pgtest' database.\nIt uses DBI, DBD::Pg and Benchmark modules.\nThere are obvious parameters you can play with.\n\n\tRegards,\n\n\t\tOleg\n\nOn Fri, 19 May 2000, Tom Lane wrote:\n\n> Date: Fri, 19 May 2000 14:46:13 -0400\n> From: Tom Lane <[email protected]>\n> To: Matthias Urlichs <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] Re: Heaps of read() syscalls by the postmaster \n> \n> \"Matthias Urlichs\" <[email protected]> writes:\n> > NB: The same benchmark revealed that CREATE TABLE (or maybe it's CREATE\n> > INDEX) leaks about 2k of memory.\n> \n> Following up on this other point: this could simply be the new table's\n> relcache entry (2K seems high though). Currently the relcache doesn't\n> have any procedure for discarding uninteresting entries, so once a\n> table is referenced by a backend that relcache entry will be there until\n> the backend quits or has some specific reason for flushing the entry.\n> \n> I wouldn't expect a CREATE TABLE / DELETE TABLE cycle to show any memory\n> leak, since the DELETE would flush the relcache entry. 
But creating a\n> few thousand tables in a row would start to eat up memory a little bit.\n> What is the benchmark doing exactly?\n> \n> We could add a mechanism for aging relcache entries out of the cache\n> when they haven't been touched recently, but so far it hasn't seemed\n> worth the trouble...\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Fri, 19 May 2000 21:56:07 +0300 (GMT)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster " }, { "msg_contents": "Hi,\n\nTom Lane:\n> \"Matthias Urlichs\" <[email protected]> writes:\n> > NB: The same benchmark revealed that CREATE TABLE (or maybe it's CREATE\n> > INDEX) leaks about 2k of memory.\n> \n> Following up on this other point: this could simply be the new table's\n> relcache entry (2K seems high though).\n\nThe first part of the many-tables benchmark does 10000 CREATE\nTABLE/CREATE INDEX calls followed by 10000 DROP TABLE calls (i.e.\nyou have ten thousand tables after the first step).\nThe postmaster process grows from 15 to 50 MBytes during that time.\n\nThe second part does 10000 CREATE TABLE/CREATE INDEX/DROP TABLE calls\n(i.e. it deletes every table immediately). Afterwards, the postmaster\nprocess is 85 MBytes big.\n\n> What is the benchmark doing exactly?\n> \nI can put a standalone version of the benchmark up for download\nsomeplace.\n\n> We could add a mechanism for aging relcache entries out of the cache\n> when they haven't been touched recently, but so far it hasn't seemed\n> worth the trouble...\n> \nNot for that benchmark, certainly, but there are real-world applications\nwhich do play with lots of temporary tables for one reason or another.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nI want to dress you up as TALLULAH BANKHEAD and cover you with VASELINE\nand WHEAT THINS ...\n\t\t-- Zippy the Pinhead\n", "msg_date": "Fri, 19 May 2000 21:31:44 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "\"Matthias Urlichs\" <[email protected]> writes:\n>>>> NB: The same benchmark revealed that CREATE TABLE (or maybe it's CREATE\n>>>> INDEX) leaks about 2k of memory.\n>> \n>> Following up on this other point: this could simply be the new table's\n>> relcache entry (2K seems high though).\n\n> The first part of the many-tables benchmark does 10000 CREATE\n> TABLE/CREATE INDEX calls followed by 10000 DROP TABLE calls (i.e.\n> you have ten thousand tables after the first step).\n> The postmaster process grows from 15 to 50 MBytes during that time.\n\n> The second part does 10000 CREATE TABLE/CREATE INDEX/DROP TABLE calls\n> (i.e. it deletes every table immediately). Afterwards, the postmaster\n> process is 85 MBytes big.\n\nHmm. So the space *is* leaked. Grumble. Another TODO list item...\nthanks for following up on it.\n\n>> What is the benchmark doing exactly?\n>> \n> I can put a standalone version of the benchmark up for download\n> someplace.\n\nOK. 
If this benchmark comes with MySQL, though, just tell us where\nto look in their tarball --- no need for you to provide an alternate\ndistribution when we can get it from their server...\n\n>> We could add a mechanism for aging relcache entries out of the cache\n>> when they haven't been touched recently, but so far it hasn't seemed\n>> worth the trouble...\n>> \n> Not for that benchmark, certainly, but there are real-world applications\n> which do play with lots of temporary tables for one reason or another.\n\nAgreed, but I thought the space would get reclaimed when you deleted the\ntemp table. That's definitely a bug.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 16:08:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster " }, { "msg_contents": "Matthias Urlichs wrote:\n\n> You're right. It's the database's test/pg_attribute file,\n> which is a whopping 41 MBytes.\n\nI suppose the obvious question is whether you copy the database to a new\ndatabase, if the new database's pg_attribute is 41MB.\n", "msg_date": "Sat, 20 May 2000 07:06:47 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": "Hi,\n\nTom Lane:\n> OK. If this benchmark comes with MySQL, though, just tell us where\n> to look in their tarball --- no need for you to provide an alternate\n> distribution when we can get it from their server...\n> \nUnpack mysql-3.23.16.tar.gz, subdirectory sql-bench;\n./run-all-tests --server=pg --fast --database test --user=test --password=test\n(Or ./test-whatever ... with the same options, for a single test run.)\n\nRunning the test as the postgres superuser will change the results.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nAPL programmers do it with stile.\n", "msg_date": "Sat, 20 May 2000 10:03:03 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" }, { "msg_contents": ">>>>> NB: The same benchmark revealed that CREATE TABLE (or maybe it's CREATE\n>>>>> INDEX) leaks about 2k of memory.\n\n> Hmm. So the space *is* leaked. Grumble. Another TODO list item...\n> thanks for following up on it.\n\nOK, I think this bug is swatted --- it's not relcache really, just a\ncouple of generic memory leaks that happened to occur during relcache\nentry construction (worst possible time). If you care to try the\nrepaired code, see our CVS server, or grab a nightly snapshot dated\nlater than this message.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 May 2000 22:36:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster " }, { "msg_contents": "Hi,\n\nTom Lane:\n> entry construction (worst possible time). If you care to try the\n> repaired code, see our CVS server, or grab a nightly snapshot dated\n> later than this message.\n> \nThank you, I'll do that.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. 
| http://smurf.noris.de/\n-- \nGo ahead - the Surgeon General has determined that you only live once\n", "msg_date": "Sun, 21 May 2000 06:14:15 +0200", "msg_from": "\"Matthias Urlichs\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Heaps of read() syscalls by the postmaster" } ]
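For anyone hitting the same bloat, a quick way to watch it from psql — the catalog columns are the real 7.0 ones, the session itself is illustrative:

    test=> SELECT relname, relpages, reltuples FROM pg_class
    test->     WHERE relname IN ('pg_attribute', 'pg_index');
    test=> VACUUM ANALYZE pg_attribute;

relpages should drop back after the VACUUM, though per Tom's note above the indexes on the catalog will not shrink.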
[ { "msg_contents": "It's on my ever growing list of things to do, to do the same for\nStarOffice, getting it to work with PostgreSQL (using JDBC).\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Cary O'Brien [mailto:[email protected]]\nSent: Friday, May 19, 2000 12:19 AM\nTo: [email protected]\nSubject: Re: [HACKERS] LONG: How to migrate data from MS-SQL7 to\nPostgreSQL 7.0\n\n\n> > This is how to get MS-SQL7 to copy data (either whole tables, or\nfrom\n> > queries) into PostgreSQL...\n> \n> Nice writeup. Can I fold it into our docs chapter on populating\n> databases (or some other appropriate place)?\n> \n\nI like how the Zope site has \"howtos\" and \"tips\"[1]. I think this might\nbe better because of the dynamic nature of this information. I'd be\nglad\nto contribute what I have about getting Applixware to talk to\nPostgreSQL[2],\nBut I need to check things out with the Applixware 5.0 and PostgreSQL\n7.0.\n\nYou could even use Zope itself to do this stuff.\n\n-- cary\n\n[1] http://www.zope.org/Documentation\n[2] http://www.radix.net/~cobrien/applix/applix.txt\n", "msg_date": "Fri, 19 May 2000 07:53:25 +0100", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: LONG: How to migrate data from MS-SQL7 to PostgreSQ\n\tL 7.0" } ]
[ { "msg_contents": "Good Morning.\n>From user (somebody like me) point of view it is important that\ndocumentation reflects relaity. I mentioned about the following discrepancy\na couple of months ago. The remark was thorougly ignored.\n\nIt is great that it will be corrected in the best possible way, it means\nreality will be upgraded to documentation|\nRegards,\nAndrzej Mazurkiewicz.\n\n> -----Original Message-----\n> From:\tChris Bitmead [SMTP:[email protected]]\n> Sent:\t19 maja 2000 06:39\n> To:\tTom Lane\n> Cc:\tPeter Eisentraut; Chris; Postgres Hackers List\n> Subject:\tRe: [HACKERS] OO Patch\n> \n> \n> 1) DELETE and UPDATE on inheritance hierarchies. You actually suggested\n> it Tom, it used to work in postgres (if you look at the V7.0 doco very\n> carefully, it still says it works!! though it probably hasn't since the\n> V4.2 days). It's really a rather obvious inclusion.\n> \n", "msg_date": "Fri, 19 May 2000 10:24:55 +0200", "msg_from": "Andrzej Mazurkiewicz <[email protected]>", "msg_from_op": true, "msg_subject": "RE: OO Patch" } ]
[ { "msg_contents": "Hi!\n\nIt's ancient problem for me, in postgres 6.x. And now there is in 7.0 too\nIf i try to include spi.h, i can't compile my program. Why? Some files,\nincluded from __packaged__ headers, are missing.\nTake a look:\n--------------- snip ---------------\nIn file included from /usr/local/pgsql/include/access/xact.h:18,\n from /usr/local/pgsql/include/utils/tqual.h:19,\n from /usr/local/pgsql/include/access/relscan.h:17,\n from /usr/local/pgsql/include/nodes/execnodes.h:18,\n from /usr/local/pgsql/include/executor/spi.h:19,\n from my.c:1:\n... etc ...\nSome others:\n/usr/local/pgsql/include/utils/nabstime.h:18: utils/timestamp.h: No such file or directory\n/usr/local/pgsql/include/executor/hashjoin.h:18: storage/buffile.h: No such file or directory\n/usr/local/pgsql/include/utils/builtins.h:37: utils/date.h: No such file or directory\n/usr/local/pgsql/include/utils/builtins.h:38: utils/lztext.h: No such file or directory\n/usr/local/pgsql/include/utils/builtins.h:39: utils/varbit.h: No such file or directory\n--------------- snap ---------------\nBut why? Nobody knows it, nobody intrested, or there is a way, to use\nsomethings from spi.h, without this error.\n(my way is to copy all of headers from a source package, but...)\n...\nany comments?\nthanks, and best regards\n--\n nek;(\n\n\n\n\n", "msg_date": "Fri, 19 May 2000 11:52:05 +0200 (CEST)", "msg_from": "Peter Vazsonyi <[email protected]>", "msg_from_op": true, "msg_subject": "-devel-7.0-1.rpm: Still missing a lots of headers" }, { "msg_contents": "Peter Vazsonyi wrote:\n> But why? Nobody knows it, nobody intrested, or there is a way, to use\n> somethings from spi.h, without this error.\n> (my way is to copy all of headers from a source package, but...)\n\nArgh. I'll have to go back through the headers -- my listing of SPI\nheaders included in the -devel RPM is correct for 6.5.3, but not 7.0,\napparently. Thanks for the listing -- that'll get me started.\n\nLook for a -2 RPM set later today or tomorrow.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 20 May 2000 14:04:20 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: -devel-7.0-1.rpm: Still missing a lots of headers" }, { "msg_contents": "Lamar Owen wrote:\n> \n> Peter Vazsonyi wrote:\n> > But why? Nobody knows it, nobody intrested, or there is a way, to use\n> > somethings from spi.h, without this error.\n> > (my way is to copy all of headers from a source package, but...)\n> \n> Argh. I'll have to go back through the headers -- my listing of SPI\n> headers included in the -devel RPM is correct for 6.5.3, but not 7.0,\n> apparently. Thanks for the listing -- that'll get me started.\n> \n> Look for a -2 RPM set later today or tomorrow.\n\nAs a followup, use the following one-liner to generate a sorted listing\nof the SPI deps (cwd is src/include):\n\n/lib/cpp -M -I. -I../backend executor/spi.h |xargs -n 1|grep \\\\W|grep -v\n^/|grep -v spi.h|sort\n\nYes, I know I could make the regexps better, but that one-liner works\nas-is.... Above one-line is being used in rpm building process now\ninstead of the prior hard-coded listing, which should eliminate the\nproblem in future RPMsets.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 20 May 2000 14:58:03 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: -devel-7.0-1.rpm: Still missing a lots of headers" } ]
[ { "msg_contents": "> \n> I'm still confused, this doesn't seem to be in Postgresql yet?\n> \n> bright=# PREPARE wwmine AS select * from d;\n> ERROR: parser: parse error at or near \"prepare\"\n> \n> (using 7.0)\n> \n> What am I doing wrong?\n\n\n Oh sorry. The query cache is planned for some future release 7.2/7.?.\n\n The pg's hackers list is area for dreams & plans & vision too :-)\n \n\t\t\t\t\t\t\tKarel\n\n PS. I'm working on this, and in June/July I will first usable snapshot. \n\n", "msg_date": "Fri, 19 May 2000 12:22:51 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query cache (was: Re: caching query results)" } ]
[ { "msg_contents": "\n\n Hi,\n\n I'm total confuse. I continue in query cache implementation and I want use\nDllist routines, but what I see --- it hardly use malloc/free. Why? With\nthis is a Dllist usage _very_ limited... (the catcache works with malloc? \n--- hmm interesting)\n\n For my current situation I resolve it via some simple alloc function\nchanger for Dllist. But for future will probably good create specific\nmemory context that will based on malloc/free and in pg's sources use\n_always_ and _only_ palloc in all routines. And if anyone want use\nsystem malloc/free he can call \n\tMemoryContextSwitchTo( malloc_based_context );\n\n Tom, assume you with some 'malloc_based_context' in your memory-management\nproposal?\n\n\t\t\t\t\t\tKarel\n\n\n", "msg_date": "Fri, 19 May 2000 13:08:24 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "malloc() in Dllist" }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> I'm total confuse. I continue in query cache implementation and I want use\n> Dllist routines, but what I see --- it hardly use malloc/free. Why? With\n> this is a Dllist usage _very_ limited... (the catcache works with malloc? \n> --- hmm interesting)\n\nI think the reason Dllist still uses malloc directly is that the\nselfsame code is used by libpq, and there's no palloc in the frontend\nenvironment. This is pretty ugly of course. It would probably be\nbetter to give libpq its own copy of the routines (I don't think it\nneeds 'em all anyway, nor ought to be polluting application namespace\nwith global symbols that don't start with \"pq\") and then the backend's\nversion could use palloc. But we'd have to look at all the callers to\nbe sure they have current memory context set to an appropriate thing.\n\n> Tom, assume you with some 'malloc_based_context' in your memory-management\n> proposal?\n\nIt can't be *directly* malloc based if we want to ensure that pfree()\nworks regardless of context type. We need that header with the\nback-link to the context to be there always. We could have a context\ntype where each chunk is always a separate malloc request, and palloc\nis just a thin wrapper that adds the chunk header ... but I'm not sure\nwhat for ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 14:57:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: malloc() in Dllist " }, { "msg_contents": "\nOn Fri, 19 May 2000, Tom Lane wrote:\n\n> Karel Zak <[email protected]> writes:\n> > I'm total confuse. I continue in query cache implementation and I want use\n> > Dllist routines, but what I see --- it hardly use malloc/free. Why? With\n> > this is a Dllist usage _very_ limited... (the catcache works with malloc? \n> > --- hmm interesting)\n> \n> I think the reason Dllist still uses malloc directly is that the\n> selfsame code is used by libpq, and there's no palloc in the frontend\n> environment. This is pretty ugly of course. It would probably be\n\n Yes, I know about Dllist in the frontend, but more nice is use '#if' if\nanyone want share some code between backend-frontend.\n\n> better to give libpq its own copy of the routines (I don't think it\n\nBut keep up two copy is more exigent.\n\n> needs 'em all anyway, nor ought to be polluting application namespace\n> with global symbols that don't start with \"pq\") and then the backend's\n> version could use palloc. 
But we'd have to look at all the callers to\n> be sure they have current memory context set to an appropriate thing.\n> \n> > Tom, do you assume some 'malloc_based_context' in your memory-management\n> > proposal?\n> \n> It can't be *directly* malloc based if we want to ensure that pfree()\n> works regardless of context type. We need that header with the\n> back-link to the context to be there always. We could have a context\n\nI said it badly. I also mean one-chunk-one-malloc with a correct chunk header.\nWe already discussed the back-link in the chunk header. I understand you.\n\n> type where each chunk is always a separate malloc request, and palloc\n> is just a thin wrapper that adds the chunk header ... but I'm not sure\n> what for ...\n\n IMHO this is the correct and straightforward solution. We have system malloc in:\n\n bootstrap/bootstrap.c, \n\tcatalog/heap.c, \n\tcommands/sequence.c, \n\texecutor/spi.c, \n\tlib/dllist.c,\n\tgram.c / scan.c,\n\tport/dynloader/aix.c,\n\tport/dynloader/nextstep.c,\n\tregex/engine.c,\n\tregex/regcomp.c,\n\tstorage/buffer/localbuf.c,\n\tstorage/file/fd.c,\n\ttcop/postgres.c,\n\ttioga/Varray.c,\n\ttioga/tgRecipe.c,\n\tutils/fmgr/dfmgr.c,\n\tutils/cache/inval.c,\n\tutils/error/elog.c,\n\tutils/init/findbe.c,\n\tutils/init/miscinit.c\n\n ... it is a long list :-( Splitting it into more memory contexts would be better. I am not\nsure if all these routines really need direct malloc. IMHO it probably needs \nsome persistent contexts, and a one-chunk-one-malloc context very probably\nwill not be needed often (or ever). All these are possible to implement via standard \nblock-based contexts, but with more separate contexts. \n\n It is well described in your proposal. A context for elog, parser, ... etc. \n \n Will it be in 7.2? :-)\n\n\t\t\t\t\t\t\tKarel\n\n \n\n\t\n\n", "msg_date": "Mon, 22 May 2000 11:48:34 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: malloc() in Dllist " } ]
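A sketch of the palloc idiom being advocated in this thread, using the real 7.0 backend entry points (palloc, pfree, MemoryContextSwitchTo); the long-lived context name is invented:

    #include "postgres.h"
    #include "utils/mcxt.h"      /* MemoryContextSwitchTo() */
    #include "utils/palloc.h"    /* palloc(), pfree() */

    extern MemoryContext QueryCacheContext;   /* hypothetical persistent context */

    void *
    cache_alloc(Size size)
    {
        MemoryContext oldcxt = MemoryContextSwitchTo(QueryCacheContext);
        void *chunk = palloc(size);   /* lives until pfree() or context reset */

        MemoryContextSwitchTo(oldcxt);
        return chunk;
    }

A Dllist allocated through such a wrapper would survive per-query resets without touching malloc() at all, and pfree() still works because every chunk keeps its back-link header.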
[ { "msg_contents": "> Hi Tatsuo,\n> \n> Sorry to bother you but I noticed in the HISTORY file of Postgres-7.0\n> this entry relating to 6.5.2:\n> \n> Fix for large object write-in-middle, no extra block, memory consumption(Tatsuo)\n> \n> I am using 6.5.2 (7.0 proved too buggy) \n\nWhat kind of bugs have you found with 7.0?\n\n> but I am getting this problem with\n> the Large Object interface:\n> \n> I lo_lseek to a middle position, lo_read a header of my record,\n> lo_read the rest according to size in header, lo_lseek to previous position, \n> then lo_write an altered record: any position not written to in the LO\n> at the time is trashed and bears no resemblance to previous content.\n> As if the Lob FD no longer referred to the right object just garbage.\n> I think it is the combination of calls prior to the write, because I\n> do single seek/writes elsewhere in my app no problem.\n> \n> \n> I am wondering whether this is the problem you fixed and which version\n> the fix actually went into!?\n> \n> yours sincerely,\n> Chris Quinn\n\nOk, give me some time to dig into problems you mentioned.\n(I'm very busy this month).\n--\nTatsuo Ishii\n", "msg_date": "Fri, 19 May 2000 21:47:46 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Problemo..." } ]
[ { "msg_contents": "I've worked with various versions of Oracle for several years and can share\nsome of my experiences with their \"system catalog\" implementation.\n\nThey use a fairly simple design in which a database instance consists of 1\n.. n tablespaces (that can contain any type of database object) which in\nturn consists of 1 .. n datafiles. There is a system table which\nessentially holds the name of the physical file (full path), file_id,\ntablespace it belongs to, and sizing information. Our database (110 Gig) is\nsplit up into 24 tablespaces and these are further split into 73 datafiles.\n\nMoving the physical location of a datafile is fairly straight forward.\n 1) ALTER TABLESPACE foo OFFLINE\n 2) Move the physical file using OS command\n 3) ALTER TABLESPACE foo RENAME DATAFILE '/old/file' TO '/new/file'\n 4) ALTER TABLESPACE foo ONLINE\n\nMost of our datafiles run about 2 Gig, so the longest part of this is\nactually doing the move.\n\nOne headache is if you want to completely change the locations of ALL your\nfiles. This involves editing all of the paths and is definitely prone to\nerror. This is also where you rabidly curse the DBA who decided to have\npath names that are 140 characters long!\n\nA second headache is moving databases from one server to another. You are\nthen forced into having the exact same file structure on the second machine.\nThis can be somewhat amusing if you have a different hardware configuration\nwhich doesn't have the same number of disks, etc.\n\nThis second problem is further complicated by some of the backup solutions\navailable for Oracle. The one that we have uses the system catalog to locate\nand backup the appropriate files. This again means that if you want to\nrestore the backup to another server, it must be configured exactly as the\nfirst.\n\nI think that Thomas' fear of having thousands of entries in the system\ncatalog for datafiles is alleviated in the Oracle implementation by the use\nof the tablespace. Tablespaces can contain any number of database objects,\nso by using a reasonable tablespace layout, one can keep the number of\nactual datafiles to a manageable level.\n\nOne thing that has been definitely useful is the ability to do load\nbalancing based on what tablespaces are \"hot\". Our system is somewhat of a\ncross between OLTP and a data warehouse (don't get me started) so the data\nbecomes pretty static after, say, about 30 days. By monitoring which\ndatafiles are being accessed the most, they can be moved to different\nlocations on the storage array to avoid contention and maximize throughput.\n\nMy first reaction to the suggestion of a pg_location like table was \"ARGH,\nNO!\", but after nursing my sprained back from that violent knee jerk\nreaction and actually thinking about it, I talked myself into thinking it'd\nprobably be a good idea. If we had our online system built on top of\nPostgres, we would need a filesystem with 110+ Gig of disk space and there\nwould be roughly 3,500 files in its single data directory. Having the\nability to organize tables, indices, etc into tablespaces, and then\ndistributing the datafiles in some quasi intelligent fashion is truly pretty\npowerful.\n\nPhil Culberson\n", "msg_date": "Fri, 19 May 2000 09:08:35 -0700", "msg_from": "\"Culberson, Philip\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Question about databases in alternate locations..." 
}, { "msg_contents": "> Having the\n> ability to organize tables, indices, etc into tablespaces, and then\n> distributing the datafiles in some quasi intelligent fashion is truly pretty\n> powerful.\n\nGreat feedback! Everyone will agree that there is no problem with the\noverall goal. We're just working out the details, and your use-case\nwith Oracle should and will be one of the use-cases that any\nimprovements should actually improve :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 19 May 2000 21:03:48 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about databases in alternate locations..." } ]
[ { "msg_contents": "I can not CVS commit:\n\n\ncvs [commit aborted]: authorization failed: server hub.org rejected access\nrm: CVS: is a directory\ncvs add: cannot add special file `CVS'; skipping\ncvs [add aborted]: authorization failed: server hub.org rejected access\ncvs [commit aborted]: authorization failed: server hub.org rejected access\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 12:55:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "CVS commit broken" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I can not CVS commit:\n> cvs [commit aborted]: authorization failed: server hub.org rejected access\n\nYou haven't updated your CVSROOT pointer (took me a little while to\nfigure that error out too ;-)). See Marc's \"heads up\" from last night.\n\nThere may be a better answer than doing a whole fresh checkout, but\nthat's what I did to get the new CVSROOT value in place here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 May 2000 13:55:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS commit broken " }, { "msg_contents": "\ndid you see my email yesterday in both -hackers and -core about the change\nin CVSROOT? *raised eyebrow* *grin*\n\nOn Fri, 19 May 2000, Bruce Momjian wrote:\n\n> I can not CVS commit:\n> \n> \n> cvs [commit aborted]: authorization failed: server hub.org rejected access\n> rm: CVS: is a directory\n> cvs add: cannot add special file `CVS'; skipping\n> cvs [add aborted]: authorization failed: server hub.org rejected access\n> cvs [commit aborted]: authorization failed: server hub.org rejected access\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 19 May 2000 14:55:53 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS commit broken" }, { "msg_contents": "> \n> did you see my email yesterday in both -hackers and -core about the change\n> in CVSROOT? *raised eyebrow* *grin*\n\nFirst, I forgot my end had it defined. Now that I have fixed this, and\nforgot about my .cvspass file. Working now.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 19 May 2000 14:39:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CVS commit broken" }, { "msg_contents": "> > \n> > did you see my email yesterday in both -hackers and -core about the change\n> > in CVSROOT? *raised eyebrow* *grin*\n> \n> First, I forgot my end had it defined. Now that I have fixed this, and\n> forgot about my .cvspass file. 
> Working now.\n\n\n Hmm, but for me:\n\n~$ rm -f .cvspass\n~$ cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot/ login\n(Logging in to [email protected])\nCVS password:\ncvs [login aborted]: authorization failed: server postgresql.org rejected access\n~$\n\n * password \"postgresql\" or \"postgres\" \n * cvs 1.10.7\n * before now it worked (I use it often...)\n\n\n Am I overlooking anything? \n \n\t\t\t\t\t\tKarel\n\n", "msg_date": "Tue, 23 May 2000 10:56:19 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS commit broken" }, { "msg_contents": "> \n> Another possibility is that postgresql.org is not pointing at the right\n> server. I see that I have hub.org in my .cvspass (but that's for\n> non-anonymous login which might be set up differently). Marc?\n> \n\n Marc and postgresql.org are OK! --- CVS 1.10.7 is bad....\n\n The CVSROOT path (or the -d switch) can't have '/' as the last char! \n\nSee:\n\n$ export CVSROOT=\":pserver:[email protected]:/home/projects/pgsql/cvsroot/\"\n$ cvs login\n(Logging in to [email protected])\nCVS password:\ncvs [login aborted]: authorization failed: server postgresql.org rejected access\n\n$ export CVSROOT=\":pserver:[email protected]:/home/projects/pgsql/cvsroot\"\n$ cvs login\n(Logging in to [email protected])\nCVS password:\n$\n\nGrrrr...... \n\n\t\t\t\t\t\t\tKarel\n\n\n \n\n", "msg_date": "Tue, 23 May 2000 16:25:49 +0200 (CEST)", "msg_from": "Karel Zak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS commit broken " }, { "msg_contents": "Karel Zak <[email protected]> writes:\n> ~$ rm -f .cvspass\n> ~$ cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot/ login\n> (Logging in to [email protected])\n> CVS password:\n> cvs [login aborted]: authorization failed: server postgresql.org rejected access\n> ~$\n\n> * password \"postgresql\" or \"postgres\" \n\nThe docs say the anoncvs password is \"postgresql\" (BTW guys, let's\nnot forget to update the documentation page that says the cvsroot is\n/usr/local/cvsroot ...). I wonder if leaving off the trailing slash on\nthe -d path would help?\n\nAnother possibility is that postgresql.org is not pointing at the right\nserver. I see that I have hub.org in my .cvspass (but that's for\nnon-anonymous login which might be set up differently). Marc?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 May 2000 10:32:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS commit broken " }, { "msg_contents": "> Karel Zak <[email protected]> writes:\n> > ~$ rm -f .cvspass\n> > ~$ cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot/ login\n> > (Logging in to [email protected])\n> > CVS password:\n> > cvs [login aborted]: authorization failed: server postgresql.org rejected access\n> > ~$\n> \n> > * password \"postgresql\" or \"postgres\" \n> \n> The docs say the anoncvs password is \"postgresql\" (BTW guys, let's\n> not forget to update the documentation page that says the cvsroot is\n> /usr/local/cvsroot ...). I wonder if leaving off the trailing slash on\n> the -d path would help?\n\nOK, CVS faq updated, but I need group write permission in html and\nhtml/docs. Marc?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 23 May 2000 11:19:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CVS commit broken" }, { "msg_contents": "On Tue, 23 May 2000, Tom Lane wrote:\n\n> Karel Zak <[email protected]> writes:\n> > ~$ rm -f .cvspass\n> > ~$ cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot/ login\n> > (Logging in to [email protected])\n> > CVS password:\n> > cvs [login aborted]: authorization failed: server postgresql.org rejected access\n> > ~$\n> \n> > * password \"postgresql\" or \"postgres\" \n> \n> The docs say the anoncvs password is \"postgresql\" (BTW guys, let's\n> not forget to update the documentation page that says the cvsroot is\n> /usr/local/cvsroot ...). I wonder if leaving off the trailing slash on\n> the -d path would help?\n> \n> Another possibility is that postgresql.org is not pointing at the right\n> server. I see that I have hub.org in my .cvspass (but that's for\n> non-anonymous login which might be set up differently). Marc?\n\nAll the same thing, points to the same directory ...\n\n\n", "msg_date": "Tue, 23 May 2000 17:40:46 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS commit broken " }, { "msg_contents": "\noh, how I hate disk crashes ... fixed ...\n\nOn Tue, 23 May 2000, Bruce Momjian wrote:\n\n> > Karel Zak <[email protected]> writes:\n> > > ~$ rm -f .cvspass\n> > > ~$ cvs -d :pserver:[email protected]:/home/projects/pgsql/cvsroot/ login\n> > > (Logging in to [email protected])\n> > > CVS password:\n> > > cvs [login aborted]: authorization failed: server postgresql.org rejected access\n> > > ~$\n> > \n> > > * password \"postgresql\" or \"postgres\" \n> > \n> > The docs say the anoncvs password is \"postgresql\" (BTW guys, let's\n> > not forget to update the documentation page that says the cvsroot is\n> > /usr/local/cvsroot ...). I wonder if leaving off the trailing slash on\n> > the -d path would help?\n> \n> OK, CVS faq updated, but I need group write permission in html and\n> html/docs. Marc?\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 23 May 2000 17:41:55 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS commit broken" } ]