[ { "msg_contents": "Hi,\n\nRecently I upgraded my egcs compiler and noticed it can't compile \ncvs postgres ( 6.4 probably too) on my linux x86 machine.\n\nmake[2]: Entering directory /home/postgres/cvs/pgsql/src/backend/postmaster'\ngcc -I../../include -I../../backend -O2 -mpentium -Wall -Wmissing-prototypes -I.. -c postmaster.c -o postmaster.o\npostmaster.c: In function `initMasks':\npostmaster.c:802: Invalid `asm' statement:\npostmaster.c:802: fixed or forbidden register 2 (cx) was spilled for class CREG.\nmake[2]: *** [postmaster.o] Error 1\nmake[2]: Leaving directory /home/postgres/cvs/pgsql/src/backend/postmaster'\nmake[1]: *** [postmaster.dir] Error 2\nmake[1]: Leaving directory /home/postgres/cvs/pgsql/src/backend'\nmake: *** [all] Error 2\n\ngcc -v\nReading specs from /usr/local/egcs/lib/gcc-lib/i586-pc-linux-gnulibc1/egcs-2.92.21/specs\ngcc version egcs-2.92.21 19981109 (gcc2 ss-980609 experimental)\n\nPrevious version of egcs does compile postgres ok.\nI read egcs FAQ and found http://egcs.cygnus.com/faq.html/#asmclobber\n........\nPrevious releases of gcc (for example, gcc-2.7.2.X) did not detect as invalid a clobber specifier that\nclobbered an operand. Instead, it could spuriously and silently generate incorrect code for certain\nnon-obvious cases of source code. Even more unfortunately, the manual (Using and Porting GCC,\nsection Extended Asm, see the bug report entry) did not explicitly say that it was invalid to specify\nclobber registers that were destined to overlap operands; it could arguably be interpreted that it was\ncorrect to clobber an input operand to mark it as not holding a usable value after the asm. \n\nFor the general case, there is no way to tell whether a specified clobber is intended to overlap with a\nspecific (input) operand or is a program error, where the choice of actual register for operands failed to\navoid the clobbered register. Such unavoidable overlap is detected by versions egcs-2.92.18\n19981104 and above, and flagged as an error rather than accepted. An error message is given, such\nas: \n\nfoo.c: In function `foo':\nfoo.c:7: Invalid `asm' statement:\nfoo.c:7: fixed or forbidden register 0 (ax) was spilled for class AREG.\n\nUnfortunately, a lot of existing software, for example the Linux kernel version 2.0.35 for the Intel x86,\nhas constructs where input operands are marked as clobbered. \n\n.............\n\negcs becomes popular and people are going to use it and I think it\nwould be great if someone could fix the problem.\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 13 Nov 1998 16:57:45 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "postmaster.c:802: fixed or forbidden register 2 (cx) was spilled for\n\tclass CREG." } ]
[ { "msg_contents": "Dear PostgreSQL hackers:\n\nI have a problem that I recently posted in the Postgres-ADMIN newsgroup. It was not answered there, but Mr. Marc Fournier, aka Hermit Hacker, told me that it might be worthwhile (to both myself and the Postgres code hackers) to cross-post my problem. \n\nWithout reposting all the previous emails, I will attempt to summarize the problem. Based upon the advice that is offered from this newsgroup, I intend to change my Postgres configuration to aid in the debugging of this problem. In advance, thank you for your help.\n\nThe Problem\nI have three Perl ingest processes running continuously, inserting and deleting data into the database. Although not consistent, sometimes the database slows down to a crawl and the CPU usage increases to\njust under 100%. A Perl ingest processes may insert/delete to/from the same table as another Perl ingest process.\n\nYou may be wondering what kind of data we store. The system is a development system utilizing a real-time alphanumeric weather feed. So if we drop the tables and thus the weather, we can always wait a couple of hours for it to reload. On our production/deployed systems, this is not an option.\n\nAs to the amount of data being ingested...we don't really know what the true amount is, but we have discussed it and predict that we could get about 1500 inserts peak in a 3 minute time span, coming from two different Perl ingest routines. Of course, it probably is quite a bit less normally.\n\nTo try to stop the problem before it starts, I wrote a script that vacuums the database tables once every hour. The vacuum script runs at the same time as the Perl ingest processes do (the Perl ingest processes never\nget shut down). This may or may not have helped the situation. I believe it does help. Or does the entire database need to be idle before performing a vacuum due to potential deadlock scenarious?\n\nWhen I do notice the database taking 99% of CPU, I generally shut down the ingest, and then try to vacuum the database manually. I usually find that the indexes that I set up on some of the large tables do not correlate to the actual table data. In the past, I have deleted the index and then recreated the index. In extreme cases, I have deleted the actual UNIX file that corresponds to the table, and then deleted the table reference and then ecreated the table.\n\n\nA Related Problem\nEven under normal operations when the database is fast, we still have problems inserting data. We have examined the insert query identified in our error log to see if there is a problem with our code. Apparently not, as the Insert SQL that failed and dropped core while running the ingest process has no problems when entered manually. \n\n\nMore Information\nAs to the wise suggestions of Mr. Fournier and others, we have adequate RAM, 256 MBytes, adequate CPU, 2 MIPS R10000s, adequate swap, 260 Mbytes, and the postgres user located on a separate disk from the swap file. (SGI IRIX 6.5 Dual Processor Octane, Postgres 6.3 built using SGI's C compiler, Pg) \n\nWe tried building 6.3.2 using GNU's C compiler and SGI's C compiler but the problem appeared instantly and was much worse. We are going to attempt 6.4. We do not have trouble running out of memory.\n\nFinally, as we lose the database connection consistently when running our Perl ingest routines, we automatically try to reconnect to the database (as per Mr. 
Fournier's advice).\n\n\nAnother little fact is that we first check the primary key to see if there is a previous entry with the\nsame key, and if there is a previous entry, delete the old entry. The primary key is basically a location identifier. So if we already have a weather report for \"Denver\", we make sure to delete the old weather report\nbefore inserting the new \"Denver\" weather report. Mr. Fournier has offered some suggestions to streamline this process, but as yet we have not made any changes. IMHO, these changes should not affect the problem. \n\n\nWe are in the process of changing over to 6.4. In fact, if you, the PostgreSQL Hackers, could give me some information on what type of run-time environment would be beneficial to you, I will try to comply. In other words, if you wanted us to set up at a particular debug level, at certain memory buffer allocations, or any other setup suitable for easy diagnostics, please let me know. \n\nThanks again for your interest and help\n\n\nDavid Ben-Yaacov\nMetView Technical Lead\nSterling Software\nUSA\n402 291 8300 x 351 (voice)\n402 291 4362 (fax)", "msg_date": "Fri, 13 Nov 1998 08:08:09 -0600", "msg_from": "\"David Ben-Yaacov\" <[email protected]>", "msg_from_op": true, "msg_subject": "High-level of inserts makes database drop core" }, { "msg_contents": "\"David Ben-Yaacov\" <[email protected]> writes:\n> [ many details snipped ]\n> When I do notice the database taking 99% of CPU, I generally shut down\n> the ingest, and then try to vacuum the database manually. I usually\n> find that the indexes that I set up on some of the large tables do not\n> correlate to the actual table data.\n\n> Finally, as we lose the database connection consistently when running\n> our Perl ingest routines, we automatically try to reconnect to the\n> database (as per Mr. Fournier's advice).\n\nLosing the database connection means you are tickling some sort of\ncoredump-causing bug in the backend. It's not too surprising that the\nindexes would be left corrupt after such a crash. 
The high-CPU-usage\nsymptom probably results when the database gets so messed up that the\nsystem is just chasing its tail trying to use the index.\n\nI don't believe in automatic reconnection myself --- if you do that\nyou're just papering over the symptom of a serious problem. Better\nto find out why the backend is crashing and fix it.\n\n> (SGI IRIX 6.5 Dual Processor Octane, Postgres 6.3 built using\n> SGI's C compiler)\n\nYou might want to look at src/include/storage/s_lock.h and make sure\nthat the IRIX spin-lock code is correct for a multiprocessor system.\n\n> We tried building 6.3.2 using GNU's C compiler and SGI's C compiler but\n> the problem appeared instantly and was much worse. We are going to\n> attempt 6.4.\n\nGood, I would definitely suggest moving to 6.4 before you do anything\nelse. There were some significant bug fixes in index handling, I\nbelieve.\n\nFWIW, my company has been using a pre-alpha 6.4 release with no problems\nin a system where multiple processes write the database concurrently.\nWe saw hard-to-reproduce problems when we were on 6.3.2, but those seem\nto have been cleaned up over the summer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Nov 1998 10:27:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] High-level of inserts makes database drop core " }, { "msg_contents": "> Good, I would definitely suggest moving to 6.4 before you do anything\n> else. There were some significant bug fixes in index handling, I\n> believe.\n> \n> FWIW, my company has been using a pre-alpha 6.4 release with no problems\n> in a system where multiple processes write the database concurrently.\n> We saw hard-to-reproduce problems when we were on 6.3.2, but those seem\n> to have been cleaned up over the summer.\n\nYou are correct. There were cases where heap data was being\nread/modified while unlocked, causing problems. Mega-patch fixed that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 13 Nov 1998 11:24:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] High-level of inserts makes database drop core" } ]
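For concreteness, here is a hedged sketch of the replace-a-report step David describes (delete any old row for a location by its primary key, then insert the new one), wrapped in a single transaction so concurrent ingest processes never see a location with no report at all. The table and column names ("reports", "location", "body") are invented for illustration, and David's real ingest code is Perl using the Pg module; this C/libpq rendering just shows the shape of the pattern:

    /*
     * Hypothetical sketch (not David's actual code): the
     * delete-then-insert "replace a weather report" step, done
     * inside one transaction.  Table and column names are made up.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"

    static void exec_cmd(PGconn *conn, const char *sql)
    {
        PGresult *res = PQexec(conn, sql);

        if (res == NULL || PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
            if (res)
                PQclear(res);
            PQfinish(conn);
            exit(1);
        }
        PQclear(res);
    }

    int main(void)
    {
        /* host/port/options default; "weather" is an assumed database name */
        PGconn *conn = PQsetdbLogin(NULL, NULL, NULL, NULL,
                                    "weather", NULL, NULL);

        if (PQstatus(conn) == CONNECTION_BAD)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            exit(1);
        }

        exec_cmd(conn, "BEGIN");
        /* Drop any previous report for this location ... */
        exec_cmd(conn, "DELETE FROM reports WHERE location = 'Denver'");
        /* ... and insert the new one atomically with the delete. */
        exec_cmd(conn, "INSERT INTO reports (location, body) "
                       "VALUES ('Denver', 'new observation text')");
        exec_cmd(conn, "END");

        PQfinish(conn);
        return 0;
    }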
[ { "msg_contents": "CORBA in fifty words or less:\n \n CORBA is an architecture which specifies how to:\n - define the method signatures of an object type\n - obtain a reference to a object instance\n - invoke a method, with parameters, on an object instance\n - receive the results\n - all interoperably between different programming languages, object\n implementations, and platforms.\n \nWhy CORBA?\n \n If you don't do object-oriented programming and system design, the\n rationale for CORBA will be hard to understand. If you don't understand\n why PostgreSQL is called an \"object-relational\" database (and why every\n row has an OID), the rationale for PostgreSQL+CORBA will be hard to\n understand.\n \n The short version goes like this:\n \n Think of a database table as a \"typedef\" of a data structure, with each\n row representing a malloc'ed instance of that structure type. The\n database provides for persistant storage, and concurrent data access,\n but with a considerable access overhead: sending an SQL query string down\n a socket, parsing the query string into an execution plan, executing\n the plan, coverting the returned result set into text strings, sending\n the strings down a socket, retrieving the strings from the socket, and,\n finally, converting the text strings back into usable data values.\n \n With CORBA, though, you could keep a reference (OID, pointer, etc.) to\n each data structure of interest, and just call a function to read or \n write data to fields in that structure. Another way to think of it \n is cursors without queries. The database (PostgreSQL in our case)\n continues to maintain persistence and concurrent access, and the data\n is also always available for relational queries.\n \nWhich ORB?\n \n GNOME started with Mico. Mico, apparently, makes use of C++ templates, \n which caused the compiler they were using to generate bloated, wallowing \n code.\n \n GNOME then adopted ORBit, which has two wins: it's in C, and (this is\n the biggy) it has provisions to shortcut parameter marshalling,\n transmission, authentication, reception, and demarshalling--if the client\n stub and server skeleton are in the same address space, and both stub\n and skeleton permit this.\n\n This means that, with ORBit, CORBA method calls can be almost as\n efficient as normal function calls.\n\nHow to use CORBA with PostgreSQL?\n\n There are three ways I can see this working:\n\n 1. As a simple alternative for the current FE<->BE communication protocol.\n The SQL query engine continues to intermediate all transactions. This\n has some benefits, but is really boring to me.\n\n 2. As an alternative to both the FE<->BE communication protocol and the\n SQL query engine. In this case, programs could have efficient direct\n row access, but all data transfers would still be shoved through a\n socket (via the Internet Inter-Orb Protocol). This could be useful,\n and mildly interesting.\n\n 3. As an alternative API to libpq that would allow, for example,\n embedding a Python interpreter in the backend, with PostgreSQL tables\n exposed through CORBA as native Python classes, and with high\n performance via ORBit method shortcutting. 
This, in my opinion,\n would be the most useful and interesting.\n\n\n\t\t-Michael Robinson\n\n", "msg_date": "Sat, 14 Nov 1998 01:19:49 +0800 (GMT)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Michael Robinson wrote:\n\n< a very well written description removed >\n\n> Which ORB?\n> \n> GNOME started with Mico. Mico, apparently, makes use of C++ templates, \n> which caused the compiler they were using to generate bloated, wallowing \n> code.\n\n\tIs that still accurate today?\n\n> GNOME then adopted ORBit, which has two wins: it's in C, and (this is\n> the biggy) it has provisions to shortcut parameter marshalling,\n\n\tSo...implement an OO 'environment' with a non-OO language? :)\n\n\tMy experience is that for pretty much every pro, there is a\ncon...what are we losing with ORBit that we'd have with mico? Short\nand/or long term? mico is reputed to be Corba 2.2 compliant..orbit?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 13 Nov 1998 13:39:17 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "Michael Robinson <[email protected]> writes:\n> [ pithy summary of CORBA snipped --- thanks, Michael! ]\n\n> There are three ways I can see this working:\n\n> 1. As a simple alternative for the current FE<->BE communication protocol.\n> The SQL query engine continues to intermediate all transactions. This\n> has some benefits, but is really boring to me.\n\nI agree, that's not too exciting ... although replacing our current ad-hoc\nFE/BE protocol with a standards-based protocol (I assume CORBA has a\nrecognized standard for the wire-level protocol?) might well be worth doing.\n\n> 2. As an alternative to both the FE<->BE communication protocol and the\n> SQL query engine. In this case, programs could have efficient direct\n> row access, but all data transfers would still be shoved through a\n> socket (via the Internet Inter-Orb Protocol). This could be useful,\n> and mildly interesting.\n\nActually, I find that one Extremely Useful and indeed Fascinating ;-).\nIn the applications I'm currently using, a large fraction of the\nupdate queries act on single rows and have the form\n\tUPDATE table SET field(s) WHERE oid = 123456;\nThe overhead of doing this is horrendous, of course. Being able to\naccess individual rows as if they were CORBA objects would be a lovely\nperformance improvement, I suspect.\n\n> 3. As an alternative API to libpq that would allow, for example,\n> embedding a Python interpreter in the backend, with PostgreSQL tables\n> exposed through CORBA as native Python classes, and with high\n> performance via ORBit method shortcutting. This, in my opinion,\n> would be the most useful and interesting.\n\nI'm leery of this, not only because of the implementation problems other\npeople have mentioned (bringing the backend to a state where it is\nthread-safe would be a large effort), but because it subverts all the\nprotection and security reasons for having the Postgres frontend/backend\narchitecture in the first place. 
The *last* thing I'd want is client\ncode executing in the same process as the database server.\n\nHowever, if I understand things correctly, the CORBA interface will hide\nwhether client code is in the same process as the backend server or not;\nso we could each assemble the parts in the way we prefer. At least, I\ncould build my separate-processes setup right away, and we could work\ntowards making the backend thread-safe so you could build your\ninstallation your way.\n\nDoes CORBA have any provision for saying \"this object is not thread\nsafe, don't send it concurrent operations\"? If there's something along\nthat line, then maybe we don't have to fix the backend before it can\nlive in a common address space with the client...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Nov 1998 17:52:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA " }, { "msg_contents": "> > 1. simple alternative for the current FE<->BE communication \n> > protocol.\n> ... although replacing our current ad-hoc\n> FE/BE protocol with a standards-based protocol (I assume CORBA has a\n> recognized standard for the wire-level protocol?) might well be worth \n> doing.\n\nCorba does specify a wire-level protocol, and also several layers above\nthat, so we would get endian issues resolved as well as\nsecurity/encryption, authentication, etc. Also, it may (at least\npartially) address the need for different client libraries for different\nlanguages, since there are Corba bindings available for a bunch of\nlanguages. Some of that is up to the package one chooses to use...\n\n - Tom\n", "msg_date": "Sat, 14 Nov 1998 02:17:12 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> Michael Robinson <[email protected]> writes:\n> > [ pithy summary of CORBA snipped --- thanks, Michael! ]\n> \n> > There are three ways I can see this working:\n> \n> > 1. As a simple alternative for the current FE<->BE communication protocol.\n> > The SQL query engine continues to intermediate all transactions. This\n> > has some benefits, but is really boring to me.\n> \n> I agree, that's not too exciting ... although replacing our current ad-hoc\n> FE/BE protocol with a standards-based protocol (I assume CORBA has a\n> recognized standard for the wire-level protocol?) might well be worth doing.\n\nCurrent FE/BE protocol seems pretty optimized to me, but you should know\nthe best. Seems like a waste to try and get it to match some standard,\nespecially if we can just create a module to do it on top of the\nexisting protocol.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 13 Nov 1998 22:57:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Fri, 13 Nov 1998, Bruce Momjian wrote:\n\n> > Michael Robinson <[email protected]> writes:\n> > > [ pithy summary of CORBA snipped --- thanks, Michael! ]\n> > \n> > > There are three ways I can see this working:\n> > \n> > > 1. As a simple alternative for the current FE<->BE communication protocol.\n> > > The SQL query engine continues to intermediate all transactions. 
This\n> > > has some benefits, but is really boring to me.\n> > \n> > I agree, that's not too exciting ... although replacing our current ad-hoc\n> > FE/BE protocol with a standards-based protocol (I assume CORBA has a\n> > recognized standard for the wire-level protocol?) might well be worth doing.\n> \n> Current FE/BE protocol seems pretty optimized to me, but you should know\n> the best. Seems like a waste to try and get it to match some standard,\n> especially if we can just create a module to do it on top of the\n> existing protocol.\n\n\tExcept...if I'm understanding even half of this correctly...by\nimplementing CORBA at the FE/BE level, this effectively eliminates the\nneed for *us* to maintain a separate interface for each language we want\nto support, since that is what one of CORBA's design goals is...\n\n\tIn fact, again, if I'm understanding this correctly, this could\npotentially open us up to languages we currently don't support...?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 01:18:40 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> \tExcept...if I'm understanding even half of this correctly...by\n> implementing CORBA at the FE/BE level, this effectively eliminates the\n> need for *us* to maintain a separate interface for each language we want\n> to support, since that is what one of CORBA's design goals is...\n> \n> \tIn fact, again, if I'm understanding this correctly, this could\n> potentially open us up to languages we currently don't support...?\n\nYea, that would be neat. But considering no one really totally supports\nCORBA yet, and we already have tons of working interfaces, perhaps we\ncan consider it in the future, or were you thinking in the next 6-9\nmonths?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 14 Nov 1998 02:34:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Fri, Nov 13, 1998 at 01:39:17PM -0400, The Hermit Hacker wrote:\n> > GNOME started with Mico. Mico, apparently, makes use of C++ templates, \n> > which caused the compiler they were using to generate bloated, wallowing \n> > code.\n> \n> \tIs that still accurate today?\n\nI think so, yes.\n\n> > GNOME then adopted ORBit, which has two wins: it's in C, and (this is\n> > the biggy) it has provisions to shortcut parameter marshalling,\n> \n> \tSo...implement an OO 'environment' with a non-OO language? :)\n> \n> \tMy experience is that for pretty much every pro, there is a\n> con...what are we losing with ORBit that we'd have with mico? Short\n> and/or long term? mico is reputed to be Corba 2.2 compliant..orbit?\n\nIn the short term we lose lots of functions with ORBit. In the long run\nhowever we win performance. \n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! 
Use Debian GNU/Linux!\n\n\n", "msg_date": "Sat, 14 Nov 1998 14:19:21 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Bruce Momjian wrote:\n\n> > \tExcept...if I'm understanding even half of this correctly...by\n> > implementing CORBA at the FE/BE level, this effectively eliminates the\n> > need for *us* to maintain a separate interface for each language we want\n> > to support, since that is what one of CORBA's design goals is...\n> > \n> > \tIn fact, again, if I'm understanding this correctly, this could\n> > potentially open us up to languages we currently don't support...?\n> \n> Yea, that would be neat. But considering no one really totally supports\n> CORBA yet, and we already have tons of working interfaces, perhaps we\n> can consider it in the future, or were you thinking in the next 6-9\n> months?\n\n\tGuess that's the next question (vs statement)...who actually\nsupports Corba at this time? two, off the top of my head, are Gnome and\nKoffice...anyone know of a list of others?\n\n\tAs for 6-9 months...I think this is more in Michael's court than\nanything...I don't see why work can't start on it now, even if it's nothing\nmore than Michael submitting patches that have the relevant sections\n#ifdef's so that they are only enabled for those working on it. I don't\nimagine this is going to be a \"now it isn't, now it is\" sort of thing...it\nmight take 6-9 months to implement...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 10:19:17 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> > \tExcept...if I'm understanding even half of this correctly...by\n> > implementing CORBA at the FE/BE level, this effectively eliminates the\n> > need for *us* to maintain a separate interface for each language we want\n> > to support, since that is what one of CORBA's design goals is...\n> > \n> > \tIn fact, again, if I'm understanding this correctly, this could\n> > potentially open us up to languages we currently don't support...?\n> \n> Yea, that would be neat. But considering no one really totally supports\n> CORBA yet, and we already have tons of working interfaces, perhaps we\n> can consider it in the future, or were you thinking in the next 6-9\n> months?\n\nI think I get it now. Currently, all the non-C interfaces use libpq to\ngo over the wire to the backend. If we made the FE/BE protocol CORBA, we\ncould modify libpq, and all the current interfaces would still work. \nThen if someone came up with a Smalltalk-to-CORBA interface, they could\nuse it for PostgreSQL. Also, if someone came up with a better\nPerl-to-CORBA interface, we could throw ours away, and just use that\none.\n \nWould be nice. Hope there is no performance penalty.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 14 Nov 1998 10:38:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Yea, that would be neat. 
But considering no one really totally supports\n> CORBA yet, and we already have tons of working interfaces, perhaps we\n> can consider it in the future, or were you thinking in the next 6-9\n> months?\n\nI'm not sure what Michael was thinking, but I was seeing this as a\nlong-term kind of project. Maybe for the release after 6.5, or even\nthe one after that. (Do we have any definite plans for future release\nfrequency?)\n\nEven if the open-source ORBs are adequately up-to-speed today (which\nsounded iffy), we need to learn about CORBA, or import some expertise\nfrom somewhere, before we can do much. This is unknown territory for\nme, and evidently also for most of the pgsql crew. I'd be inclined to\njust play around for a few months and try to write a paper design.\n\nIt does sound like an idea worth pursuing, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Nov 1998 11:19:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Current FE/BE protocol seems pretty optimized to me, but you should know\n> the best. Seems like a waste to try and get it to match some standard,\n> especially if we can just create a module to do it on top of the\n> existing protocol.\n> \n\nIt may be pretty optimized, but in its current implementation it seems \nit has just evolved into the current shape, and was not engineered to be \nan efficient and consistent interface.\n\nThe way it has become what it is seems to have grown from the need to \ntalk to the backend directly using telnet. \n\nOtherwise the protocol is a bitch to implement (compared, for example, to\nX)\nas it is very inconsistent in the ways lengths are specified, and also lacks \nmany features needed from the standard SQL CLI spec, like prepared \nstatements and such; (Implementing prepared statements on top of the \ncurrent protocol would need quite a lot of gum, wire and black tape)\n\nThe binary protocol is very much lacking (read unimplemented) - there \nare places for network input/output functions in the pg_type table, \n(I guess typsend/typreceive were meant to be used in binary mode \ncursors) but they are not used (they are always the same as \ntypoutput/typsend)\n\nhaving really standard network independent binary representations \n(using typsend/typreceive) for all types would be one thing to ease \nimplementation of higher-level client libraries (ODBC,JDBC,...)\n\nCurrently everything goes through ascii as it is the only \narchitecture and network independent way.\n\nI have been entertaining an idea to design a new protocol based on \nthe X Window protocol, but probably the CORBA one (GIOP ?) is more advanced, \nat least in fields like security, marshalling of new user-defined \ndatatypes and others. It probably does not suffer from arbitrary 8k \nlimits in unexpected places ;)\n\nSo, as we will need a new FE<->BE protocol anyway, why not try to \nuse CORBA?\n\n-------------\nHannu Krosing\n", "msg_date": "Sat, 14 Nov 1998 19:40:36 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "Bruce Momjian wrote:\n>\n> > Yea, that would be neat. But considering no one really totally supports\n> > CORBA yet, and we already have tons of working interfaces, perhaps we\n> > can consider it in the future, or were you thinking in the next 6-9\n> > months?\n> \n> I think I get it now. 
Currently, all the non-C interfaces use libpq to\n> go over the wire to the backend.\n\nAFAIK, many of them (ODBC,tcl(the portable one),JDBC) speak native\nwire-protocol.\n\n> If we made the FE/BE protocol CORBA, we\n> could modify libpq, and all the current interfaces would still work.\n> Then if someone came up with a Smalltalk-to-CORBA interface, they could\n> use it for PostgreSQL. Also, if someone came up with a better\n> Perl-to-CORBA interface, we could throw ours away, and just use that\n> one.\n> \n> Would be nice. Hope there is no performance penalty.\n\nI don't know about it, it probably depends on which ORB we will use.\n\n------------------\nHannu\n", "msg_date": "Sat, 14 Nov 1998 20:01:39 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> > Yea, that would be neat. But considering no one really totally supports\n> > CORBA yet, and we already have tons of working interfaces, perhaps we\n> > can consider it in the future, or were you thinking in the next 6-9\n> > months?\n> \n> I'm not sure what Michael was thinking, but I was seeing this as a\n> long-term kind of project. Maybe for the release after 6.5, or even\n> the one after that. (Do we have any definite plans for future release\n> frequency?)\n> \n> Even if the open-source ORBs are adequately up-to-speed today (which\n> sounded iffy), we need to learn about CORBA, or import some expertise\n> from somewhere, before we can do much. This is unknown territory for\n> me, and evidently also for most of the pgsql crew. I'd be inclined to\n> just play around for a few months and try to write a paper design.\n\nKeep an eye on the SQL CLI specs (or ODBC/JDBC as they are quite\nsimilar)\nwhen doing the design. \n\n> It does sound like an idea worth pursuing, though.\n\nIndeed.\n\n---------------\nHannu\n", "msg_date": "Sat, 14 Nov 1998 20:04:56 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> Bruce Momjian wrote:\n> >\n> > > Yea, that would be neat. But considering no one really totally supports\n> > > CORBA yet, and we already have tons of working interfaces, perhaps we\n> > > can consider it in the future, or were you thinking in the next 6-9\n> > > months?\n> > \n> > I think I get it now. Currently, all the non-C interfaces use libpq to\n> > go over the wire to the backend.\n> \n> AFAIK, many of them (ODBC,tcl(the portable one),JDBC) speak native\n> wire-protocol.\n\nOops, I forgot about those.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 14 Nov 1998 13:48:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, Nov 14, 1998 at 02:34:04AM -0500, Bruce Momjian wrote:\n> Yea, that would be neat. But considering no one really totally supports\n> CORBA yet, and we already have tons of working interfaces, perhaps we\n> can consider it in the future, or were you thinking in the next 6-9\n> months?\n\nI'd say start working now. But when it will be ready is up in the air. And\nhey why shouldn't we be the first?\n\nMichael\n-- \nDr. 
Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Sat, 14 Nov 1998 19:53:51 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, Nov 14, 1998 at 10:19:17AM -0400, The Hermit Hacker wrote:\n> supports Corba at this time? two, off the top of my head, are Gnome and\n> Koffice...anyone know of a list of others?\n\nKoffice? Is there anything like this yet? I know they are planning for it.\nOr did you mean KDE? Do they use an ORB? Which one?\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Sat, 14 Nov 1998 19:54:41 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Michael Meskes wrote:\n\n> I'd say start working now. But when it will be ready is up in the air. And\n> hey why shouldn't we be the first?\n> \n> Michael\n\nI *like* that attitude! I'm researching CORBA now, to see if ORBit is\nsufficient for our needs.\n\nTaral\n\n", "msg_date": "Sat, 14 Nov 1998 14:25:36 -0600 (CST)", "msg_from": "Taral <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> > > \n> > > \tMy experience is that for pretty much every pro, there is a\n> > > con...what are we losing with ORBit that we'd have with mico? Short\n> > > and/or long term? mico is reputed to be Corba 2.2 compliant..orbit?\n> > \n> > In the short term we lose lots of functions with ORBit. In the long run\n> > however we win performance. \n> \n> \tI was talking with a friend of mine down here, that I was *hoping*\n> would have posted by now since I really really hate translating what\n> someone else said, but...\n\nok, ok, Marc... you've got me into it now... :)\n\nI started thinking about that talking to Marc about the CORBA ORB issue.\nGNOME wants ORBit, BSD'ers want mico, linux'ers want omniorb, etc...\n(forgive any possible generalizations above)\n\nBasically what I was saying to Marc is that the really ORB-dependent part\nof CORBA is the IDL-to-<language> mappings. The logic of the CORBA object\ncan be implemented as a normal object independent of the compiled stub.\nThen, all that has to be done is to fill in the stubs with code that \nacts as an adaptor to the ORB-independent object. That way porting to\nanother ORB should be trivial. \n\nSomeone out there correct me if I'm wrong... :)\n\nI do like the idea of using CORBA to access postgreSQL. There is the\nobvious advantage of not having to worry about porting libpq to every\nsystem and language. CORBA ORB vendors/developers can take care of that. \n\nWhat really makes me curious is what the object's interface will be...\nJust a clone of libpq, or something more?... I think someone earlier \nmentioned possibly doing things like directly referencing \"objects.\"\nI think that the use of CORBA could allow for much more functionality\nthan a classical SQL RDBMS interface. 
I don't know that much about the\nbackend, but it could be interesting to throw some ideas back and forth\nabout what could be done with the interface.\n\nI would personally prefer to have a more natural OO interface to \nthe database, possibly expanding on the idea of being able to directly\nreference objects (and hold the reference to them). \n\nDuane\n\n--------------------------------------------\nDuane Currie ([email protected])\nUnix System Administrator/Software Developer\nAcadia Institute for Teaching and Technology\n--------------------------------------------\n\n\n", "msg_date": "Sat, 14 Nov 1998 23:40:21 +0000 (AST)", "msg_from": "Duane Currie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Michael Meskes wrote:\n\n> On Fri, Nov 13, 1998 at 01:39:17PM -0400, The Hermit Hacker wrote:\n> > > GNOME started with Mico. Mico, apparently, makes use of C++ templates, \n> > > which caused the compiler they were using to generate bloated, wallowing \n> > > code.\n> > \n> > \tIs that still accurate today?\n> \n> I think so, yes.\n> \n> > > GNOME then adopted ORBit, which has two wins: it's in C, and (this is\n> > > the biggy) it has provisions to shortcut parameter marshalling,\n> > \n> > \tSo...implement an OO 'environment' with a non-OO language? :)\n> > \n> > \tMy experience is that for pretty much every pro, there is a\n> > con...what are we losing with ORBit that we'd have with mico? Short\n> > and/or long term? mico is reputed to be Corba 2.2 compliant..orbit?\n> \n> In the short term we lose lots of functions with ORBit. In the long run\n> however we win performance. \n\n\tI was talking with a friend of mine down here, that I was *hoping*\nwould have posted by now since I really really hate translating what\nsomeone else said, but...\n\n\tWe should be able to create a 'mapping' include file, like we've\ndone in other places, that maps one 'corba' implementation's functions to\nour own, so that which implementation (OMNIorb, ORBit, MICO, etc) someone\nuses is up to them.\n\n\tFor instance, I already have MICO installed because of\nkoffice...I'd prefer not to have to install *another* one because I want\nto use it for PostgreSQL.\n\n\tMy thought, at this point in time, is that this should be\nimplemented initially in mico, since ORBit is missing 857+ 'methods'(?)\n... but program it generic enough that by a simple #ifdef in a single\ninclude file, it can be extended to any of the other ones as desired...\n\n\t...basically, don't lock us down to one implementation...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 21:49:48 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Bruce Momjian wrote:\n\n> > > \tExcept...if I'm understanding even half of this correctly...by\n> > > implementing CORBA at the FE/BE level, this effectively eliminates the\n> > > need for *us* to maintain a separate interface for each language we want\n> > > to support, since that is what one of CORBA's design goals is...\n> > > \n> > > \tIn fact, again, if I'm understanding this correctly, this could\n> > > potentially open us up to languages we currently don't support...?\n> > \n> > Yea, that would be neat. 
But considering no one really totally supports\n> > CORBA yet, and we already have tons of working interfaces, perhaps we\n> > can consider it in the future, or were you thinking in the next 6-9\n> > months?\n> \n> I think I get it now. Currently, all the non-C interfaces use libpq to\n> go over the wire to the backend. If we made the FE/BE protocol CORBA, we\n> could modify libpq, and all the current interfaces would still work. \n> Then if someone came up with a Smalltalk-to-CORBA interface, they could\n> use it for PostgreSQL. Also, if someone came up with a better\n> Perl-to-CORBA interface, we could throw ours away, and just use that\n> one.\n\n\tYa, was talking to Duane today (he's out there somewhere...just\nshy or something *grin*) ... he said there is, get this, a COBOL-to-Corba\ninterface... :)\n\n\tIt basically opens us up to more languages supported without\nhaving to actually support them :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 21:51:41 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Michael Meskes wrote:\n\n> On Sat, Nov 14, 1998 at 02:34:04AM -0500, Bruce Momjian wrote:\n> > Yea, that would be neat. But considering no one really totally supports\n> > CORBA yet, and we already have tons of working interfaces, perhaps we\n> > can consider it in the future, or were you thinking in the next 6-9\n> > months?\n> \n> I'd say start working now. But when it will be ready is up in the air. And\n> hey why shouldn't we be the first?\n\n\tWe weren't sure over here, but does Oracle support ORBs?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 21:55:27 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Michael Meskes wrote:\n\n> On Sat, Nov 14, 1998 at 10:19:17AM -0400, The Hermit Hacker wrote:\n> > supports Corba at this time? two, off the top of my head, are Gnome and\n> > Koffice...anyone know of a list of others?\n> \n> Koffice? Is there anything like this yet? I know they are planning for it.\n> Or did you mean KDE? Do they use an ORB? Which one?\n\n\tKoffice is in alpha stages right now, screen shots look good...I\njust finally got a clean compile of mico under FreeBSD, so next is to get\nkoffice done up too...\n\n\tAnd, yes, they use an orb...mico, in particular :)\n\n\tCheck out koffice.kde.org...looks promising...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 21:57:18 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Taral wrote:\n\n> On Sat, 14 Nov 1998, Michael Meskes wrote:\n> \n> > I'd say start working now. But when it will be ready is up in the air. And\n> > hey why shouldn't we be the first?\n> > \n> > Michael\n> \n> I *like* that attitude! 
I'm researching CORBA now, to see if ORBit is\n> sufficient for our needs.\n\nTaral...\n\n\tI'm putting *one* condition on all this...any code added to C\nfiles *must* be of the sort that is generic enough that someone can decide\nupon installation which implementation they want to use. For this, you\nwill need to do a corba.h file that contains:\n\n#ifdef USING_MICO\n#define ... ...\n#elif USING_ORBIT\n#define ... ...\n#endif\n\n\tI don't care if what you guys are working on only has ORBit stuff\nin the corba.h file, but I (or anyone else) has to be able to modify that\none file in order for it to work with MICO also...or OMNIorb, or some\ncommercial product...\n\n\tAnd, as I said before, there is no reason why any one of you has\nto get a \"working model\" on your machine before starting to submit\npatches. That kills productivity from the standpoint that two ppl are\ngoing to end up working on the same functionality, independent of each\nother...\n\n\tFeel free to submit code such that:\n\n#ifdef ENABLE_CORBA\n\t<corba related stuff>\n#else /* original method */\n\t<old style stuff>\n#endif\n\n\tToo easy to add a --enable-corba option to configure...but at\nleast this way everyone sees what and where everyone else is working. In\nthe end, when it comes time to do it, we can strip that all out, but this\nis looking like a fairly long term project, and with the ongoing changes\nthat go on in the code, getting the CORBA code in while all the other\nchanges are going on is a good thing in that it keeps code trees in\nsync...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 22:05:43 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, The Hermit Hacker wrote:\n\n> \tWe weren't sure over here, but does Oracle support ORBs?\n\nThey have their own (very good) ORB included with their Application\nServer platform.\n\n--\nTodd Graham Lewis 32°49'N,83°36'W (800) 719-4664, x2804\n******Linux****** MindSpring Enterprises [email protected]\n\n", "msg_date": "Sat, 14 Nov 1998 21:24:18 -0500 (EST)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, The Hermit Hacker wrote:\n\n> \tFor instance, I already have MICO installed because of\n> koffice...I'd prefer not to have to install *another* one because I want\n> to use it for PostgreSQL.\n\nIf I may, I'd like to put in a plug for ORBit. DISCLAIMER: I am the\nGNOME FAQ maintainer.\n\nThe GNOME project abandoned MICO and began the ORBit project for two\nmain reasons: MICO is a memory hog, and MICO is pretty deeply C++-only.\nWe took a serious look at fixing these problems with MICO, but could not\nsee a way to solve them. (I think that some of it had to do with MICO's\ndependence on C++ templates, but I am not sure. I only know of this\nevaluation second-hand.)\n\nThe ORBit developers intend on ORBit being fast, compliant, and\nmulti-lingual. They are right now working on C++ support so that our\nKDE brothers can make use of ORBit and help us develop it.\n\nAgain, I am a GNOME partisan, but I really do think that ORBit is a\ngood choice and a reliable long-term investment. I encourage the\nPostgreSQL developers to consider it.\n\nBTW, I've recently joined the hackers list. 
I've used PostgreSQL for\nover a year and just began another project using it. I'll mostly be\nlurking, but if I can be of help to any of you, then please feel free\nto call on me.\n\n--\nTodd Graham Lewis 32°49'N,83°36'W (800) 719-4664, x2804\n******Linux****** MindSpring Enterprises [email protected]\n\n", "msg_date": "Sat, 14 Nov 1998 21:29:10 -0500 (EST)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Todd Graham Lewis wrote:\n\n> On Sat, 14 Nov 1998, The Hermit Hacker wrote:\n> \n> > \tFor instance, I already have MICO installed because of\n> > koffice...I'd prefer not to have to install *another* one because I want\n> > to use it for PostgreSQL.\n> \n> If I may, I'd like to put in a plug for ORBit. DISCLAIMER: I am the\n> GNOME FAQ maintainer.\n\n\tthis isn't actually a voting process here...I don't want us locked\ninto one implementation. there are ppl out there that would prefer to use\nmico because it is already installed on their system for some reason or\nanother...there should be absolutely no reason why this can't be coded\ngeneric enough that, through a simple switch when running configure, one\nor the other can be used...no?\n\n> Again, I am a GNOME partisan, but I really do think that ORBit is a\n> good choice and a reliable long-term investment. I encourage the\n> PostgreSQL developers to consider it.\n\n\tI would rather our implementation be \"non-partisan\"...what happens\nin a year's time, or 6 months, when the \"faults\" in mico disappear, if they\ndo? at least if we work at staying non-partisan as far as which\nimplementation is used, switching will be transparent...all of them will\nalready be supported.\n\n\tBear in mind that ORBit and MICO are free implementations...what\nabout the commercial ones out there? Just like we tend to try and support\neach OS's C compiler, I want us to stay non-partisan as far as any\n\"external tools\" are concerned...\n\n Marc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 22:38:34 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "Hear ye! Hear ye!\n\nI've just started stressing my brain on the CORBA 2 docs... 
It turns out\nthat CORBA will be completely compatible with the existing system.\n\npostmaster will register a \"method server\" with the ORB (which is a separate\nprocess in mico, at least). A \"method server\" is one which requires a\nseparate process for every method invocation. Which is basically what we do\nnow with our one-backend-per-connection system. At least until the backend\nis thread-safe, we're stuck with it. When the backend is thread-safe, we can\nswitch to \"unshared server\" where we have one per database. :)\n\nTaral\n\nP.S. When will that corba-specific list be set up? I'm sure that well over\nhalf of the list didn't care one bit about what I just said.\n\n", "msg_date": "Sun, 15 Nov 1998 00:29:47 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sun, 15 Nov 1998, Taral wrote:\n\n> Hear ye! Hear ye!\n> \n> I've just started stressing my brain on the CORBA 2 docs... It turns out\n> that CORBA will be completely compatible with the existing system.\n> \n> postmaster will register a \"method server\" with the ORB (which is a separate\n> process in mico, at least). A \"method server\" is one which requires a\n> separate process for every method invocation. Which is basically what we do\n> now with our one-backend-per-connection system. At least until the backend\n> is thread-safe, we're stuck with it. When the backend is thread-safe, we can\n> switch to \"unshared server\" where we have one per database. :)\n> \n> Taral\n> \n> P.S. When will that corba-specific list be set up? I'm sure that well over\n> half of the list didn't care one bit about what I just said.\n\n\tSee Bruce's post...discussion is moved to pgsql-interfaces...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Nov 1998 02:41:46 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> I started thinking about that talking to Marc about the CORBA ORB issue.\n> GNOME wants ORBit, BSD'ers want mico, linux'ers want omniorb, etc...\n> (forgive any possible generalizations above)\n>\n> Basically what I was saying to Marc is that the really ORB-dependent part\n> of CORBA is the IDL-to-<language> mappings. The logic of the CORBA object\n> can be implemented as a normal object independent of the compiled stub.\n> Then, all that has to be done is to fill in the stubs with code that\n> acts as an adaptor to the ORB-independent object. That way porting to\n> another ORB should be trivial.\n\nNo, the IDL-to-language mappings are precisely defined by the CORBA 2.0\nspecification. However we do need a C to C++ thunking layer, as I said in a\nprevious post (that may have gotten lost in the e-mail cloud). The C++ API\nas defined by CORBA 2.0 is quite different in certain areas (structs\nreplaced by classes, etc.)\n\n> Someone out there correct me if I'm wrong... :)\n\n:)\n\n> I do like the idea of using CORBA to access postgreSQL. There is the\n> obvious advantage of not having to worry about porting libpq to every\n> system and language. CORBA ORB vendors/developers can take care\n> of that.\n>\n> What really makes me curious is what the object's interface will be...\n> Just a clone of libpq, or something more?... I think someone earlier\n> mentioned possibly doing things like directly referencing \"objects.\"\n> I think that the use of CORBA could allow for much more functionality\n> than a classical SQL RDBMS interface. I don't know that much about the\n> backend, but it could be interesting to throw some ideas back and forth\n> about what could be done with the interface.\n\nThe initial proof-of-concept will be a libpq wrapper. Later, we will code a\nfull alternate to the frontend, eventually hoping to have the FE/BE protocol\nreplaced with CORBA... (afaik)\n\n> I would personally prefer to have a more natural OO interface to\n> the database, possibly expanding on the idea of being able to directly\n> reference objects (and hold the reference to them).\n\nWe hope to be able to export objects with a Row interface, for example,\nwhich effectively refer to a single row in the database. 
That way expensive\nqueries of the type 'UPDATE table SET column = data WHERE pkey = value' are\navoided.\n\nTaral\n\n", "msg_date": "Sun, 15 Nov 1998 08:43:05 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, Nov 14, 1998 at 09:49:48PM -0400, The Hermit Hacker wrote:\n> \tFor instance, I already have MICO installed because of\n> koffice...I'd prefer not to have to install *another* one because I want\n> to use it for PostgreSQL.\n\nHey Marc, this shouldn't be the reason to decide which ORB to use. I do like\nhowever to allow to use every ORB you like.\n\nMichael\n\n[Still posted to hackers as I'm not sure I'm subscribed to interfaces. Will\ncheck that later.]\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Mon, 16 Nov 1998 07:49:31 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, Nov 14, 1998 at 09:57:18PM -0400, The Hermit Hacker wrote:\n> \tKoffice is in alpha stages right now, screen shots look good...I\n> just finally got a clean compile of mico under FreeBSD, so next is to get\n> koffice done up too...\n\nSo what software do they use? In particular of course I'd like to know\nwhether they use PostgreSQL as DB backend.\n\n> \tAnd, yes, they use an orb...mico, in particular :)\n\nBut mico isn't used for the complete KDE system (/or is it?), whereas Gnome\nuses its ORB almost everywhere it seems.\n\nMichael\n\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Mon, 16 Nov 1998 12:31:31 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, Nov 14, 1998 at 10:38:34PM -0400, The Hermit Hacker wrote:\n> On Sat, 14 Nov 1998, Todd Graham Lewis wrote:\n> \n> > On Sat, 14 Nov 1998, The Hermit Hacker wrote:\n> > \n> > > \tFor instance, I already have MICO installed because of\n> > > koffice...I'd prefer not to have to install *another* one because I want\n> > > to use it for PostgreSQL.\n> > \n> > If I may, I'd like to put in a plug for ORBit. DISCLAIMER: I am the\n> > GNOME FAQ maintainer.\n> \n> \tthis isn't actually a voting process here...I don't want us locked\n> into one implementation. there are ppl out there that would prefer to use\n> mico because it is already installed on their system for some reason or\n> another...there should be absolutely no reason why this can't be coded\n> generic enough that, through a simple switch when running configure, one\n> or the other can be used...no?\n\nYou may not want that, but you had better look again. CORBA defines the\nwire transactions very carefully. It defines the API for invoking \nservices very carefully. It very carefully avoids defining client\ninitialization, memory layout, API, etc. There is limited code \nportability. Yes, conditional compilation will work, but it will be\na lot of effort, probably as much as maintaining a NT/Unix port.\nChoose the first ORB carefully! 
Internal APIs for service \nimplementation are similarly undefined.\n\n> \n> > Again, I am a GNOME partisan, but I really do think that ORBit is a\n> > good choice and a reliable long-term investment. I encourage the\n> > PostgreSQL developers to consider it.\n> \n> \tI would rather our implementation be \"non-partisan\"...what happens\n> in a year's time, or 6 months, when the \"faults\" in mico disappear, if they\n> do? at least if we work at staying non-partisan as far as which\n> implementation is used, switching will be transparent...all of them will\n> already be supported.\n> \n\nPersonally I have trouble with both ORBit and MICO. MICO is far too\nmemory intensive and SLOW. Choosing MICO will be an enormous \nperformance hit.\n\nI worry about ORBit because I think that GNOME has seriously \nunderestimated the amount of work involved in implementing an\n ORB, not to mention the services. \n\nIf you are going to do this, look carefully before you leap.\nAt least look at ORBit, OmniORB2, ACE+TAO, ilu. Drop MICO\nas the performance will simply be too bad.\n\n> \tBear in mind that ORBit and MICO are free implementations...what\n> about the commercial ones out there? Just like we tend to try and support\n> each OS's C compiler, I want us to stay non-partisan as far as any\n> \"external tools\" are concerned...\n> \n\nGreat ideal, but as CORBA lacks this level of source compatibility,\nyou can't get there.\n\n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n", "msg_date": "Mon, 16 Nov 1998 11:59:39 -0500", "msg_from": "Jim Penny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Mon, 16 Nov 1998, Michael Meskes wrote:\n\n> On Sat, Nov 14, 1998 at 09:49:48PM -0400, The Hermit Hacker wrote:\n> > \tFor instance, I already have MICO installed because of\n> > koffice...I'd prefer not to have to install *another* one because I want\n> > to use it for PostgreSQL.\n> \n> Hey Marc, this shouldn't be the reason to decide which ORB to use. I do like\n> however to allow to use every ORB you like.\n\n\tWas just an example...*roll eyes* *grin*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Nov 1998 15:34:50 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Mon, 16 Nov 1998, Michael Meskes wrote:\n\n> On Sat, Nov 14, 1998 at 09:57:18PM -0400, The Hermit Hacker wrote:\n> > \tKoffice is in alpha stages right now, screen shots look good...I\n> > just finally got a clean compile of mico under FreeBSD, so next is to get\n> > koffice done up too...\n> \n> So what software do they use? 
In particular of course I'd like to know\n> whether they use PostgreSQL as DB backend.\n\n\tNone, right now...I could be wrong though, haven't been able to\nget it up and running yet :(\n\n> > \tAnd, yes, they use an orb...mico, in particular :)\n> \n> But mico isn't used for the complete KDE system (/or is it?), whereas Gnome\n> uses its ORB almost everywhere it seems.\n\n\tFrom what Taral has investigated and come up with, if we stick with\na 2.2 implementation of Corba (mico is the only free implementation, that I am\naware of, that is fully 2.2 compliant...the rest are all 2.0 still), then\nwhen the other ORBs come up to speed (if?), making it work\nwith them should be relatively easy.\n\n\tAccording to what Taral has found out so far, 2.2's API is very\nrigidly defined, whereas 2.0's is left up to the implementor...\n\n\tThe end result I'd like to see is for PostgreSQL to be able to be\ncompiled against whatever implementation the installer wishes, as long as\nthey are 2.2 compliant...\n\n\tQuite frankly, I want the *core* to be built around a complete\nimplementation...if someone is really hot on ORBit, and wants to add in\ncompatibility code so that it will work with ORBit, so be it...but I want\nthe *core* to be 2.2 clean/compliant...and mico is the only ORB out there\nat this point in time that will allow that...\n \n Marc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Nov 1998 15:42:45 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> I *like* that attitude! I'm researching CORBA now, to see if ORBit is\n> sufficient for our needs.\n\nPlease look at ILU too... Predates CORBA, but has CORBA compatibility\nand lots of history/development. afaik ORBit doesn't have broad\nmulti-language support, but I suppose we could use ILU for clients and\nORBit for the backend server, which is already C.\n\n - Tom\n", "msg_date": "Tue, 17 Nov 1998 02:38:24 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Mon, Nov 16, 1998 at 03:42:45PM -0400, The Hermit Hacker wrote:\n> \tFrom what Taral has investigated and come up with, if we stick with\n> a 2.2 implementation of Corba (mico is the only free implementation, that I am\n> aware of, that is fully 2.2 compliant...the rest are all 2.0 still), then\n> when the other ORBs come up to speed (if?), making it work\n> with them should be relatively easy.\n\nI agree that we should go with 2.2.\n\n> \tThe end result I'd like to see is for PostgreSQL to be able to be\n> compiled against whatever implementation the installer wishes, as long as\n> they are 2.2 compliant...\n\nYes.\n\n> \tQuite frankly, I want the *core* to be built around a complete\n> implementation...if someone is really hot on ORBit, and wants to add in\n> compatibility code so that it will work with ORBit, so be it...but I want\n> the *core* to be 2.2 clean/compliant...and mico is the only ORB out there\n> at this point in time that will allow that...\n\nI still think eventually PostgreSQL will run on ORBit on most systems. :-)\n\nMichael\n\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! 
Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Tue, 17 Nov 1998 10:17:06 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Tue, 17 Nov 1998, Michael Meskes wrote:\n\n> On Mon, Nov 16, 1998 at 03:42:45PM -0400, The Hermit Hacker wrote:\n> > \tFrom what Taral has investigated and come up with, if we stick with\n> > a 2.2 implementation of Corba (mico is the only free implementation, that I am\n> > aware of, that is fully 2.2 compliant...the rest are all 2.0 still), then\n> > when the other ORBs come up to speed (if?), making it work\n> > with them should be relatively easy.\n> \n> I agree that we should go with 2.2.\n> \n> > \tThe end result I'd like to see is for PostgreSQL to be able to be\n> > compiled against whatever implementation the installer wishes, as long as\n> > they are 2.2 compliant...\n> \n> Yes.\n> \n> > \tQuite frankly, I want the *core* to be built around a complete\n> > implementation...if someone is really hot on ORBit, and wants to add in\n> > compatibility code so that it will work with ORBit, so be it...but I want\n> > the *core* to be 2.2 clean/compliant...and mico is the only ORB out there\n> > at this point in time that will allow that...\n> \n> I still think eventually PostgreSQL will run on ORBit on most systems. :-)\n> \n\n\tPossibly...which makes a few assumptions though...1st is that ORBit\never catches up to mico and 2nd that mico doesn't speed up as it matures :)\n\n\tPersonally, the only thing that I care about is that *we* aren't\nthe ones that are scrambling to catch up to the 2.2 standard...we have a\nclean (albeit slow) solution to work with now, let the other\nimplementations catch up to that...\n\n Marc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Nov 1998 09:23:58 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Tue, 17 Nov 1998, Michael Meskes wrote:\n\n> > \tQuite frankly, I want the *core* to be built around a complete\n> > implementation...if someone is really hot on ORBit, and wants to add in\n> > compatibility code so that it will work with ORBit, so be it...but I want\n> > the *core* to be 2.2 clean/compliant...and mico is the only ORB out there\n> > at this point in time that will allow that...\n> \n> I still think eventually PostgreSQL will run on ORBit on most systems. :-)\n\n\tAddendum...why do you use PostgreSQL? Speed or features? :)\n\n\tORBit is to MICO as MSQL is to PostgreSQL :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Nov 1998 09:25:10 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "Marc G. 
Fournier writes:\n> On Tue, 17 Nov 1998, Michael Meskes wrote:\n> \n> > > \tQuite frankly, I want the *core* to be built around a complete\n> > > implementation...if someone is really hot on ORBit, and wants to add in\n> > > compatibility code so that it will work with ORBit, so bit it...but I want\n> > > the *core* to be 2.2 clean/compliant...and mico is the only ORB out there\n> > > at this point in time that will allow that...\n> > \n> > I still think eventually PostgreSQL win run on ORBit on most systems. :-)\n> \n> \tAppendum...why do you use PostgreSQL? Speed or features? :)\n> \n> \tORBit is to MICO as MSQL is to POstgreSQL :)\n> \n\nOk, but I think the other sense of the question is even more interesting:\n\n why do people use MSQL instead of PostgreSQL?\n\nMy point is that while pg beats msql hands down on features, speed is \nimportant enough to a lot of users that they choose msql instead. If we\nhad more speed, we would have even more success than we do now.\n\n-dg\n\nDavid Gould [email protected] 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\nThe irony is that Bill Gates claims to be making a stable operating\nsystem and Linus Torvalds claims to be trying to take over the world.\n", "msg_date": "Tue, 17 Nov 1998 11:20:33 -0800 (PST)", "msg_from": "[email protected] (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Tue, Nov 17, 1998 at 09:23:58AM -0400, The Hermit Hacker wrote:\n> \tPossibly...which makes a few assumption though...1st is that ORBit\n> ever catches up to mico and 2 that mico doesn't speed up as it matures :)\n\nRight. These assumptions are made on a) Mico was thought to be an\neducational project and b) ORBit has the financial backing of RedHat.\n\nBut of course I might be wrong. :-)\n\n> \tPersonally, the only thing that I care about is that *we* aren't\n> the ones that are scrambling to catch up to the 2.2 standard...we have a\n> clean (albeit slow) solution to work with now, let the other\n> implementations catch up to that...\n\nTotally agreed.\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Tue, 17 Nov 1998 20:24:15 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Tue, Nov 17, 1998 at 09:25:10AM -0400, The Hermit Hacker wrote:\n> \tAppendum...why do you use PostgreSQL? Speed or features? :)\n\nBoth! :-)\n\n> \tORBit is to MICO as MSQL is to POstgreSQL :)\n\nI would think PostgreSQL is better than MICO in their respective area. :-)\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Tue, 17 Nov 1998 20:25:35 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" } ]
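Taral's Row idea above is easier to picture with a concrete stub. Under the standard OMG IDL-to-C mapping, each operation of an interface becomes a C function that takes the object reference plus a CORBA_Environment used for raising exceptions. The sketch below is purely illustrative: the Row interface, its operations, and the pg_Row naming are assumptions made for this example, not anything PostgreSQL or any ORB ships today (the CORBA_* types come from whichever ORB's C headers are used).

    /* Hypothetical IDL:
     *   interface Row {
     *     string get(in string column);
     *     void   set(in string column, in string value);
     *     void   store();   (write back; no hand-built UPDATE statement)
     *   };
     * Under the OMG C mapping this surfaces roughly as: */
    typedef CORBA_Object pg_Row;   /* opaque object reference */

    CORBA_char *pg_Row_get(pg_Row row, CORBA_char *column,
                           CORBA_Environment *ev);
    void pg_Row_set(pg_Row row, CORBA_char *column, CORBA_char *value,
                    CORBA_Environment *ev);
    void pg_Row_store(pg_Row row, CORBA_Environment *ev);

    /* Client side, the expensive-query case from above becomes: */
    void update_name(pg_Row row, CORBA_Environment *ev)
    {
        pg_Row_set(row, "name", "new value", ev);
        if (ev->_major != CORBA_NO_EXCEPTION)   /* C-mapping exception check */
            return;
        pg_Row_store(row, ev);
    }

The repeated _major test after every call is the per-call overhead the C mapping imposes, worth keeping in mind when comparing it with the C++ mapping's exception handling.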
[ { "msg_contents": "\n> What do the others say?\n> \nIn my opinion a redo log is more than worth the overhead. If it is\npossible to keep those databases that aren't using the log from creating\nthe shmem and semaphores until logging is turned on for them, I'd say\nyou'd eliminated all of the possible arguments.\n\t-DEJ\n\n\n> Jan\n> \n> \n", "msg_date": "Fri, 13 Nov 1998 12:38:28 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] shmem limits and redolog" }, { "msg_contents": ">\n>\n> > What do the others say?\n> >\n> In my opinion a redo log is more than worth the overhead. If it is\n> possible to keep those databases that aren't using the log from creating\n> the shmem and semaphores until logging is turned on for them, I'd say\n> you'd eliminated all of the possible arguments.\n> -DEJ\n>\n>\n\n The semaphore set I would like to stay at least. Because it's\n a way to make pg_dump capable of totally consistent online\n backups.\n\n Let's say pg_dump first issues an\n\n ALTER DATABASE BEGIN BACKUP;\n\n This will return when the last write lock on the database got\n released. It now dumps schema. During that phase, any query\n from another backend will suspend as soon as it requests a\n write lock. After finishing schema dump (including sequence\n states - that's the reason for the exclusive backup phase),\n pg_dump does an\n\n ALTER DATABASE ONLINE BACKUP;\n\n At this time, a logfile switch is done and pg_dump's backend\n changes it's behavior so all subsequent queries will return\n the data valid at the moment of the ONLINE BACKUP command.\n Now all other backend's can freely modify the database and do\n whatever they want and the suspended backends continue.\n pg_dump will not see their changes.\n\n When pg_dump finishes, it does an\n\n ALTER DATABASE END BACKUP;\n\n This stores information about the last successful backup and\n the first logfile sequence required to recover from this dump\n - the sequence of the logfile began at ONLINE BACKUP. And it\n turns back the special behaviour of pg_dump's backend. Last\n action of pg_dump is to add this info to the dump.\n\n Wouldn't that all be really nice? Having a productional\n database online, taking a full backup while the database is\n beeing updated plus having a transaction log that could\n recover a crash using that backup.\n\n The final redolog I'm planning will have more capabilities.\n Point-in-time recovery and online recovery of another\n database (on another system?) to have a second database in\n sync and beeing able to switchover in a crash situation, not\n requiring downtime for recovery.\n\n It's still a long way to there - I just made the first steps.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 13 Nov 1998 21:19:56 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] shmem limits and redolog" } ]
[ { "msg_contents": "Hi,\n\n I'm currently hacking around on a solution for logging all\n database operations at query level that can recover a crashed\n database from the last successful backup by redoing all the\n commands.\n\n Well, I wanted it to be as flexible as can. So I decided to\n make it per database configurable. One could say which\n databases are logged and if a database is, if it is logged\n sync or async (in sync mode, every COMMIT forces an fsync of\n the actual logfile and controlfiles).\n\n To make async mode as fast as can, I'm using a shared memory\n of 32K per database (not per backend) that is used as a wrap\n around buffer from the backends to place their query\n information. So the log writer can fall a little behind if\n there are many backends doing different things that don't\n lock each other.\n\n Now I'm a little in doubt about the shared memory limits\n reported. Was it a good decision to use shared memory? Am I\n better off using socket's?\n\n The bad thing in what I have up to now (it's far from\n complete) is, that even if a database isn't currently logged,\n a redolog writer is started and creates the 32K shmem segment\n (plus a semaphore set with 5 semaphores). This is because I\n plan to create commands like\n\n ALTER DATABASE LOG MODE=ASYNC LOGDIR='/somewhere/dbname';\n\n and the like that can be used at runtime (while more than one\n backend is connected to the database) to turn logging on/off,\n switch to/from backup mode (all other activity is stopped)\n etc.\n\n So every 32 databases will require another megabyte of shared\n memory. The logging master controls which databases have\n activity and kills redolog writers after some time of\n inactivity, and the shmem is freed then. But it can hurt if\n someone really has many many databases that are all used at\n the same time.\n\n What do the others say?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 13 Nov 1998 19:46:20 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "shmem limits and redolog" }, { "msg_contents": "> Hi,\n> \n> I'm currently hacking around on a solution for logging all\n> database operations at query level that can recover a crashed\n> database from the last successful backup by redoing all the\n> commands.\n\nCool. \n\nI have postings that describe a method of not f-sync'ing the pg_log or\ndata tables, while allowing proper recoery. Let me know if you want to\nread them. Hope you will.\n\n\n> Well, I wanted it to be as flexible as can. So I decided to\n> make it per database configurable. One could say which\n> databases are logged and if a database is, if it is logged\n> sync or async (in sync mode, every COMMIT forces an fsync of\n> the actual logfile and controlfiles).\n> \n> To make async mode as fast as can, I'm using a shared memory\n> of 32K per database (not per backend) that is used as a wrap\n> around buffer from the backends to place their query\n> information. So the log writer can fall a little behind if\n> there are many backends doing different things that don't\n> lock each other.\n> \n> Now I'm a little in doubt about the shared memory limits\n> reported. Was it a good decision to use shared memory? 
Am I\n> better off using sockets?\n\nShared memory is usually a good idea. We have researched the other\noptions. The only other option was anonymous mmap() of a file, but that\nis not supported by many OS's.\n\n> \n> The bad thing in what I have up to now (it's far from\n> complete) is, that even if a database isn't currently logged,\n> a redolog writer is started and creates the 32K shmem segment\n> (plus a semaphore set with 5 semaphores). This is because I\n> plan to create commands like\n> \n> ALTER DATABASE LOG MODE=ASYNC LOGDIR='/somewhere/dbname';\n> \n> and the like that can be used at runtime (while more than one\n> backend is connected to the database) to turn logging on/off,\n> switch to/from backup mode (all other activity is stopped)\n> etc.\n> \n> So every 32 databases will require another megabyte of shared\n> memory. The logging master controls which databases have\n> activity and kills redolog writers after some time of\n> inactivity, and the shmem is freed then. But it can hurt if\n> someone really has many many databases that are all used at\n> the same time.\n\nBecause most filesystems are 8k blocks, you could reduce it to 16k\nshared memory if you think you can write them out fast enough.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 17:40:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] shmem limits and redolog" } ]
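Since the question is whether shared memory was the right call, it may help to see how little machinery the 32K wrap-around buffer actually needs. The sketch below is an assumption about the layout, not Jan's actual code: two offsets plus a byte array in one System V segment, with backends appending and the redolog writer draining. The semaphore set Jan mentions would guard head and tail; the sketch leaves that out and simply fails when the buffer is full.

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    #define REDO_BUF_SIZE (32 * 1024 - 2 * (int) sizeof(int))

    typedef struct RedoBuf
    {
        int  head;                  /* next write offset (backends)  */
        int  tail;                  /* next read offset (log writer) */
        char data[REDO_BUF_SIZE];   /* wrap-around query text        */
    } RedoBuf;

    static RedoBuf *redo_attach(key_t key)
    {
        int   id = shmget(key, sizeof(RedoBuf), IPC_CREAT | 0600);
        void *p;

        if (id < 0)
            return NULL;
        p = shmat(id, NULL, 0);
        return (p == (void *) -1) ? NULL : (RedoBuf *) p;
    }

    /* Append nbytes of query text; returns 0 when the writer has fallen
     * a full buffer behind (real code would sleep on a semaphore). */
    static int redo_put(RedoBuf *b, const char *src, int nbytes)
    {
        int used = (b->head - b->tail + REDO_BUF_SIZE) % REDO_BUF_SIZE;
        int i;

        if (nbytes >= REDO_BUF_SIZE - used)
            return 0;
        for (i = 0; i < nbytes; i++)
            b->data[(b->head + i) % REDO_BUF_SIZE] = src[i];
        b->head = (b->head + nbytes) % REDO_BUF_SIZE;
        return 1;
    }

A socket would buy kernel-managed flow control instead of this, at the cost of an extra copy per query, which is probably why shared memory still wins for the async case.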
[ { "msg_contents": "On Fri, Nov 13, 1998 at 09:00:14AM -0400, The Hermit Hacker wrote:\n> On Fri, 13 Nov 1998, Michael Robinson wrote:\n> \n> > - The ORBit sources appear to be LGPL'ed, which means they can be linked to\n> > PostgreSQL without poisoning the BSD license.\n> \n> \tMico is also LGPL'd for the libraries...\n> \n> > I also have bad news to report.\n> > \n> > - Most of the CORBA functionality that PostgreSQL would rely on is currently\n> > unimplemented in ORBit.\n> \n> \tI don't know what is implemented, but check out:\n> \n> \thttp://www.vsb.cs.uni-frankfurt.de/~mico\n> \n> \tThey \"claim\" a completely 2.2 Corba implementation...\n> \n> > - While CORBA provides a very disciplined interface for allowing different\n> > object implementations (e.g. Python and PostgreSQL) to share the same address\n> > space and execution context safely and efficiently, the PostgreSQL backend\n> > doesn't seem ready for it. In particular, it doesn't appear to be thread\n> > safe. It may not even be reentrant, from what I can tell. And, if a backend\n> > process is not punctual about reading cache synchronization messages out of\n> > the IPC queue, it appears that excessive cache invalidation would hurt\n> > performance.\n> \n> \tHrmmm...does this mean that we are going to have to move towards a\n> threaded model vs forked? Or is it just going to require some major code\n> cleanups for the 'thread safe/reentrant' as aspect?\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\nOK, here as another viewpoint.\n\nDon't even think about MICO. Its design goals are pedagical. It is\nvery slow, very, very compiler hungry, and has no collocation \noptimization.\n\nILU is multi-language, and GPL uncontaminated. It seems to be\nunder development by a very small team. They just released the\n13th alpha. a12 to a13 was roughly a year. You need to be \nprepared for relatively slow code development, with code changes\nvisible only at release points.\n\nORBit is C only, which was very important to the GNOME group.\nHaving worked in C under ILU, I have come to think this a \nlarge mistake; working to overcome the hysterisis gets to be\nvery irritating, and checking for exceptions is a real pain.\nAlso, ORBit is a bare ORB with no services and not much \ndocumentation. \n\nAnother option I have seen no discussion of is TAO. Fast, aggresive\ndevelopment. CVS available, fair service support. Built on underlying\nlibrary. Essentially C++ only (some Java support). See \nhttp://www.cs.wustl.edu/~schmidt/. This is also not GPL.\n\nI am assuming that you have looked at OmniORB and rejected it for\nGPL reasons. If not, you should also consider it.\n\nFinally, I am not sure about the threading. It may be easier to think\nabout caching less and more about quick startup. It is not clear to\nme that having a thread-safe agent managing a pool of forked backends\nis inherently that much slower, and in many applications (like a\nWeb fronted one) is clearly a great improvement in terms of\nmemory usage. Of course, if cache invalidation is such a large\nperformance loss, the agent could attempt to minimize the effects\nby employing user affinity, or even better, if parsinng is easy\nenough, by relation affinity.\n\nWhat I would find intereting about this appoach has not been touched\non. 
It should make building a distributed database very easy.\n\nAh well, I don't have the time, and maybe not the talent to do the\nwork, so I will shut up and let you guys get back to the very well\ndone task of enhancing postgresql.\n\nThanks\n\nJim\n\n\n\n\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n----- End forwarded message -----\n", "msg_date": "Fri, 13 Nov 1998 14:54:03 -0500", "msg_from": "Jim Penny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More CORBA and PostgreSQL]" } ]
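Jim's "agent managing a pool of forked backends" is basically the classic pre-fork server shape. The skeleton below is only meant to show how small that core is; everything real (authentication, handing the connection to an actual backend, and the affinity tricks he mentions) is omitted, and the port number is arbitrary.

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #define POOL_SIZE 8

    int main()
    {
        struct sockaddr_in addr;
        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        int i;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5433);
        bind(lsock, (struct sockaddr *) &addr, sizeof(addr));
        listen(lsock, 32);

        for (i = 0; i < POOL_SIZE; i++)
        {
            if (fork() == 0)            /* child: one pooled worker */
            {
                for (;;)
                {
                    int c = accept(lsock, NULL, NULL);

                    /* serve the whole session here, then loop; reusing
                     * the process is what avoids per-connection startup */
                    close(c);
                }
            }
        }
        for (;;)
            pause();                    /* parent just minds the pool */
    }

With that shape, the "user affinity" Jim suggests would just mean routing a connection to the child that last served that user's database, so its caches are still warm.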
[ { "msg_contents": "> > > What do the others say?\n> > >\n> > In my opinion a redo log is more than worth the overhead. If it is\n> > possible to keep those databases that aren't using the log from\n> creating\n> > the shmem and semaphores until logging is turned on for them, I'd\n> say\n> > you'd eliminated all of the possible arguments.\n> > -DEJ\n> \n> The semaphore set I would like to stay at least. Because it's\n> a way to make pg_dump capable of totally consistent online\n> backups.\n> \n> Let's say pg_dump first issues an\n> \n> ALTER DATABASE BEGIN BACKUP;\n> \n> This will return when the last write lock on the database got\n> released. It now dumps schema. During that phase, any query\n> from another backend will suspend as soon as it requests a\n> write lock. After finishing schema dump (including sequence\n> states - that's the reason for the exclusive backup phase),\n> pg_dump does an\n> \n> ALTER DATABASE ONLINE BACKUP;\n> \n> At this time, a logfile switch is done and pg_dump's backend\n> changes it's behavior so all subsequent queries will return\n> the data valid at the moment of the ONLINE BACKUP command.\n> Now all other backend's can freely modify the database and do\n> whatever they want and the suspended backends continue.\n> pg_dump will not see their changes.\n> \n> When pg_dump finishes, it does an\n> \n> ALTER DATABASE END BACKUP;\n> \n> This stores information about the last successful backup and\n> the first logfile sequence required to recover from this dump\n> - the sequence of the logfile began at ONLINE BACKUP. And it\n> turns back the special behaviour of pg_dump's backend. Last\n> action of pg_dump is to add this info to the dump.\n> \n> Wouldn't that all be really nice? Having a productional\n> database online, taking a full backup while the database is\n> beeing updated plus having a transaction log that could\n> recover a crash using that backup.\n> \n> The final redolog I'm planning will have more capabilities.\n> Point-in-time recovery and online recovery of another\n> database (on another system?) to have a second database in\n> sync and beeing able to switchover in a crash situation, not\n> requiring downtime for recovery.\n> \n> It's still a long way to there - I just made the first steps.\n> \n> \n> Jan\n> \nJan, you make me dreams come true. Keep the semaphores, how big are\nthey anyway?\n\t-DEJ\n", "msg_date": "Fri, 13 Nov 1998 16:33:20 -0600", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] shmem limits and redolog" } ]
[ { "msg_contents": "I've added a new utility called vacuumdb (a la \"createdb\", etc.) to make\nvacuum'ing operations a bit easier. It accepts a few command line\nswitches like \"-a\" and \"--analyze\". It's in the main tree, in the same\nplace as the other utilities (src/bin/vacuumdb/). If anyone finds it\ninteresting I could put it into the v6.4 tree also, so it would be\navailable for v6.4.1...\n\n - Tom\n", "msg_date": "Sat, 14 Nov 1998 02:09:55 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "New \"vacuumdb\" utility" } ]
[ { "msg_contents": "Tom Lane <[email protected]> writes:\n>(I assume CORBA has a recognized standard for the wire-level protocol?)\n\nIt's called the General Inter-ORB Protocol (GIOP). It's stream based.\nWhen the streams are TCP/IP sockets, it becomes the Internet Inter-ORB \nProtocol (GIOP, plus a specification for addresses, etc.).\n\n>I'm leery of this, not only because of the implementation problems other\n>people have mentioned (bringing the backend to a state where it is\n>thread-safe would be a large effort),\n\nIt may be possible just to put a master lock on the whole backend, so that\nonly one one thread was active in there at a time.\n\n>but because it subverts all the\n>protection and security reasons for having the Postgres frontend/backend\n>architecture in the first place. The *last* thing I'd want is client\n>code executing in the same process as the database server.\n\nIn the general case, I'd agree. However, for specific applications where\nyou want tight integration with a reasonably well-disciplined system,\nsuch as a language interpreter, it could be a huge performance win. This\nis sort of what Oracle did with their Java integration. They wrote a \ncommon-denominator VM, and implemented both PL/SQL, and Java on top of it,\nso that both languages could get \"raw\" access to the database tables.\n\n>However, if I understand things correctly, the CORBA interface will hide\n>whether client code is in the same process as the backend server or not;\n>so we could each assemble the parts in the way we prefer.\n\nThat is exactly correct.\n\n>Does CORBA have any provision for saying \"this object is not thread\n>safe, don't send it concurrent operations\"? If there's something along\n>that line, then maybe we don't have to fix the backend before it can\n>live in a common address space with the client...\n\nThat sort of question is resolved at the \"Object Adapter\" layer. The \nORBit \"Portable Object Adapter\" supports two thread policies, \"ORB_CTRL_MODEL\"\nand \"SINGLE_THREAD_MODEL\" (I don't know the specifics; I just pulled that\nout of the Interface Definition for the POA). The Postgres Object Adapter\ncould, presumably, do whatever it wanted in this regard.\n\nIn any case, the full text of the CORBA 2.x spec is here in all it's HTML\nglory:\n\n http://www.infosys.tuwien.ac.at/Research/Corba/OMG/corb2prf.htm\n\nI'm really impressed by the quality of the architecture design.\n\n\t-Michael Robinson\n\n", "msg_date": "Sat, 14 Nov 1998 10:59:38 +0800 (GMT)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" } ]
[ { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> GNOME then adopted ORBit, which has two wins: it's in C, and (this is\n>> the biggy) it has provisions to shortcut parameter marshalling,\n>\n>\tMy experience is that for pretty much every pro, there is a\n>con...what are we losing with ORBit that we'd have with mico? Short\n>and/or long term? mico is reputed to be Corba 2.2 compliant..orbit?\n\nThe big con for ORBit, short term:\n\n % cd /usr/src/gnome/ORBit-0.3.0\n % find . -name \"*.c\" -exec grep \"Not yet implemented\" {} \\; | wc\n 857 2613 30302\n\nThat's 857 items specified in the IDL for ORBit that are currently empty\nfunctions.\n\nI think long-term, with the financial backing of RedHat, and the key role\nit plays in the GNOME, I think ORBit will be the premier open-source ORB.\n\n\t-Michael\n\n", "msg_date": "Sat, 14 Nov 1998 11:08:52 +0800 (GMT)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Michael Robinson wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> >> GNOME then adopted ORBit, which has two wins: it's in C, and (this is\n> >> the biggy) it has provisions to shortcut parameter marshalling,\n> >\n> >\tMy experience is that for pretty much every pro, there is a\n> >con...what are we losing with ORBit that we'd have with mico? Short\n> >and/or long term? mico is reputed to be Corba 2.2 compliant..orbit?\n> \n> The big con for ORBit, short term:\n> \n> % cd /usr/src/gnome/ORBit-0.3.0\n> % find . -name \"*.c\" -exec grep \"Not yet implemented\" {} \\; | wc\n> 857 2613 30302\n> \n> That's 857 items specified in the IDL for ORBit that are currently empty\n> functions.\n> \n> I think long-term, with the financial backing of RedHat, and the key role\n> it plays in the GNOME, I think ORBit will be the premier open-source ORB.\n\n\tCan we do the first implemntation using MICO and convert over\nlater, if the need arises? Or are those 857 items not important?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 13 Nov 1998 23:29:11 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, Nov 14, 1998 at 11:08:52AM +0800, Michael Robinson wrote:\n> I think long-term, with the financial backing of RedHat, and the key role\n> it plays in the GNOME, I think ORBit will be the premier open-source ORB.\n\nCouldn't agree more.\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Sat, 14 Nov 1998 14:20:59 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Fri, Nov 13, 1998 at 11:29:11PM -0400, The Hermit Hacker wrote:\n> \tCan we do the first implemntation using MICO and convert over\n> later, if the need arises? Or are those 857 items not important?\n\nI think so. After all that's what GNOME did too.\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! 
Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Sat, 14 Nov 1998 14:21:29 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" } ]
[ { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>Current FE/BE protocol seems pretty optimized to me, but you should know\n>the best. Seems like a waste to try and get it to match some standard,\n\nWell, I suppose you could make the same argument about SQL-92 compliance.\n\nStandards compliance is useful because it plugs you into the positive-\nfeedback cycle of product development. Your product works with more \nproducts, more people use your product, more products are developed which\nwork with your product, etc.\n\n\t-Michael Robinson\n\n", "msg_date": "Sat, 14 Nov 1998 13:29:36 +0800 (GMT)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >Current FE/BE protocol seems pretty optimized to me, but you should know\n> >the best. Seems like a waste to try and get it to match some standard,\n> \n> Well, I suppose you could make the same argument about SQL-92 compliance.\n> \n> Standards compliance is useful because it plugs you into the positive-\n> feedback cycle of product development. Your product works with more \n> products, more people use your product, more products are developed which\n> work with your product, etc.\n\nNot really the same. There are end-user visible changes, and\ndeveloper-visible changes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 14 Nov 1998 02:35:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > >Current FE/BE protocol seems pretty optimized to me, but you should know\n> > >the best. Seems like a waste to try and get it to match some standard,\n> > \n> > Well, I suppose you could make the same argument about SQL-92 compliance.\n> > \n> > Standards compliance is useful because it plugs you into the positive-\n> > feedback cycle of product development. Your product works with more \n> > products, more people use your product, more products are developed which\n> > work with your product, etc.\n> \n> Not really the same. There are end-user visible changes, and\n> developer-visible changes.\n\n\tFrom how things sound, I think that implementing Corba will be the\ntrigger for 7.0...:)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 10:24:50 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> On Sat, 14 Nov 1998, Bruce Momjian wrote:\n> \n> > > Bruce Momjian <[email protected]> writes:\n> > > >Current FE/BE protocol seems pretty optimized to me, but you should know\n> > > >the best. Seems like a waste to try and get it to match some standard,\n> > > \n> > > Well, I suppose you could make the same argument about SQL-92 compliance.\n> > > \n> > > Standards compliance is useful because it plugs you into the positive-\n> > > feedback cycle of product development. 
Your product works with more \n> > > products, more people use your product, more products are developed which\n> > > work with your product, etc.\n> > \n> > Not really the same. There are end-user visible changes, and\n> > developer-visible changes.\n> \n> \tFrom how things sound, I think that implementing Corba will be the\n> trigger for 7.0...:)\n\nYep.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 14 Nov 1998 10:43:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> \tFrom how things sound, I think that implementing Corba will be the\n> trigger for 7.0...:)\n\nYeah, I was going to say the same. If we really take this idea to heart\nit would be a *big* change in the way the system is perceived and used,\neven though old apps should continue to work fine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Nov 1998 11:41:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA " }, { "msg_contents": "> The Hermit Hacker <[email protected]> writes:\n> > \tFrom how things sound, I think that implementing Corba will be the\n> > trigger for 7.0...:)\n> \n> Yeah, I was going to say the same. If we really take this idea to heart\n> it would be a *big* change in the way the system is perceived and used,\n> even though old apps should continue to work fine.\n\nYes, I think you are correct. Would be very interesting. We could even\ncall it protocol X, and continue to use the original protocol too, if\nthere was some advantage to doing it that way.\n\nOur code is clean and modular, so such a change is very possible. It is\nnot like some function in the executor starts shooting data directly to\nthe client.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 14 Nov 1998 12:23:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, Nov 14, 1998 at 10:24:50AM -0400, The Hermit Hacker wrote:\n> \tFrom how things sound, I think that implementing Corba will be the\n> trigger for 7.0...:)\n\nAgreed. But then we have to/can start working towards it as soon as time\npermits.\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Sat, 14 Nov 1998 19:55:09 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, Nov 14, 1998 at 10:24:50AM -0400, The Hermit Hacker wrote:\n> \tFrom how things sound, I think that implementing Corba will be the\n> trigger for 7.0...:)\n\nForgot that one: When do we want to finish 7.0?\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 
61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Sat, 14 Nov 1998 19:55:37 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Michael Meskes wrote:\n\n> On Sat, Nov 14, 1998 at 10:24:50AM -0400, The Hermit Hacker wrote:\n> > \tFrom how things sound, I think that implementing Corba will be the\n> > trigger for 7.0...:)\n> \n> Agreed. But then we have to/can start working towards it as soon as time\n> permits.\n\n\tas far as I'm concerned, there is nothing stopping ppl from working\non this now, if they want, and submitting patches that are wrapped in\n#ifdef/#else/#endif code...\n\n\tWould it be of value to those wishing to work on this to have a\nlist separate from -hackers for this?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 21:59:07 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, 14 Nov 1998, Michael Meskes wrote:\n\n> On Sat, Nov 14, 1998 at 10:24:50AM -0400, The Hermit Hacker wrote:\n> > \tFrom how things sound, I think that implementing Corba will be the\n> > trigger for 7.0...:)\n> \n> Forgot that one: When do we want to finish 7.0?\n\n\tWhen Corba is ready? :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 21:59:23 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "> \tWould it be of value to those wishing to work on this to have a\n> list separate from -hackers for this?\n\nYes! Then we can stop spamming you all with this stuff.\n\nTaral\n", "msg_date": "Sat, 14 Nov 1998 22:30:16 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > \tWould it be of value to those wishing to work on this to have a\n> > list separate from -hackers for this?\n> \n> Yes! Then we can stop spamming you all with this stuff.\n\nSeems like the current 'interfaces' list would be a very good place. \nNot used very much either, and seems logical.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 15 Nov 1998 00:21:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sun, 15 Nov 1998, Bruce Momjian wrote:\n\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > \tWould it be of value to those wishing to work on this to have a\n> > > list separate from -hackers for this?\n> > \n> > Yes! Then we can stop spamming you all with this stuff.\n> \n> Seems like the current 'interfaces' list would be a very good place. \n> Not used very much either, and seems logical.\n\n\tThat works too...let's move it over there :)\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Nov 1998 01:38:03 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sat, Nov 14, 1998 at 09:59:23PM -0400, The Hermit Hacker wrote:\n> > Forgot that one: When do we want to finish 7.0?\n> \n> \tWhen Corba is ready? :)\n\nSounds good. :-)\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Mon, 16 Nov 1998 12:31:51 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" } ]
[ { "msg_contents": "Today I discovered an astonishing thing !!!\n\nIf I declare a rule on a table, PgAccess will not show that table in the\n\"Tables\" panel, but in \"Views\" panel !!!!!\nEven psql, at \\dt is showing that my table with rules is a \"view?\" !!!\n:-)\n\nI remember that one year ago, when I started PgAccess development, I\ndiscovered that the only thing that shows me that the object is a view\nis the hasrules column.\n\nToday, it seems that I did a HUUUUUGE mistake !\n\nWhat should I do ? Should I remove completely the \"Views\" panel from\nPgAccess ? Doing so, opening a real view in table browser window, that\nwill allow inserting and updating in a view! What would worth this ? Is\nthere any application where inserting in views should help something ?\n\nAny other way to detect \"real views\" from \"real tables with rules\ndefined on them\" ?\n\nPlease CC: me directly to [email protected]\n\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Sat, 14 Nov 1998 11:07:59 +0200", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "How to detect views objects ?" } ]
[ { "msg_contents": "I have created a table , let's say \"people\" that has a column called\n\"id\".\nI declared this column as \"serial\", that means unique, sequential\nassigned values.\n\nI have created another table, \"students\", that inherits the columns from\n\"people\" and has other fields, let's say \"university\".\n\nThe \"students\" table inherits the sequential attribute of it's parent\ntable, \"people\", but lacks the attribute of unique field.\n\nI realise that a unique key index is builded and maintained at table\ninstance level, but this case seems to me that it's a \"half-inheritance\nbehaviour\". How do you feel about it ? From my point of view, a \"unique\"\nattribute of a field declared at parent level should be inherited\nthrough all instances of that table.\n\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Sat, 14 Nov 1998 19:40:32 +0200", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "A question of object inheritance behaviour" } ]
[ { "msg_contents": "Do any of you sit on IRC somewhere? Sometimes I feel like the discussions\nwould be smoother/faster with real-time... I'm on efnet all the time (as\nTaral, of course :)\n\nTaral\n\n", "msg_date": "Sat, 14 Nov 1998 21:26:26 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": true, "msg_subject": "Communication" }, { "msg_contents": "On Sat, 14 Nov 1998, Taral wrote:\n\n> Do any of you sit on IRC somewhere? Sometimes I feel like the discussions\n> would be smoother/faster with real-time... I'm on efnet all the time (as\n> Taral, of course :)\n\nD'Arcy and myself are almost always on #PostgreSQL, even if only\nphysically...alot of the time, Bruce is there too...and I've seen Tom Lane\ncome through time and again...I dont' recall ever seeing thomas lockhart\nin there, but short term memory is shot :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 14 Nov 1998 23:37:20 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Communication" }, { "msg_contents": "\nOn 15-Nov-98 The Hermit Hacker wrote:\n> On Sat, 14 Nov 1998, Taral wrote:\n> \n>> Do any of you sit on IRC somewhere? Sometimes I feel like the discussions\n>> would be smoother/faster with real-time... I'm on efnet all the time (as\n>> Taral, of course :)\n> \n> D'Arcy and myself are almost always on #PostgreSQL, even if only\n> physically...alot of the time, Bruce is there too...and I've seen Tom Lane\n> come through time and again...I dont' recall ever seeing thomas lockhart\n> in there, but short term memory is shot :)\n\nWhat network? Efnet?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n", "msg_date": "Sat, 14 Nov 1998 23:49:18 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Communication" }, { "msg_contents": "On Sat, 14 Nov 1998, Vince Vielhaber wrote:\n\n> \n> On 15-Nov-98 The Hermit Hacker wrote:\n> > On Sat, 14 Nov 1998, Taral wrote:\n> > \n> >> Do any of you sit on IRC somewhere? Sometimes I feel like the discussions\n> >> would be smoother/faster with real-time... I'm on efnet all the time (as\n> >> Taral, of course :)\n> > \n> > D'Arcy and myself are almost always on #PostgreSQL, even if only\n> > physically...alot of the time, Bruce is there too...and I've seen Tom Lane\n> > come through time and again...I dont' recall ever seeing thomas lockhart\n> > in there, but short term memory is shot :)\n> \n> What network? Efnet?\n\n\tOf course, there isn't another...is there? *grin*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Nov 1998 01:12:49 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Communication" }, { "msg_contents": "> On Sat, 14 Nov 1998, Taral wrote:\n> \n> > Do any of you sit on IRC somewhere? 
Sometimes I feel like the discussions\n> > would be smoother/faster with real-time... I'm on efnet all the time (as\n> > Taral, of course :)\n> \n> D'Arcy and myself are almost always on #PostgreSQL, even if only\n> physically...alot of the time, Bruce is there too...and I've seen Tom Lane\n> come through time and again...I dont' recall ever seeing thomas lockhart\n> in there, but short term memory is shot :)\n\nTom was there a few times when we needed to discuss some release issues.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 15 Nov 1998 00:19:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Communication" }, { "msg_contents": "\nOn 15-Nov-98 The Hermit Hacker wrote:\n> On Sat, 14 Nov 1998, Vince Vielhaber wrote:\n> \n>> \n>> On 15-Nov-98 The Hermit Hacker wrote:\n>> > On Sat, 14 Nov 1998, Taral wrote:\n>> > \n>> >> Do any of you sit on IRC somewhere? Sometimes I feel like the\n>> >> discussions\n>> >> would be smoother/faster with real-time... I'm on efnet all the time\n>> >> (as\n>> >> Taral, of course :)\n>> > \n>> > D'Arcy and myself are almost always on #PostgreSQL, even if only\n>> > physically...alot of the time, Bruce is there too...and I've seen Tom\n>> > Lane\n>> > come through time and again...I dont' recall ever seeing thomas lockhart\n>> > in there, but short term memory is shot :)\n>> \n>> What network? Efnet?\n> \n> Of course, there isn't another...is there? *grin*\n\nDepends who ya ask, I guesses. That'll make three for me! Has anyone\ntried kirc for a client or is there a better one I haven't found yet?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n", "msg_date": "Sun, 15 Nov 1998 08:54:13 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Communication" }, { "msg_contents": "On Sun, 15 Nov 1998, Vince Vielhaber wrote:\n\n> Depends who ya ask, I guesses. That'll make three for me! Has anyone\n> tried kirc for a client or is there a better one I haven't found yet?\n\n\tI use straight IRC with the phoenix filter...been using it pretty\nmuch since IRC was birthed way way back in the early 90s...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Nov 1998 13:05:20 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Communication" }, { "msg_contents": "Thus spake Bruce Momjian\n> > On Sat, 14 Nov 1998, Taral wrote:\n> > > Do any of you sit on IRC somewhere? 
Sometimes I feel like the discussions\n> > D'Arcy and myself are almost always on #PostgreSQL, even if only\n> > physically...alot of the time, Bruce is there too...and I've seen Tom Lane\n> > come through time and again...I dont' recall ever seeing thomas lockhart\n> > in there, but short term memory is shot :)\n> \n> Tom was there a few times when we needed to discuss some release issues.\n\nI don't think we would have got the inet/cidr stuff into 6.4 if we weren't\nbanging our heads together in real time in there.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 15 Nov 1998 13:03:12 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Communication" } ]
[ { "msg_contents": "Since I don't know exactly who else is working on this CORBA thing (maybe we\nshould make a list?), I'll send this here.\n\nThe OMG publishes extensive information about CORBA on their web site:\nhttp://www.omg.org/corba/\n\nThey have a beginners section (which I think EVERYONE should read) at\nhttp://www.omg.org/corba/beginners.html\nIt details things like \"What is CORBA?\" which not all of us may be certain\nabout :)\n\nAnyway, HTH.\n\nTaral\n\n", "msg_date": "Sat, 14 Nov 1998 22:20:26 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": true, "msg_subject": "CORBA information" } ]
[ { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> Yea, that would be neat. But considering no one really totally supports\n>> CORBA yet, and we already have tons of working interfaces, perhaps we\n>> can consider it in the future, or were you thinking in the next 6-9\n>> months?\n>\n>\tGuess that's the next question (vs statement)...who actually\n>supports Corba at this time? two, off the top of my head, are Gnome and\n>Koffice...anyone know of a list of others?\n\n\thttp://www.corba.org/vendors/index.html\n\n>\tAs for 6-9 months...I think this is more in Michael court then\n>anything...I don't see why work can't start on it now, even if its nothing\n>more then Michael submitting patches that have the relevant sections\n>#ifdef's so that they are only enabled for those working on it. I don't\n>imagine this is going to be a \"now it isn't, now it is\" sort of thing...it\n>might take 6-9 months to implement...\n\nThis is my plan:\n\t1. Wrap the current libpq API in CORBA, as a proof of concept\n\t2. Implement a static row-level interface, which maps PostgreSQL\n\t types to CORBA types\n\t3. Design a fully dynamic interface, complete with Interface\n\t Repository integration with the PostgreSQL type system\n\t4. Implement the design\n\nNumber one shouldn't take very long (a few weeks, once I get the whole\nCORBA development thing sorted out). Two shouldn't take much longer.\nThree and four is anybody's guess, and, as I mentioned earlier, four\ndepends on currently unimplemented sections of ORBit.\n\n\t-Michael\n\n", "msg_date": "Sun, 15 Nov 1998 12:24:19 +0800 (GMT)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "On Sun, 15 Nov 1998, Michael Robinson wrote:\n\n> This is my plan:\n> \t1. Wrap the current libpq API in CORBA, as a proof of concept\n> \t2. Implement a static row-level interface, which maps PostgreSQL\n> \t types to CORBA types\n> \t3. Design a fully dynamic interface, complete with Interface\n> \t Repository integration with the PostgreSQL type system\n> \t4. Implement the design\n> \n> Number one shouldn't take very long (a few weeks, once I get the whole\n> CORBA development thing sorted out). Two shouldn't take much longer.\n> Three and four is anybody's guess, and, as I mentioned earlier, four\n> depends on currently unimplemented sections of ORBit.\n\n\tIf this is to be implemented, please do *not* make it dependent on\nany one version of Corba...IMHO, that is like telling everyone they must\ninstalled GCC 2.8.1 as their compiler to compile PostgreSQL...it takes the\nchoice out of ppls hands and forces them to support one implementation.\n\n\tIf it takes using Mico, at this point in time, to implement this\n*properly*, so be it...but however it is implemented, it has to be done in\nsuch a way that it is as generic as possible so that the choice of what\nCORBA implementation is left up to the person running the end-system, not\nthe developer because he had a bias/preference.\n\n\tAnd, ya, I currently have a bias, and that is towards MICO, since,\nby your own admission, ORBit's only supports a sub-set of CORBA...there is\nabsolutely no reason why our CORBA implementation should be tied to the\ndevelopment pace of one particular implementation...\n\n Marc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Nov 1998 00:51:05 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" }, { "msg_contents": "Ok, moved to interfaces as suggested\n\nMaybe it would be a good idea to strip CC:s ?\n\nMichael Robinson wrote:\n> \n> \n> > As for 6-9 months...I think this is more in Michael court then\n> >anything...I don't see why work can't start on it now, even if its nothing\n> >more then Michael submitting patches that have the relevant sections\n> >#ifdef's so that they are only enabled for those working on it. I don't\n> >imagine this is going to be a \"now it isn't, now it is\" sort of thing...it\n> >might take 6-9 months to implement...\n> \n> This is my plan:\n> 1. Wrap the current libpq API in CORBA, as a proof of concept\n\nAdd 1.A. here: extend current FE<->BE interface to support prepared \nstatements, at least to the level of SPI, maybe even _start_ from\n\"extended\" \nSPI CORBA interface. \"Extended\" because SPI had same deficiences as \nwell compared to the \"main\" FE<->BE protocol.\n\n> 2. Implement a static row-level interface, which maps PostgreSQL\n> types to CORBA types\n\n2.A. Define a standard way for CORBA'fying the postgresql \nuser-defined types\n\n> 3. Design a fully dynamic interface, complete with Interface\n> Repository integration with the PostgreSQL type system\n> 4. Implement the design\n>\n> Number one shouldn't take very long (a few weeks, once I get the whole\n> CORBA development thing sorted out). Two shouldn't take much longer.\n> Three and four is anybody's guess, and, as I mentioned earlier, four\n> depends on currently unimplemented sections of ORBit.\n\nI think, that we should start with an ORB that has C-interface \n'cause the rest of PostgreSQL is in C. \n\nThe time will probably be better spent on implementing the \nunimplemented sections rather than writing a C to C++ wrapper \n(the part which should in theory be made unneccesary by CORBA !)\n\nSo supporting other language IDL mappings\n(C++,java,python,ada,smalltalk,...) \ndirectly in backend seems not to be a very good idea\n\nOf course we should support all C mappings.\n\n------------------\nHannu\n", "msg_date": "Sun, 15 Nov 1998 16:06:23 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" } ]
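
For concreteness, here is roughly what step 1 of the plan above -- wrapping the current libpq API -- could look like on the servant side. This is a sketch only: the Query_exec entry point and its signature are invented stand-ins for whatever the chosen ORB's IDL compiler would actually generate (deliberately not tied to ORBit or MICO); only the libpq calls are real. Steps 2-4 would replace this string-in/string-out shape with typed structures derived from the PostgreSQL type system.

#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

/* Sketch of a servant function for a hypothetical IDL interface:
 *     interface Query { string exec(in string sql); };
 * The IDL-generated glue is imagined; the libpq calls are real.
 * Caller is responsible for freeing the returned string.        */
char *
Query_exec(PGconn *conn, const char *sql)
{
	PGresult   *res = PQexec(conn, sql);
	char	   *answer;

	if (res == NULL || PQresultStatus(res) != PGRES_TUPLES_OK)
	{
		answer = strdup(PQerrorMessage(conn));	/* a real servant would
							 * raise a CORBA exception */
		if (res)
			PQclear(res);
		return answer;
	}
	/* Proof-of-concept only: return the first field of the first row.
	 * A real mapping would marshal the whole result set.             */
	answer = strdup(PQntuples(res) > 0 ? PQgetvalue(res, 0, 0) : "");
	PQclear(res);
	return answer;
}
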
[ { "msg_contents": "Hi all\n\nMinor detail, but when I did 'pg_dump -z -f dump.file dbname' and then\nwent to restore it, I found that the grant statments are like:\nGRANT ALL on \"tablename\" to \"tablename\";\ninstead of\nGRANT ALL on \"tablename\" to \"username\";\n\nI used vim to edit the dump file, so am ok.\nJust thought you might want to know.\nOh, ya, that is the release version 6.4, just downloaded this afternoon.\n\nHave a great night\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Sat, 14 Nov 1998 23:37:33 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "BIG grant problem" }, { "msg_contents": "> Hi all\n> \n> Minor detail, but when I did 'pg_dump -z -f dump.file dbname' and then\n> went to restore it, I found that the grant statments are like:\n> GRANT ALL on \"tablename\" to \"tablename\";\n> instead of\n> GRANT ALL on \"tablename\" to \"username\";\n> \n> I used vim to edit the dump file, so am ok.\n> Just thought you might want to know.\n> Oh, ya, that is the release version 6.4, just downloaded this afternoon.\n\nYikes, confirmed here. We need to know how this got into the tree\nwithout showing up on our tests, are there any more pg_dump bugs we have\nnot found, and a fix.\n\nThis is our first major 6.4 bug, aside from large objects! \nCongradulations.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 15 Nov 1998 00:25:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BIG grant problem" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Terry Mackintosh wrote:\n>> Minor detail, but when I did 'pg_dump -z -f dump.file dbname' and then\n>> went to restore it, I found that the grant statments are like:\n>> GRANT ALL on \"tablename\" to \"tablename\";\n>> instead of\n>> GRANT ALL on \"tablename\" to \"username\";\n\n> Yikes, confirmed here. We need to know how this got into the tree\n> without showing up on our tests,\n\nWell, that's easy --- there are no regression tests that test pg_dump\nat all, nor any that test multiple table owners and permissions.\n\nFWIW, pg_dump -z works correctly for GRANT to PUBLIC --- otherwise\nI would've noticed some time ago. But I hadn't had occasion\nto check granting permission to specific users :-( ... and I don't\nthink most of the rest of the developers work with databases that\neven have multiple users, let alone put access restrictions on\nindividual tables.\n\nIt's certainly true that pg_dump is pretty weak in the area of\ntable ownerships and permissions. We have fixed several bugs\nin that area since 6.3.2, and I'm not particularly surprised to\nhear of another one. We need someone who actually has occasion\nto work with access-restricted databases to pound on pg_dump for\na while and flush out the bugs. 
(Terry, can you volunteer?)\nI don't think the bugs will be hard to fix, it's just a matter\nof not having done enough testing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Nov 1998 01:31:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BIG grant problem " }, { "msg_contents": "I said:\n> I don't think the bugs will be hard to fix,\n\nAnd, indeed, this bug is pretty trivial. Fix will be applied\nmomentarily.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Nov 1998 01:43:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BIG grant problem " }, { "msg_contents": "Hi\n\nOn Sun, 15 Nov 1998, Tom Lane wrote:\n\n> hear of another one. We need someone who actually has occasion\n> to work with access-restricted databases to pound on pg_dump for\n> a while and flush out the bugs. (Terry, can you volunteer?)\n\nSure.\n\n> I don't think the bugs will be hard to fix, it's just a matter\n> of not having done enough testing.\n\nI started to look at it last night, found where the line is created in the\npg_dump.c file, looks ok there, so the problem is a little deeper then\njust an easy typo.\n\nShould be able to look at it today.\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Sun, 15 Nov 1998 08:13:29 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] BIG grant problem " }, { "msg_contents": "Hi Tom\n\nOn Sun, 15 Nov 1998, Tom Lane wrote:\n\n> I said:\n> > I don't think the bugs will be hard to fix,\n> \n> And, indeed, this bug is pretty trivial. Fix will be applied\n> momentarily.\n> \n> \t\t\tregards, tom lane\n\nOh good, will there be a loose patch? so I and others don't have to\ndownload the whole source again?\n\nThanks\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Sun, 15 Nov 1998 08:19:43 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] BIG grant problem " }, { "msg_contents": "> Hi Tom\n> \n> On Sun, 15 Nov 1998, Tom Lane wrote:\n> \n> > I said:\n> > > I don't think the bugs will be hard to fix,\n> > \n> > And, indeed, this bug is pretty trivial. Fix will be applied\n> > momentarily.\n> > \n> > \t\t\tregards, tom lane\n> \n> Oh good, will there be a loose patch? so I and others don't have to\n> download the whole source again?\n> \n\nI think I can say it will be in 6.4,1, which should be released in a\nweek or so. Hopefully, we can generate a 6.4.1, and a 6.4_to_6.4.1\npatch.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 15 Nov 1998 11:34:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BIG grant problem" }, { "msg_contents": "Terry Mackintosh <[email protected]> writes:\n> Oh good, will there be a loose patch? so I and others don't have to\n> download the whole source again?\n\nIf you need it here's the patch. (I already checked this into both\n6.4 and 6.5 trees.) I found a second place that had the same problem,\nbtw. Calling fmtID() twice in one expression is no good 'cuz it uses\na static result area...\n\n\t\t\tregards, tom lane\n\n\n\n*** pg_dump.c.orig\tFri Nov 6 10:56:42 1998\n--- pg_dump.c\tSun Nov 15 02:11:29 1998\n***************\n*** 2563,2577 ****\n \t{\n \t\tif (ACLlist[k].privledges != (char *) NULL)\n \t\t{\n \t\t\tif (ACLlist[k].user == (char *) NULL)\n! \t\t\t\tfprintf(fout,\n! \t\t\t\t\t\t\"GRANT %s on %s to PUBLIC;\\n\",\n! \t\t\t\t\t\tACLlist[k].privledges, fmtId(tbinfo.relname));\n \t\t\telse\n! \t\t\t\tfprintf(fout,\n! \t\t\t\t\t\t\"GRANT %s on %s to %s;\\n\",\n! \t\t\t\t\t\tACLlist[k].privledges, fmtId(tbinfo.relname),\n! \t\t\t\t\t\tfmtId(ACLlist[k].user));\n \t\t}\n \t}\n }\n--- 2563,2578 ----\n \t{\n \t\tif (ACLlist[k].privledges != (char *) NULL)\n \t\t{\n+ \t\t\t/* If you change this code, bear in mind fmtId() can be\n+ \t\t\t * used only once per printf() call...\n+ \t\t\t */\n+ \t\t\tfprintf(fout,\n+ \t\t\t\t\t\"GRANT %s on %s to \",\n+ \t\t\t\t\tACLlist[k].privledges, fmtId(tbinfo.relname));\n \t\t\tif (ACLlist[k].user == (char *) NULL)\n! \t\t\t\tfprintf(fout, \"PUBLIC;\\n\");\n \t\t\telse\n! \t\t\t\tfprintf(fout, \"%s;\\n\", fmtId(ACLlist[k].user));\n \t\t}\n \t}\n }\n***************\n*** 2851,2873 ****\n \n \t\t\tstrcpy(id1, fmtId(indinfo[i].indexrelname));\n \t\t\tstrcpy(id2, fmtId(indinfo[i].indrelname));\n! \t\t\tsprintf(q, \"CREATE %s INDEX %s on %s using %s (\",\n \t\t\t (strcmp(indinfo[i].indisunique, \"t\") == 0) ? \"UNIQUE\" : \"\",\n \t\t\t\t\tid1,\n \t\t\t\t\tid2,\n \t\t\t\t\tindinfo[i].indamname);\n \t\t\tif (funcname)\n \t\t\t{\n! \t\t\t\tsprintf(q, \"%s %s (%s) %s );\\n\",\n! \t\t\t\t\t\tq, fmtId(funcname), attlist, fmtId(classname[0]));\n \t\t\t\tfree(funcname);\n \t\t\t\tfree(classname[0]);\n \t\t\t}\n \t\t\telse\n! \t\t\t\tsprintf(q, \"%s %s );\\n\",\n! \t\t\t\t\t\tq, attlist);\n! \n! \t\t\tfputs(q, fout);\n \t\t}\n \t}\n \n--- 2852,2872 ----\n \n \t\t\tstrcpy(id1, fmtId(indinfo[i].indexrelname));\n \t\t\tstrcpy(id2, fmtId(indinfo[i].indrelname));\n! \t\t\tfprintf(fout, \"CREATE %s INDEX %s on %s using %s (\",\n \t\t\t (strcmp(indinfo[i].indisunique, \"t\") == 0) ? \"UNIQUE\" : \"\",\n \t\t\t\t\tid1,\n \t\t\t\t\tid2,\n \t\t\t\t\tindinfo[i].indamname);\n \t\t\tif (funcname)\n \t\t\t{\n! \t\t\t\t/* need 2 printf's here cuz fmtId has static return area */\n! \t\t\t\tfprintf(fout, \" %s\", fmtId(funcname));\n! \t\t\t\tfprintf(fout, \" (%s) %s );\\n\", attlist, fmtId(classname[0]));\n \t\t\t\tfree(funcname);\n \t\t\t\tfree(classname[0]);\n \t\t\t}\n \t\t\telse\n! \t\t\t\tfprintf(fout, \" %s );\\n\", attlist);\n \t\t}\n \t}\n \n", "msg_date": "Sun, 15 Nov 1998 17:08:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BIG grant problem " } ]
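
The root cause Tom fixed above is easy to reproduce outside pg_dump. The sketch below is not the pg_dump source; it only demonstrates the pitfall: a function returning a pointer to a static buffer gets called twice before fprintf() ever reads either argument, so both %s arguments end up pointing at the same, last-written string. Which identifier gets doubled depends on the compiler's argument evaluation order; on the reporter's platform it was the table name.

#include <stdio.h>

/* fmt_id() mimics the shape of pg_dump's fmtId(): it returns a
 * pointer into a static buffer, so only the most recent result
 * is ever visible.  Minimal demonstration, not the real code.  */
static char *
fmt_id(const char *raw)
{
	static char buf[64];

	sprintf(buf, "\"%.60s\"", raw);
	return buf;
}

int
main(void)
{
	/* Broken: both arguments point at the same static buffer,
	 * producing e.g. GRANT ALL on "tablename" to "tablename";  */
	printf("GRANT ALL on %s to %s;\n", fmt_id("tablename"), fmt_id("username"));

	/* Fixed, as in the patch: one fmt_id() call per printf().  */
	printf("GRANT ALL on %s to ", fmt_id("tablename"));
	printf("%s;\n", fmt_id("username"));
	return 0;
}
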
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nThis is a multipart MIME message.\n\n- --==_Exmh_18043825520\nContent-Type: text/plain; charset=us-ascii\n\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t: Billy G. Allie\t\nYour email address\t: [email protected]\n\n\nSystem Configuration\n- ---------------------\n Architecture (example: Intel Pentium) \t: Intel 486DX2\n\n Operating System (example: Linux 2.0.26 ELF) \t: UnixWare 7.0\n\n PostgreSQL version (example: PostgreSQL-6.4) : PostgreSQL-6.4\n\n Compiler used (example: gcc 2.8.0)\t\t: Optimizing C Compilation System \n(CCS) 3.2 08/18/98 (u701)\n\nPlease enter a FULL description of your problem:\n- ------------------------------------------------\nThere are a number of problems with using mixed case table names:\n\n1. Using constraints on tables whose name contains mixed case will fail.\n\n2. Creating triggers on tables whose name contains mixed case will fail.\n\n3. In pgsql, the command '\\d *' will fail.\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n- ----------------------------------------------------------------------\n1. Create a table that has a mixed case name and a constraint such as\n a default or a primary key.\n\n2. Create a trigger on a table with a mixed case name.\n\n3. In psql, execute the '\\d *' command in a database that has tables\n mixed case names.\n\nIf you know how this problem might be fixed, list the solution below:\n- ---------------------------------------------------------------------\nThe following patch file fixes problems 1 and 3. Thomas G. Lockhart's\npatch to fix problem 1 was incomplete. 
I added the additional changes\nto 'heap.c' that were needed to complete the fix.\n\n\n\n- --==_Exmh_18043825520\nContent-Type: application/x-patch ; name=\"uw7-0.patch\"\nContent-Description: uw7-0.patch\nContent-Disposition: attachment; filename=\"uw7-0.patch\"\n\n*** src/backend/catalog/heap.c.orig\tSat Nov 14 22:20:46 1998\n--- src/backend/catalog/heap.c\tSat Nov 14 22:25:27 1998\n***************\n*** 1444,1450 ****\n  \textern GlobalMemory CacheCxt;\n  \n  start:;\n! \tsprintf(str, \"select %s%s from %.*s\", attrdef->adsrc, cast,\n  \t\t\tNAMEDATALEN, rel->rd_rel->relname.data);\n  \tsetheapoverride(true);\n  \tplanTree_list = (List *) pg_parse_and_plan(str, NULL, 0, &queryTree_list, None, FALSE);\n--- 1444,1453 ----\n  \textern GlobalMemory CacheCxt;\n  \n  start:;\n! \t/* Surround table name with double quotes to allow mixed-case and\n! \t * whitespaces in names. - BGA 1998-11-14\n! \t */\n! \tsprintf(str, \"select %s%s from \\\"%.*s\\\"\", attrdef->adsrc, cast,\n  \t\t\tNAMEDATALEN, rel->rd_rel->relname.data);\n  \tsetheapoverride(true);\n  \tplanTree_list = (List *) pg_parse_and_plan(str, NULL, 0, &queryTree_list, None, FALSE);\n***************\n*** 1515,1521 ****\n  \tchar\t\tnulls[4] = {' ', ' ', ' ', ' '};\n  \textern GlobalMemory CacheCxt;\n  \n! \tsprintf(str, \"select 1 from %.*s where %s\",\n  \t\t\tNAMEDATALEN, rel->rd_rel->relname.data, check->ccsrc);\n  \tsetheapoverride(true);\n  \tplanTree_list = (List *) pg_parse_and_plan(str, NULL, 0, &queryTree_list, None, FALSE);\n--- 1518,1527 ----\n  \tchar\t\tnulls[4] = {' ', ' ', ' ', ' '};\n  \textern GlobalMemory CacheCxt;\n  \n! \t/* Check for table's existance. Surround table name with double-quotes\n! \t * to allow mixed-case and whitespace names. - thomas 1998-11-12\n! \t */\n! \tsprintf(str, \"select 1 from \\\"%.*s\\\" where %s\",\n  \t\t\tNAMEDATALEN, rel->rd_rel->relname.data, check->ccsrc);\n  \tsetheapoverride(true);\n  \tplanTree_list = (List *) pg_parse_and_plan(str, NULL, 0, &queryTree_list, None, FALSE);\n*** src/bin/psql/psql.c.orig\tSun Nov 15 00:56:34 1998\n--- src/bin/psql/psql.c\tSun Nov 15 01:32:54 1998\n***************\n*** 460,471 ****\n  \t\t\t\tperror(\"malloc\");\n  \n  \t\t\t/* load table table */\n  \t\t\tfor (i = 0; i < nColumns; i++)\n  \t\t\t{\n! \t\t\t\ttable[i] = (char *) malloc(PQgetlength(res, i, 1) * sizeof(char) + 1);\n  \t\t\t\tif (table[i] == NULL)\n  \t\t\t\t\tperror(\"malloc\");\n! \t\t\t\tstrcpy(table[i], PQgetvalue(res, i, 1));\n  \t\t\t}\n  \n  \t\t\tPQclear(res);\n--- 460,476 ----\n  \t\t\t\tperror(\"malloc\");\n  \n  \t\t\t/* load table table */\n+ \t\t\t/* Put double quotes around the table name to allow for mixed-case\n+ \t\t\t * and whitespaces in the table name. - BGA 1998-11-14\n+ \t\t\t */\n  \t\t\tfor (i = 0; i < nColumns; i++)\n  \t\t\t{\n! \t\t\t\ttable[i] = (char *) malloc(PQgetlength(res, i, 1) * sizeof(char) + 3);\n  \t\t\t\tif (table[i] == NULL)\n  \t\t\t\t\tperror(\"malloc\");\n! \t\t\t\tstrcpy(table[i], \"\\\"\");\n! \t\t\t\tstrcat(table[i], PQgetvalue(res, i, 1));\n! \t\t\t\tstrcat(table[i], \"\\\"\");\n  \t\t\t}\n  \n  \t\t\tPQclear(res);\n\n- --==_Exmh_18043825520\nContent-Type: text/plain; charset=us-ascii\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 | \n\n- --==_Exmh_18043825520--\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: PGPfreeware 5.0i for non-commercial use\nCharset: noconv\n\niQA/AwUBNk57LqFebSRz8o+3EQL40QCfUplxf/CSLceopBQlLXUb+v+cMaoAoNW3\nhA5qlmdH9XtfGsTpkzruCyI2\n=eurJ\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Sun, 15 Nov 1998 01:56:47 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": true, "msg_subject": "[REL6.4] Mixed case table name problems with some fixes." } ]
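
One limitation worth noting in the patch above: wrapping the name in double quotes does not handle a table name that itself contains a double-quote character, which SQL escapes by doubling it. A hypothetical, more defensive helper might look like the sketch below (illustration only, not part of the submitted patch):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Quote an identifier for SQL, doubling any embedded double quote.
 * Hypothetical helper, not from the patch.  Caller frees result.  */
static char *
quote_ident(const char *name)
{
	/* worst case every character is a quote, plus two delimiters
	 * and the terminating NUL */
	char	   *out = malloc(strlen(name) * 2 + 3);
	char	   *p = out;

	if (out == NULL)
		return NULL;
	*p++ = '"';
	for (; *name; name++)
	{
		if (*name == '"')
			*p++ = '"';		/* "" escapes " inside an identifier */
		*p++ = *name;
	}
	*p++ = '"';
	*p = '\0';
	return out;
}

int
main(void)
{
	char	   *q = quote_ident("MyTable");

	printf("select 1 from %s;\n", q);	/* select 1 from "MyTable"; */
	free(q);
	return 0;
}
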
[ { "msg_contents": "============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t: Billy G. Allie\t\nYour email address\t: [email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Intel 486DX2\n\n Operating System (example: Linux 2.0.26 ELF) \t: UnixWare 7.0\n\n PostgreSQL version (example: PostgreSQL-6.4) : PostgreSQL-6.4\n\n Compiler used (example: gcc 2.8.0)\t\t: Optimizing C Compilation System \n(CCS) 3.2 08/18/98 (u701)\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\nThere are a number of problems with using mixed case table names:\n\n1. Using constraints on tables whose name contains mixed case will fail.\n\n2. Creating triggers on tables whose name contains mixed case will fail.\n\n3. In pgsql, the command '\\d *' will fail.\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n1. Create a table that has a mixed case name and a constraint such as\n a default or a primary key.\n\n2. Create a trigger on a table with a mixed case name.\n\n3. In psql, execute the '\\d *' command in a database that has tables\n mixed case names.\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\nThe following patch file fixes problems 1 and 3. Thomas G. Lockhart's\npatch to fix problem 1 was incomplete. I added the additional changes\nto 'heap.c' that were needed to complete the fix.\n\n\n\n\n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Sun, 15 Nov 1998 02:02:36 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": true, "msg_subject": "[REL6.4] Mixed case table name problems with some fixes." } ]
[ { "msg_contents": "\nApplixware 4.4.1 proports to be able to talk to corba objects.\n\n>From the 'Builder' documentation...\n\nFor the Builder Programmer, CORBA allows you to access useful objects\nresiding in any part of your network. These objects may exist on the\nsame machine as your Builder application, or they may exist on a\ndifferent machine. The objects may be implemented in C++, Smalltalk,\nor some other programming language supported by your ORB's IDL\n(Interface Definition Language) compiler.\n\nAt the center of any CORBA distributed application is an ORB (Object\nRequest Broker). Applixware supports any CORBA 2-compilant ORB that\nsupports the IIOP protocol. The following figure shows the\narchitecture of a Builder application accessing a CORBA object.\n\n...\n\n\nHmm.. Interesting... I need to read the rest of this.\n\nNB: I have not tried to get this to work. I posted to the applixware\nlist asking if anyone had get it to work, and didn't get a response.\nPlus it is new.\n\n-- cary\n\n[1] Builder is their RAD tool.\n", "msg_date": "Sun, 15 Nov 1998 09:28:41 -0500 (EST)", "msg_from": "\"Cary B. O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More PostgreSQL+CORBA" } ]
[ { "msg_contents": "Hi all\n\nJust a follow up, did that datetime stuff I submitted ever become part of\nthe soon to be 6.4.1 tree?\n\nI just never seen any reply one way or the other, and wanted to make sure\nit did not just fall into a balck hole :-)\n\nIt seemed to me that it may also be usefull to others. For those who\ndon't know or forgot, it is a function to be called from a triger befor\nupdate, and will set the specified datetime field to the current datetime,\nthus implimenting a modification time stamp.\n\nI have re-attached the tgz file just incase it got lost.\nIt should be unpacked in ..../pgsql/contrib/spi/ and the Makefile patched\nwith the supplied patch.\n\nThanks\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!", "msg_date": "Sun, 15 Nov 1998 13:37:07 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "6.4.1 contrib/spi/" }, { "msg_contents": "Hello:\n\nI noticed the locking code in the backend/storage/lmgr directory has had a\nlot of modifications between 6.3.2 vs. 6.4. I know that Vadim is working\non changing the table-level locking scheme of 6.3.2 towards a\nmulti-version concurrency control scheme. I'm wondering how much along\nthese modifications are -- it looks like there were changes made to the\nexisting locking scheme but no additional features were added. This is\nbased on a very cursory look at the locking code in 6.4 (the locking code\nis a lot more complicated than I had initially thought it was going to\nbe).\n\nI'm curious as to how the multi-version scheme will be implemented. Vadim\nsaid that Postgres has a non-overwriting storage manager which can be\nexploited for this concurrency control scheme. I'm not sure I understand\nhim -- values that are updated in a table are written to the database in\nsuch a fashion that the old value remains accessible? This is\naccomplished without a recovery log?\n\nAlso, there is some user-level locking code in the contrib directory by\nMassimo that (if I am correct in my understanding of it), seems to be\nproviding row-level locking capabilities through query selects. Is this\nsomething that will be added to the Postgresql core at a future date?\n\nThanks in advance for any information you can provide.\n\n--------------< LINUX: The choice of a GNU generation. >--------------\nSteve Frampton <[email protected]> http://qlink.queensu.ca/~3srf\n\n\n\n", "msg_date": "Sun, 15 Nov 1998 18:04:10 -0500 (EST)", "msg_from": "Steve Frampton <[email protected]>", "msg_from_op": false, "msg_subject": "Concurrency control questions 6.3.2 vs. 6.4" }, { "msg_contents": "> Hello:\n> \n> I noticed the locking code in the backend/storage/lmgr directory has had a\n> lot of modifications between 6.3.2 vs. 6.4. I know that Vadim is working\n> on changing the table-level locking scheme of 6.3.2 towards a\n> multi-version concurrency control scheme. I'm wondering how much along\n> these modifications are -- it looks like there were changes made to the\n> existing locking scheme but no additional features were added. This is\n> based on a very cursory look at the locking code in 6.4 (the locking code\n> is a lot more complicated than I had initially thought it was going to\n> be).\n\nThat may have been me. 
I renamed a lot of the structures at one point,\nbecause they were so misnamed as to add to the confusion. No real code\nchanged in that pass, though we have made incremental improvements to\nthe code in this release.\n\nI don't think Vadim has started making changes for LLL there yet, but he\ncan tell us.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 15 Nov 1998 22:13:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Concurrency control questions 6.3.2 vs. 6.4" }, { "msg_contents": "Hi everyone:\n\nThanks for the helpful responses. Are you folks getting sick of me yet?\nI'm hoping somebody could help me understand a bit better the way locking\nprotocols are used in PostGreSQL 6.4, including how a query is parsed,\nexecuted, etc.\n\nI understand that there are two locks available: one for reads and one for\nwrites. They are called by RelationSetLockForRead() and \nRelationSetLockForWrite(), respectively, which are both implemented in\nbackend/storage/lmgr.c.\n\nThese functions are called by the query parser, trigger handler, and\nindexing subsystem. The query parser is responsible for parsing a given\nexpression in backend/parser/parse_expr.c and actually grabbing tuples in\nbackend/parser/parse_func.c which are passed as a heap array to the\nbackend which in turn passes the information to the client. Am I still\nokay?\n\nI'm interested in the locking protocols as used for query processing\nso I guess I can ignore the trigger and indexing for now.\n\nLocking is not accomplished with calls to the operating system but instead\nis managed by the locking manager through a lock hash table which lives in\nshared memory. The table contains information on locks such as the type of\nlock (read/write), number of locks currently held, an array of bitmasks\nshowing lock conflicts, and lock priority level (used to prevent\nstarvation). In addition, each relation has its own data structure which \nincludes some locking information.\n\nHere's where things get fuzzy -- there's a lot of code here so please be\npatient with me if I really screwed up in my interpretation. :-)\n\nWhen the RelationSetLockFor...() function is called, it ensures that the\nrelation and lock information for the relation are both valid. It then\ncalls MultiLockReln() with a pointer to the relation's lock information \nand the appropriate lock type. MultiLockReln() initializes a lock tag\nwhich is passed to MultiAcquire().\n\nI'm a little vague on MultiAcquire(). It seems to search through the\nlock hash table to see if a lock should be allowed? And if so it calls\nLockAcquire(). But LockAcquire() itself checks for conflicts, sleeps if\none exists, or sets the appropriate lock, adding it to the lock table. So\nI'm a bit confused here...\n\nUnlocks are accomplished in much the same fashion.\nRelationUnsetLockFor...() is called which in turn calls MultiRelease() \nwhich searches the lock table using the same algorithm as in\nMultiAcquire(). MultiRelease() calls LockRelease() which performs two\nfunctions. First, it removes the lock information from the lock table. \nSecond, this function will awaken any transaction which had blocked\nwaiting for the same lock. 
This is done here because if it was not, a new\nprocess could come along and request the lock causing a race condition.\n\nSo...did I even come *close* to understanding this behemoth? -_-;\n\nCorrections would be appreciated. Sorry again to be such a pain.\n\n--------------< LINUX: The choice of a GNU generation. >--------------\nSteve Frampton <[email protected]> http://qlink.queensu.ca/~3srf\n\n", "msg_date": "Tue, 17 Nov 1998 20:02:14 -0500 (EST)", "msg_from": "Steve Frampton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Concurrency control questions 6.3.2 vs. 6.4" }, { "msg_contents": "> \n> Hello:\n> \n> I noticed the locking code in the backend/storage/lmgr directory has had a\n> lot of modifications between 6.3.2 vs. 6.4. I know that Vadim is working\n> on changing the table-level locking scheme of 6.3.2 towards a\n> multi-version concurrency control scheme. I'm wondering how much along\n> these modifications are -- it looks like there were changes made to the\n> existing locking scheme but no additional features were added. This is\n> based on a very cursory look at the locking code in 6.4 (the locking code\n> is a lot more complicated than I had initially thought it was going to\n> be).\n> \n> I'm curious as to how the multi-version scheme will be implemented. Vadim\n> said that Postgres has a non-overwriting storage manager which can be\n> exploited for this concurrency control scheme. I'm not sure I understand\n> him -- values that are updated in a table are written to the database in\n> such a fashion that the old value remains accessible? This is\n> accomplished without a recovery log?\n> \n> Also, there is some user-level locking code in the contrib directory by\n> Massimo that (if I am correct in my understanding of it), seems to be\n> providing row-level locking capabilities through query selects. Is this\n> something that will be added to the Postgresql core at a future date?\n\nNo, this isn't row-level locking, it is a non-blocking mechanism which\ncan be used by applications to signal that some entities should not be\nmodified by other users because they are user-locked by one application\ninstance.\nThis is totally transparent and orthogonal with respect to standard locks.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Wed, 18 Nov 1998 11:42:36 +0100 (MET)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Concurrency control questions 6.3.2 vs. 6.4" }, { "msg_contents": "> Hi all\n> \n> Just a follow up, did that datetime stuff I submitted ever become part of\n> the soon to be 6.4.1 tree?\n> \n> I just never seen any reply one way or the other, and wanted to make sure\n> it did not just fall into a balck hole :-)\n> \n> It seemed to me that it may also be usefull to others. 
For those who\n> don't know or forgot, it is a function to be called from a triger befor\n> update, and will set the specified datetime field to the current datetime,\n> thus implimenting a modification time stamp.\n> \n> I have re-attached the tgz file just incase it got lost.\n> It should be unpacked in ..../pgsql/contrib/spi/ and the Makefile patched\n> with the supplied patch.\n\nDone. But in 6.5 tree, not 6.4.1. You mentioned it has not been\ncompletely tested, so only in 6.5.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 17:51:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.1 contrib/spi/" }, { "msg_contents": "Hi Bruce and all\n\nOn Sat, 12 Dec 1998, Bruce Momjian wrote:\n\n> > Just a follow up, did that datetime stuff I submitted ever become part of\n> > the soon to be 6.4.1 tree?\n> \n> Done. But in 6.5 tree, not 6.4.1. You mentioned it has not been\n> completely tested, so only in 6.5.\n> \n\nThats fine, thanks. I have now tested it more, all seems very well, not\neven one glitch so far. Maybe I finally got some thing right ?-)\n\nI did notice later that I forgot to add a section to the README file, if\nyou would like I can do that and submit a patch for the README file?\n\nHave a great night\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Sat, 12 Dec 1998 19:04:16 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.4.1 contrib/spi/" }, { "msg_contents": "> Hi Bruce and all\n> \n> On Sat, 12 Dec 1998, Bruce Momjian wrote:\n> \n> > > Just a follow up, did that datetime stuff I submitted ever become part of\n> > > the soon to be 6.4.1 tree?\n> > \n> > Done. But in 6.5 tree, not 6.4.1. You mentioned it has not been\n> > completely tested, so only in 6.5.\n> > \n> \n> Thats fine, thanks. I have now tested it more, all seems very well, not\n> even one glitch so far. Maybe I finally got some thing right ?-)\n> \n> I did notice later that I forgot to add a section to the README file, if\n> you would like I can do that and submit a patch for the README file?\n> \n\nSure. Sounds good.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 19:37:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.1 contrib/spi/" }, { "msg_contents": "> Hi everyone:\n> \n> Thanks for the helpful responses. Are you folks getting sick of me yet?\n> I'm hoping somebody could help me understand a bit better the way locking\n> protocols are used in PostGreSQL 6.4, including how a query is parsed,\n> executed, etc.\n> \n> I understand that there are two locks available: one for reads and one for\n> writes. 
They are called by RelationSetLockForRead() and \n> RelationSetLockForWrite(), respectively, which are both implemented in\n> backend/storage/lmgr.c.\n> \n> These functions are called by the query parser, trigger handler, and\n> indexing subsystem. The query parser is responsible for parsing a given\n> expression in backend/parser/parse_expr.c and actually grabbing tuples in\n> backend/parser/parse_func.c which are passed as a heap array to the\n> backend which in turn passes the information to the client. Am I still\n> okay?\n> \n> I'm interested in the locking protocols as used for query processing\n> so I guess I can ignore the trigger and indexing for now.\n> \n> Locking is not accomplished with calls to the operating system but instead\n> is managed by the locking manager through a lock hash table which lives in\n> shared memory. The table contains information on locks such as the type of\n> lock (read/write), number of locks currently held, an array of bitmasks\n> showing lock conflicts, and lock priority level (used to prevent\n> starvation). In addition, each relation has its own data structure which \n> includes some locking information.\n> \n> Here's where things get fuzzy -- there's a lot of code here so please be\n> patient with me if I really screwed up in my interpretation. :-)\n> \n> When the RelationSetLockFor...() function is called, it ensures that the\n> relation and lock information for the relation are both valid. It then\n> calls MultiLockReln() with a pointer to the relation's lock information \n> and the appropriate lock type. MultiLockReln() initializes a lock tag\n> which is passed to MultiAcquire().\n> \n> I'm a little vague on MultiAcquire(). It seems to search through the\n> lock hash table to see if a lock should be allowed? And if so it calls\n> LockAcquire(). But LockAcquire() itself checks for conflicts, sleeps if\n> one exists, or sets the appropriate lock, adding it to the lock table. So\n> I'm a bit confused here...\n> \n> Unlocks are accomplished in much the same fashion.\n> RelationUnsetLockFor...() is called which in turn calls MultiRelease() \n> which searches the lock table using the same algorithm as in\n> MultiAcquire(). MultiRelease() calls LockRelease() which performs two\n> functions. First, it removes the lock information from the lock table. \n> Second, this function will awaken any transaction which had blocked\n> waiting for the same lock. This is done here because if it was not, a new\n> process could come along and request the lock causing a race condition.\n\nSounds pretty close. I assume you have studied the backend flowcart on\nthe web support page and in src/tools/backend?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 21:30:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Concurrency control questions 6.3.2 vs. 6.4" }, { "msg_contents": "Hi Bruce and all\n\nOK, here is a diff for the README file in /usr/src/pgsql/contrib/spi/.\nFor the 6.5 tree.\n\nHave a great night.\nTerry\nOn Sat, 12 Dec 1998, Bruce Momjian wrote:\n\n> > Hi Bruce and all\n> > \n> > On Sat, 12 Dec 1998, Bruce Momjian wrote:\n> > \n> > > > Just a follow up, did that datetime stuff I submitted ever become part of\n> > > > the soon to be 6.4.1 tree?\n> > > \n> > > Done. But in 6.5 tree, not 6.4.1. 
You mentioned it has not been\n> > > completely tested, so only in 6.5.\n> > > \n> > \n> > Thats fine, thanks. I have now tested it more, all seems very well, not\n> > even one glitch so far. Maybe I finally got some thing right ?-)\n> > \n> > I did notice later that I forgot to add a section to the README file, if\n> > you would like I can do that and submit a patch for the README file?\n> > \n> \n> Sure. Sounds good.\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!", "msg_date": "Mon, 14 Dec 1998 00:03:18 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.4.1 contrib/spi/" }, { "msg_contents": "Applied.\n\n> Hi Bruce and all\n> \n> OK, here is a diff for the README file in /usr/src/pgsql/contrib/spi/.\n> For the 6.5 tree.\n> \n> Have a great night.\n> Terry\n> On Sat, 12 Dec 1998, Bruce Momjian wrote:\n> \n> > > Hi Bruce and all\n> > > \n> > > On Sat, 12 Dec 1998, Bruce Momjian wrote:\n> > > \n> > > > > Just a follow up, did that datetime stuff I submitted ever become part of\n> > > > > the soon to be 6.4.1 tree?\n> > > > \n> > > > Done. But in 6.5 tree, not 6.4.1. You mentioned it has not been\n> > > > completely tested, so only in 6.5.\n> > > > \n> > > \n> > > Thats fine, thanks. I have now tested it more, all seems very well, not\n> > > even one glitch so far. Maybe I finally got some thing right ?-)\n> > > \n> > > I did notice later that I forgot to add a section to the README file, if\n> > > you would like I can do that and submit a patch for the README file?\n> > > \n> > \n> > Sure. Sounds good.\n> > \n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> Terry Mackintosh <[email protected]> http://www.terrym.com\n> sysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n> \n> Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n> -------------------------------------------------------------------\n> Success Is A Choice ... book by Rick Patino, get it, read it!\nContent-Description: \n\n> *** README\tFri Oct 17 05:55:29 1997\n> --- README\tSun Dec 13 23:31:35 1998\n> ***************\n> *** 135,137 ****\n> --- 135,149 ----\n> \n> To CREATE FUNCTION use insert_username.sql (will be made by gmake from\n> insert_username.source).\n> + \n> + \n> + 5. 
moddatetime.c - function for maintaining a modification datetime stamp.\n> + \n> + You have to create a BEFORE UPDATE trigger using the function moddatetime().\n> + One argument must be given, that is the name of the field that is of type \n> + datetime that is to be used as the modification time stamp.\n> + \n> + There is an example in moddatetime.example.\n> + \t\n> + To CREATE FUNCTION use moddatetime.sql ( will be made by gmake from \n> + moddatetime.source).\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 Dec 1998 00:13:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.1 contrib/spi/" } ]
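
Steve's walk-through of the lock manager in the thread above is easier to check against a toy model. The sketch below is emphatically not lmgr's code -- the real manager keeps everything in a shared-memory hash table keyed by lock tags, with conflict bitmasks, priorities and per-process wait queues -- it only models the two behaviors he describes: LockAcquire() granting or blocking on a read/write conflict, and LockRelease() waking sleepers so that a newcomer cannot steal the lock:

#include <stdio.h>

/* Toy model of read/write lock semantics -- NOT the backend's
 * lock manager.  Readers share; a writer excludes everyone.    */
typedef struct
{
	int			nreaders;	/* granted read locks */
	int			writer;		/* 1 if a write lock is held */
	int			nwaiting;	/* lockers blocked on this lock */
} ToyLock;

/* Returns 1 if granted, 0 where the real LockAcquire() would put
 * the backend to sleep on the lock's wait queue.                 */
static int
toy_acquire(ToyLock *l, int want_write)
{
	if (l->writer || (want_write && l->nreaders > 0))
	{
		l->nwaiting++;
		return 0;
	}
	if (want_write)
		l->writer = 1;
	else
		l->nreaders++;
	return 1;
}

static void
toy_release(ToyLock *l, int had_write)
{
	if (had_write)
		l->writer = 0;
	else
		l->nreaders--;
	/* The real LockRelease() wakes blocked transactions here,
	 * exactly to avoid the race condition Steve mentions.     */
	if (l->nreaders == 0 && !l->writer && l->nwaiting > 0)
		printf("waking %d waiter(s)\n", l->nwaiting);
}

int
main(void)
{
	ToyLock		l = {0, 0, 0};

	printf("read  granted: %d\n", toy_acquire(&l, 0));	/* 1 */
	printf("read  granted: %d\n", toy_acquire(&l, 0));	/* 1: readers share */
	printf("write granted: %d\n", toy_acquire(&l, 1));	/* 0: conflicts */
	toy_release(&l, 0);
	toy_release(&l, 0);					/* wakeup fires here */
	return 0;
}
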
[ { "msg_contents": "Since we're supposedly an \"object relational database\", is there any chance\nwe might be able to move towards supporting OQL93?\n\nTaral\n\n", "msg_date": "Sun, 15 Nov 1998 21:02:34 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": true, "msg_subject": "SQL vs. OQL" }, { "msg_contents": "Taral wrote:\n> \n> Since we're supposedly an \"object relational database\", is there any chance\n> we might be able to move towards supporting OQL93?\n\nWhere can I get the spec of OQL93 ?\n\nHannu\n", "msg_date": "Mon, 16 Nov 1998 12:22:19 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SQL vs. OQL" }, { "msg_contents": "Read http://www.jcc.com/sql_odmg_convergence.html and follow the links :)\n\nTaral\n\n> -----Original Message-----\n> From: Hannu Krosing [mailto:[email protected]]\n> Sent: Monday, November 16, 1998 4:22 AM\n> To: Taral\n> Cc: [email protected]\n> Subject: Re: [HACKERS] SQL vs. OQL\n> \n> \n> Taral wrote:\n> > \n> > Since we're supposedly an \"object relational database\", is \n> there any chance\n> > we might be able to move towards supporting OQL93?\n> \n> Where can I get the spec of OQL93 ?\n> \n> Hannu\n> \n", "msg_date": "Mon, 16 Nov 1998 09:25:24 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] SQL vs. OQL" }, { "msg_contents": "> Read http://www.jcc.com/sql_odmg_convergence.html and follow the links \n\nThe pages and some of the links seem to have frozen about a year ago.\nAnything still active on these efforts?\n\n - Tom\n", "msg_date": "Tue, 17 Nov 1998 03:40:17 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SQL vs. OQL" }, { "msg_contents": "AFAICT, there's an agreement to eventually merge the query parts of OQL and\nSQL so that there's only one query language. You did check the ODMG page?\n\nTaral\n\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf\n> Of Thomas G. Lockhart\n> Sent: Monday, November 16, 1998 9:40 PM\n> To: Taral\n> Cc: Hannu Krosing; [email protected]\n> Subject: Re: [HACKERS] SQL vs. OQL\n>\n>\n> > Read http://www.jcc.com/sql_odmg_convergence.html and follow the links\n>\n> The pages and some of the links seem to have frozen about a year ago.\n> Anything still active on these efforts?\n>\n> - Tom\n>\n\n", "msg_date": "Tue, 17 Nov 1998 12:12:23 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] SQL vs. OQL" } ]
[ { "msg_contents": "OK, folks, I need help on this one. We use the new structure slock_t,\nbut it is not defined in all include/port/*.h files. It is missing in:\n\t\n\t#$ ls *.h |diff - /tmp/x\n\t< dgux.h\n\t< sco.h\n\t< sunos4.h\n\t< ultrix4.h\n\t< win32.h\n\nIf any of these ports tries to access the locking code, the compile will\nfail, or is that wrong? I thought sco works, or is that unixware works?\nsunos4 seems to require it, and doesn't have it.\n\nCan someone knowledgeable about this area supply a patch?\n\n\n> \n> \tI decided to build the current on my 4.1.4 system. No luck - there is\n> the usual missing def's for fprintf, printf... which are easy to list for\n> a possible massive #include for sunsos builds - I can send you the list later.\n> The main problem is in s_lock.c\n> \n> ....\n> gcc -I../../../include -I../../../backend -Wall -Wmissing-prototypes \n> -I../..\n> -c s_lock.c -o s_lock.o\n> s_lock.c:43: warning: type defaults to `int' in declaration of `slock_t'\n> s_lock.c:43: parse error before `*'\n> s_lock.c: In function `s_lock_stuck':\n> s_lock.c:47: `lock' undeclared (first use in this function)\n> s_lock.c:47: (Each undeclared identifier is reported only once\n> s_lock.c:47: for each function it appears in.)\n> s_lock.c:47: `file' undeclared (first use in this function)\n> s_lock.c:47: `line' undeclared (first use in this function)\n> s_lock.c: At top level:\n> s_lock.c:60: warning: type defaults to `int' in declaration of `slock_t'\n> s_lock.c:60: parse error before `*'\n> s_lock.c:61: warning: no previous prototype for `s_lock'\n> s_lock.c: In function `s_lock':\n> s_lock.c:64: warning: implicit declaration of function `TAS'\n> s_lock.c:64: `lock' undeclared (first use in this function)\n> s_lock.c:74: `file' undeclared (first use in this function)\n> s_lock.c:74: `line' undeclared (first use in this function)\n> gmake[3]: *** [s_lock.o] Error 1\n> ...\n> \n> which means that .o never is built and much later the entire build fails.\n> \n> \tSince you last checked out this file - I figured you might want to\n> take a stab at it. I am open to suggestions before I try to puzzle it out.\n> \n> -- \n> \tStephen N. Kogge\n> \[email protected]\n> \thttp://www.uimage.com\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 15 Nov 1998 22:10:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres for Sunos 4.1.4" }, { "msg_contents": "> \n> \tI decided to build the current on my 4.1.4 system. No luck - there is\n> the usual missing def's for fprintf, printf... 
which are easy to list for\n> a possible massive #include for sunsos builds - I can send you the list later.\n> The main problem is in s_lock.c\n> \n> ....\n> gcc -I../../../include -I../../../backend -Wall -Wmissing-prototypes \n> -I../..\n> -c s_lock.c -o s_lock.o\n> s_lock.c:43: warning: type defaults to `int' in declaration of `slock_t'\n> s_lock.c:43: parse error before `*'\n> s_lock.c: In function `s_lock_stuck':\n> s_lock.c:47: `lock' undeclared (first use in this function)\n> s_lock.c:47: (Each undeclared identifier is reported only once\n> s_lock.c:47: for each function it appears in.)\n> s_lock.c:47: `file' undeclared (first use in this function)\n> s_lock.c:47: `line' undeclared (first use in this function)\n> s_lock.c: At top level:\n> s_lock.c:60: warning: type defaults to `int' in declaration of `slock_t'\n> s_lock.c:60: parse error before `*'\n> s_lock.c:61: warning: no previous prototype for `s_lock'\n> s_lock.c: In function `s_lock':\n> s_lock.c:64: warning: implicit declaration of function `TAS'\n> s_lock.c:64: `lock' undeclared (first use in this function)\n> s_lock.c:74: `file' undeclared (first use in this function)\n> s_lock.c:74: `line' undeclared (first use in this function)\n> gmake[3]: *** [s_lock.o] Error 1\n> ...\n> \n> which means that .o never is built and much later the entire build fails.\n> \n> \tSince you last checked out this file - I figured you might want to\n> take a stab at it. I am open to suggestions before I try to puzzle it out.\n> \n\nCan you try this and let me know:\n\n---------------------------------------------------------------------------\n\n*** ./s_lock.c.orig\tSat Dec 12 21:22:33 1998\n--- ./s_lock.c\tSat Dec 12 21:22:43 1998\n***************\n*** 30,37 ****\n #define S_NSPINCYCLE\t20\n #define S_MAX_BUSY\t\t500 * S_NSPINCYCLE\n \n! int\t\t\ts_spincycle[S_NSPINCYCLE] =\n! {0, 0, 0, 0, 10000, 0, 0, 0, 10000, 0,\n \t0, 10000, 0, 0, 10000, 0, 10000, 0, 10000, 10000\n };\n \n--- 30,37 ----\n #define S_NSPINCYCLE\t20\n #define S_MAX_BUSY\t\t500 * S_NSPINCYCLE\n \n! int\t\t\ts_spincycle[S_NSPINCYCLE] = {\n! 0, 0, 0, 0, 10000, 0, 0, 0, 10000, 0,\n \t0, 10000, 0, 0, 10000, 0, 10000, 0, 10000, 10000\n };\n \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 21:23:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres for Sunos 4.1.4" } ]
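
For reference, the shape of what each include/port/*.h has to supply is roughly the following. The sketch substitutes a modern GCC builtin for the per-port TAS() assembly (the real headers use, e.g., xchgb on x86 or ldstub on SPARC) and borrows the s_spincycle back-off idea from s_lock.c; it is an illustration of the contract, not a drop-in port file:

#include <unistd.h>

/* What a port header must provide: a slock_t type plus TAS() and
 * S_UNLOCK() that compile to an atomic test-and-set.  A GCC
 * builtin stands in here for the per-port inline assembly.      */
typedef volatile int slock_t;

#define TAS(lock)		__sync_lock_test_and_set((lock), 1)
#define S_UNLOCK(lock)	__sync_lock_release(lock)

/* Spin with occasional sleeps, as s_lock.c's s_spincycle[] does,
 * instead of burning CPU on every retry.  The real s_lock() also
 * gives up after S_MAX_BUSY tries and calls s_lock_stuck().      */
static void
toy_s_lock(slock_t *lock)
{
	static const int spincycle[] = {0, 0, 0, 10000};
	int			i = 0;

	while (TAS(lock))
	{
		if (spincycle[i % 4] > 0)
			usleep(spincycle[i % 4]);
		i++;
	}
}

int
main(void)
{
	slock_t		lock = 0;

	toy_s_lock(&lock);
	/* ... critical section ... */
	S_UNLOCK(&lock);
	return 0;
}
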
[ { "msg_contents": "It seems that full support for CORBA and the COS Query Service requires us\nto enable the user to parse, prepare, and execute the query in three\nseparate stages. Are we also planning to support PREPARE? If so, we should\nco-ordinate the effort, since the full COSQS support will require pulling\napart pg_parse_and_plan().\n\nTaral\n\n", "msg_date": "Mon, 16 Nov 1998 14:19:32 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": true, "msg_subject": "PREPARE" }, { "msg_contents": "On Mon, 16 Nov 1998, Taral wrote:\n\n> It seems that full support for CORBA and the COS Query Service requires us\n> to enable the user to parse, prepare, and execute the query in three\n> separate stages. Are we also planning to support PREPARE? If so, we should\n> co-ordinate the effort, since the full COSQS support will require pulling\n> apart pg_parse_and_plan().\n\nImplementing PREPARE would benefit JDBC.\n\nCurrently, were implementing it in the driver but having this in the\nbackend would benefit JDBC a lot in performance.\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Tue, 17 Nov 1998 06:58:39 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "Taral wrote:\n> \n> It seems that full support for CORBA and the COS Query Service requires us\n> to enable the user to parse, prepare, and execute the query in three\n> separate stages. Are we also planning to support PREPARE? If so, we should\n> co-ordinate the effort, since the full COSQS support will require pulling\n> apart pg_parse_and_plan().\n\nWe should.\n\nCurrently we do support PREPARE (kind of) in the SPI interface.\n\nHowever, it is not strictly necessary (both ODBC and JDBC currently \nsimulate it on the client side), but it would enable interactive \napplications perform much better if we did.\n\nThe current FE<->BE protocol is strange mix of CLI and directly \nusable psql replacement ;)\n\nBTW, what does CORBA prescribe about transactions (if anything) ?\n\nIs the current transaction model adequate or do we need nested \ntransactions ?\n\nPS. It would probably be beneficial to look also at Microsofts ADO for\nideas,\nafaik this is the DCOM version of what we are trying to do with CORBA.\n\n------------\nHannu\n", "msg_date": "Tue, 17 Nov 1998 10:15:56 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "On Mon, Nov 16, 1998 at 02:19:32PM -0600, Taral wrote:\n> separate stages. Are we also planning to support PREPARE? If so, we should\n> co-ordinate the effort, since the full COSQS support will require pulling\n> apart pg_parse_and_plan().\n\nHopefully. I'm still holding back PREPARE for ecpg until I can think of a\ngood solution. The best of course would be in the backend. Hmm, how do ODBC\nand JDBC solve this?\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! 
Use Debian GNU/Linux!\n\n\n", "msg_date": "Tue, 17 Nov 1998 10:18:24 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "> I'm still holding back PREPARE for ecpg until I can think of a\n> good solution. The best of course would be in the backend.\n\nSo what would it take to do this in the backend? I think the places\nwhich would need to be touched fall into areas I either know about or am\nstarting to look at to implement the CASE clause.\n\nWe'd need:\n - a \"named buffer\" (or several) to hold the intermediate input\n - a way to pass in parameters or substitution arguments\n - a way to decide if previous parser/planner/executor\n results can be reused\n\nWhat else?\n\n - Tom\n", "msg_date": "Tue, 17 Nov 1998 13:45:19 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "> > I'm still holding back PREPARE for ecpg until I can think of a\n> > good solution. The best of course would be in the backend.\n> \n> So what would it take to do this in the backend? I think the places\n> which would need to be touched fall into areas I either know about or am\n> starting to look at to implement the CASE clause.\n> \n> We'd need:\n> - a \"named buffer\" (or several) to hold the intermediate input\n\nportals\n\n> - a way to pass in parameters or substitution arguments\n\nSQL functions?\n\n> - a way to decide if previous parser/planner/executor\n> results can be reused\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Nov 1998 11:16:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "[ Cross-post to pgsql-interfaces ]\n\n> BTW, what does CORBA prescribe about transactions (if anything) ?\n>\n> Is the current transaction model adequate or do we need nested\n> transactions ?\n\nThe Query Service is read-only, so does not have locking or transactions...\nWe will have to implement the Transaction Service...\n\nCurrent service list for our implementation: (in order of importance)\n\nNaming Service (provided by most 2.2 ORBs)\nLifeCycle Service (provided by mico) (dependent on NS)\nQuery Service\nSecurity Service\nConcurrencyControl Service\nTransaction Service (dependent on CCS)\nRelationship Service (provided by mico)\n\n(Not sure about the ordering of the last few...)\n\nAs you can see, this is a non-trivial list of interfaces :)\n\nTaral\n\n", "msg_date": "Tue, 17 Nov 1998 12:21:52 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": true, "msg_subject": "CORBAservices (was RE: [HACKERS] PREPARE)" }, { "msg_contents": "> Is the current transaction model adequate or do we need nested\n> transactions ?\n\nErr... I didn't answer your question, did I? The COS Transaction Service\nimplements nested transactions.\n\nTaral\n\n", "msg_date": "Tue, 17 Nov 1998 12:26:05 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] PREPARE" }, { "msg_contents": "\n\nMichael Meskes wrote:\n\n> On Mon, Nov 16, 1998 at 02:19:32PM -0600, Taral wrote:\n> > separate stages. Are we also planning to support PREPARE? 
If so, we should\n> > co-ordinate the effort, since the full COSQS support will require pulling\n> > apart pg_parse_and_plan().\n>\n> Hopefully. I'm still holding back PREPARE for ecpg until I can think of a\n> good solution. The best of course would be in the backend. Hmm, how do ODBC\n> and JDBC solve this?\n\nSpeaking for ODBC, we keep the PREPARE'd statement in a malloc'ed buffer in the\ndriver. The fun part is that we must support a set of API calls which request\nthings like the number of parameters, and result set, column info. We get the\nparameter count by simply counting the parameter markers. To get the column\ninfo, we send the statement to the backend, retrieve the column info and discard\nany returned rows. Not very elegant nor inefficient. But it works ok.\n\nThis functionality should be handled by the backend. May I suggest a protocol\nthat will allow this typical interchange.\n\nsend PREPARE(statement)\nreceive stmt_handle\n\nsend GET_PARAM_COUNT(stmt_handle)\nreceive param_count\nfor i = 1 to param_count\n send DESCRIBE_PARAMETER(stmt_handle, i); -- include: type, nullability,\nscale, & precision\n receive parameter description.\nend for\n\nsend GET_COLUMN_COUNT(stmt_handle);\nreceive column_count\nfor i = 1 to column_count\n send DESCRIBE_COLUMN(stmt_handle, i); -- included: tablename,\ncolumn name, column alias, type, nullability, scale & precision\n receive column description.\nend for\n\n-- There are other column info attributes worth sending such as: owner,\nsearchable, signed/unsigned, updateable, case sensitive & autoincrement\n-- I will be quite content if we get the main ones specified above.\n\nfor n set of parameters\n for i = 1 to param_count\n send PUT_DATA(stmt_handle, i, param_data[i])\n end for\n send EXECUTE(stmt_handle)\n receive result set\nend for\n\nsend FREE(stmt_handle)\n\n\n\n\n", "msg_date": "Tue, 17 Nov 1998 13:33:23 -0500", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "> > Is the current transaction model adequate or do we need nested\n> > transactions ?\n>\n> Err... I didn't answer your question, did I? The COS Transaction Service\n> implements nested transactions.\n\nAha... finally found the line I was looking for:\n\n\"An implementation of the Transaction Service is not required to support\nnested transactions.\"\n\nTaral\n\n", "msg_date": "Tue, 17 Nov 1998 12:36:55 -0600", "msg_from": "\"Taral\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] PREPARE" }, { "msg_contents": "On Tue, 17 Nov 1998, Michael Meskes wrote:\n\n> On Mon, Nov 16, 1998 at 02:19:32PM -0600, Taral wrote:\n> > separate stages. Are we also planning to support PREPARE? If so, we should\n> > co-ordinate the effort, since the full COSQS support will require pulling\n> > apart pg_parse_and_plan().\n> \n> Hopefully. I'm still holding back PREPARE for ecpg until I can think of a\n> good solution. The best of course would be in the backend. Hmm, how do ODBC\n> and JDBC solve this?\n\nBackground:\n\nJDBC has a class called PrepareStatement. It's created by the\nprepareStatement() method in the Connection class. The statement passed to\nit has each required parameter represented by a ?\n\ninsert into mytable (field1,field2,field3) values (?,?,?);\n\nNow the current postgresql jdbc implementation stores this string, and has\na Vector (Java for a dynamic array) that has each value stored in it as\nthe client application sets them. 
When the client calls the\nexecuteUpdate() or executeQuery() methods, we just replace the ?'s with\nthe values in sequence, and pass the query to the backend as normal.\n\nIt's a real botch, but it works.\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Tue, 17 Nov 1998 18:40:01 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "On Tue, Nov 17, 1998 at 01:45:19PM +0000, Thomas G. Lockhart wrote:\n> So what would it take to do this in the backend? I think the places\n> which would need to be touched fall into areas I either know about or am\n> starting to look at to implement the CASE clause.\n> \n> We'd need:\n> - a \"named buffer\" (or several) to hold the intermediate input\n\nI didn't get this one completly. What input do you mean?\n\n> - a way to pass in parameters or substitution arguments\n\nYes. That means changing of declare cursor as well.\n\n> - a way to decide if previous parser/planner/executor\n> results can be reused\n\nYes.\n\n> What else?\n\nRunning planner on the statement as it is without the variables to be\nsubstituted. So execution of declare gets faster.\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Tue, 17 Nov 1998 20:30:09 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "> > - a \"named buffer\" (or several) to hold the intermediate input\n> I didn't get this one completly. What input do you mean?\n\nJust the original string/query to be prepared...\n\n - Tom\n", "msg_date": "Wed, 18 Nov 1998 03:23:30 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "On Tue, Nov 17, 1998 at 06:40:01PM +0000, Peter T Mount wrote:\n> it has each required parameter represented by a ?\n> \n> insert into mytable (field1,field2,field3) values (?,?,?);\n> \n> Now the current postgresql jdbc implementation stores this string, and has\n> a Vector (Java for a dynamic array) that has each value stored in it as\n> the client application sets them. When the client calls the\n> executeUpdate() or executeQuery() methods, we just replace the ?'s with\n> the values in sequence, and pass the query to the backend as normal.\n\nThat's exactly what I wanted to use for ecpg. But I guess I postpone it just\na little more. :-)\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Wed, 18 Nov 1998 08:47:43 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "On Wed, Nov 18, 1998 at 03:23:30AM +0000, Thomas G. Lockhart wrote:\n> > I didn't get this one completly. What input do you mean?\n> \n> Just the original string/query to be prepared...\n\nI see. But wouldn't it be more useful to preprocess the query and store the\nresulting nodes instead? 
We don't want to parse the statement everytime a\nvariable binding comes in.\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Wed, 18 Nov 1998 08:48:43 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": ">>>>> \"T\" == Taral <[email protected]> writes:\n\n >> > Is the current transaction model adequate or do we need nested\n >> > transactions ?\n >> \n >> Err... I didn't answer your question, did I? The COS Transaction Service\n >> implements nested transactions.\n\n T> Aha... finally found the line I was looking for:\n\n T> \"An implementation of the Transaction Service is not required to support\n T> nested transactions.\"\n\nTo my mind there are _no_ nested transactions in Postgres. \n\n\n-- \nAnatoly K. Lasareff Email: [email protected] \nSenior programmer\n", "msg_date": "18 Nov 1998 10:49:49 +0300", "msg_from": "[email protected] (Anatoly K. Lasareff)", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] RE: [HACKERS] PREPARE" }, { "msg_contents": "Michael Meskes wrote:\n\n>\n> On Wed, Nov 18, 1998 at 03:23:30AM +0000, Thomas G. Lockhart wrote:\n> > > I didn't get this one completly. What input do you mean?\n> >\n> > Just the original string/query to be prepared...\n>\n> I see. But wouldn't it be more useful to preprocess the query and store the\n> resulting nodes instead? We don't want to parse the statement everytime a\n> variable binding comes in.\n\n Right. A real improvement would only be to have the prepared\n execution plan in the backend and just giving the parameter\n values.\n\n I can think of the following construct:\n\n PREPARE optimizable-statement;\n\n That one will run parser/rewrite/planner, create a new memory\n context with a unique identifier and saves the querytree's\n and plan's in it. Parameter values are identified by the\n usual $n notation. The command returns the identifier.\n\n EXECUTE QUERY identifier [value [, ...]];\n\n then get's back the prepared plan and querytree by the id,\n creates an executor context with the given values in the\n parameter array and calls ExecutorRun() for them.\n\n The PREPARE needs to analyze the resulting parsetrees to get\n the datatypes (and maybe atttypmod's) of the parameters, so\n EXECUTE QUERY can convert the values into Datum's using the\n types input functions. And the EXECUTE has to be handled\n special in tcop (it's something between a regular query and\n an utility statement). But it's not too hard to implement.\n\n Finally a\n\n FORGET QUERY identifier;\n\n (don't remember how the others named it) will remove the\n prepared plan etc. simply by destroying the memory context\n and dropping the identifier from the id->mcontext+prepareinfo\n mapping.\n\n This all restricts the usage of PREPARE to optimizable\n statements. Is it required to be able to prepare utility\n statements (like CREATE TABLE or so) too?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 18 Nov 1998 21:02:06 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "> But wouldn't it be more useful to preprocess the query and store the\n> resulting nodes instead? We don't want to parse the statement \n> everytime a variable binding comes in.\n\nSure. Sorry I wasn't being very specific. Also, whoever implements it\ngets to do it either way at first :)\n\nbtw, I'm buried in trying to get a CASE statement to work, so am not\nvolunteering for this one...\n\n - Tom\n", "msg_date": "Thu, 19 Nov 1998 03:32:54 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" }, { "msg_contents": "On Thu, Nov 19, 1998 at 03:32:54AM +0000, Thomas G. Lockhart wrote:\n> Sure. Sorry I wasn't being very specific. Also, whoever implements it\n> gets to do it either way at first :)\n\n:-)\n\n> btw, I'm buried in trying to get a CASE statement to work, so am not\n> volunteering for this one...\n\nNow I get depressed. :-)\n\nHopefully someone finds time for this. I don't know the internals enough and\nprobably will be short on time too as I change jobs yet again.\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux!\n\n\n", "msg_date": "Thu, 19 Nov 1998 08:52:23 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PREPARE" } ]
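To make Jan Wieck's design above concrete, here is how the three proposed commands might look in a session. This is proposed syntax only -- none of it exists in 6.4 -- and the table name, identifier, and values are illustrative:

    PREPARE SELECT * FROM weather WHERE city = $1 AND temp_lo > $2;
    -- suppose the command returns the identifier p1
    EXECUTE QUERY p1 'Hayward' 37;    -- values are converted with the
                                      -- parameter types' input functions
    EXECUTE QUERY p1 'Berkeley' 45;   -- reuses the saved plan, no re-parse
    FORGET QUERY p1;                  -- destroys the plan's memory context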
[ { "msg_contents": "\tI've run into a few problem compiling Postgres v6.4 on an Alpha\nrunning Red Hat Linux v5.2. I'm not asking anybody to drop\nwhat they're doing to fix these... hopefully I'll do that\nmyself soon :). I provide the following for informational\npurposes more than anything and to see if, perhaps, someone has \nrun across a similar problem and/or might have suggestions.\n\tOriginally, I was using the following configuration...\n\n[postgres@griddle adt]$ uname -a\nLinux griddle.sped.ukans.edu 2.0.35 #1 Fri Oct 9 02:16:20 EDT 1998 alpha unknown\n[postgres@griddle adt]$ gcc -v\nReading specs from /usr/lib/gcc-lib/alpha-redhat-linux/egcs-2.90.29/specs\ngcc version egcs-2.90.29 980515 (egcs-1.0.3 release)\n\n\tWhat doesn't show up in the above is that this is an Alpha XL300.\nThought I'd mention that...\n\tThis resulted in an internal compiler error which is documented\nbelow (for those who are really interested). I then updated to the newest\nstable version of egcs available.\n\n[postgres@griddle postgres]$ gcc -v\nReading specs from /usr/local/lib/gcc-lib/alphaev5-unknown-linux-gnu/egcs-2.91.57/specs\ngcc version egcs-2.91.57 19980901 (egcs-1.1 release)\n\n\tAfter upgrading, I was able to compile the entire distribution\nsuccessfully. At least one problem remains, however. All the following\ncommands are run from user \"postgres\", an account which has been\nconfigured exactly as described in the documentation. When trying to\ninitialize the the database using the \"initdb\" command, I receive the\nfollowing error:\n\n\n[postgres@griddle postgres]$ initdb\n\nWe are initializing the database system with username postgres (uid=503).\nThis user will own all the files and must also own the server process.\n\nCreating template database in /usr/local/pgsql/data/base/template1\n\nFATAL: s_lock(202ba680) at spin.c:114, stuck spinlock. Aborting.\n\nFATAL: s_lock(202ba680) at spin.c:114, stuck spinlock. Aborting.\ninitdb: could not create template database\ninitdb: cleaning up by wiping out /usr/local/pgsql/data/base/template1\n \n\n\tDue to an extremely hectic schedule at the moment, I've been\nunable to investigate this error any further. If any further information\nis required, please contact me at the \"ReplyTo\" address given above.\n\tFor the record, I received the following internal compiler error\nusing egcs 1.0.3. I include this only because egcs 1.0.3a, the compiler\nused in this build, is apparently the default compiler for Red Hat v5.2.\nThe text below begins with the src/backend/utils directory, which is\nwhere the problems begin. I include all the warnings as well in case\nanyone was interested in fixing any or all of them...\n\n[snip, snip]\nmake[2]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils'\nfor i in adt cache error fmgr hash init misc mmgr sort time; do make -C $i SUBSYS.o; done\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/adt'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c acl.c -o acl.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. 
-c arrayfuncs.c -o arrayfuncs.o\narrayfuncs.c: In function `array_clip':\narrayfuncs.c:1006: warning: cast from pointer to integer of different size\narrayfuncs.c:1021: warning: cast to pointer from integer of different size\narrayfuncs.c: In function `array_assgn':\narrayfuncs.c:1258: warning: cast from pointer to integer of different size\narrayfuncs.c: In function `ArrayCastAndSet':\narrayfuncs.c:1371: warning: cast from pointer to integer of different size\narrayfuncs.c: In function `_LOtransfer':\narrayfuncs.c:1690: warning: cast from pointer to integer of different size\narrayfuncs.c:1695: warning: cast from pointer to integer of different size\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c arrayutils.c -o arrayutils.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c bool.c -o bool.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c cash.c -o cash.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c char.c -o char.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c chunk.c -o chunk.o\nchunk.c: In function `_ReadChunkArray':\nchunk.c:530: warning: cast from pointer to integer of different size\nchunk.c:561: warning: cast from pointer to integer of different size\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c date.c -o date.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c datetime.c -o datetime.o\ndatetime.c: In function `datetime_date':\ndatetime.c:290: internal error--unrecognizable insn:\n\n[Ed. gcc error begins here... as if you couldn't tell... :) ]\n\n(insn 53 66 67 (clobber (reg:DI 42 $f10)) -1 (insn_list:REG_DEP_ANTI 66 (nil))\n (expr_list:REG_UNUSED (reg:DI 42 $f10)\n (nil)))\ngcc: Internal compiler error: program cc1 got fatal signal 6\nmake[3]: *** [datetime.o] Error 1\n\n[Ed. Okay, we're all done]\n\nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/adt'\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/cache'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c catcache.c -o catcache.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c inval.c -o inval.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c rel.c -o rel.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c relcache.c -o relcache.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c syscache.c -o syscache.o\nsyscache.c: In function `TypeDefaultRetrieve':\nsyscache.c:733: warning: cast to pointer from integer of different size\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c lsyscache.c -o lsyscache.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c fcache.c -o fcache.o\nld -r -o SUBSYS.o catcache.o inval.o rel.o relcache.o syscache.o lsyscache.o fcache.o\nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/cache'\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/error'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. 
-c assert.c -o assert.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c elog.c -o elog.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c exc.c -o exc.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c excabort.c -o excabort.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c excid.c -o excid.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c format.c -o format.o\nld -r -o SUBSYS.o assert.o elog.o exc.o excabort.o excid.o format.o\nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/error'\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/fmgr'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c dfmgr.c -o dfmgr.o\ndfmgr.c:283: warning: no previous prototype for `trigger_dynamic'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c fmgr.c -o fmgr.o\nld -r -o SUBSYS.o dfmgr.o fmgr.o\nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/fmgr'\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/hash'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c dynahash.c -o dynahash.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c hashfn.c -o hashfn.o\nld -r -o SUBSYS.o dynahash.o hashfn.o\nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/hash'\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/init'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c enbl.c -o enbl.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c findbe.c -o findbe.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c globals.c -o globals.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c miscinit.c -o miscinit.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c postinit.c -o postinit.o\nld -r -o SUBSYS.o enbl.o findbe.o globals.o miscinit.o postinit.o\nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/init'\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/misc'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c database.c -o database.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c superuser.c -o superuser.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c trace.c -o trace.o\nld -r -o SUBSYS.o database.o superuser.o trace.o \nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/misc'\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/mmgr'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c aset.c -o aset.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c mcxt.c -o mcxt.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c palloc.c -o palloc.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. 
-c portalmem.c -o portalmem.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c oset.c -o oset.o\nld -r -o SUBSYS.o aset.o mcxt.o palloc.o portalmem.o oset.o\nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/mmgr'\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/sort'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c lselect.c -o lselect.o\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c psort.c -o psort.o\nld -r -o SUBSYS.o lselect.o psort.o\nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/sort'\nmake[3]: Entering directory `/home/postgres/postgresql-v6.4/src/backend/utils/time'\ngcc -I../../../include -I../../../backend -O2 -mieee -Wall -Wmissing-prototypes -I../.. -c tqual.c -o tqual.o\nld -r -o SUBSYS.o tqual.o\nmake[3]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils/time'\ngcc -I../../include -I../../backend -O2 -mieee -Wall -Wmissing-prototypes -I.. -c fmgrtab.c -o fmgrtab.o\nmake[2]: *** No rule to make target `adt/SUBSYS.o', needed by `SUBSYS.o'. Stop.\nmake[2]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend/utils'\nmake[1]: *** [utils.dir] Error 2\nmake[1]: Leaving directory `/home/postgres/postgresql-v6.4/src/backend'\nmake: *** [all] Error 2\n\n\n", "msg_date": "Mon, 16 Nov 1998 18:07:38 -0600 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Problems with Postgres v6.4 on RedHat 5.2" } ]
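A side note on the many "cast from pointer to integer of different size" warnings in the log above: they matter on Alpha, where pointers are 64 bits but int is 32, so a round trip through int can silently drop the high half of an address. A minimal hedged illustration (not code from the source tree):

    #include <stdio.h>

    int
    main(void)
    {
        char  buf[8];
        char *p = buf;
        int   i = (int) p;        /* the warned-about cast; high 32 bits lost */
        char *q = (char *) i;     /* on Alpha, q may no longer equal p */

        printf("p=%p q=%p\n", (void *) p, (void *) q);
        return 0;
    }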
[ { "msg_contents": "\nPardon the message, I received no responses in the other pgsql lists I\nposted this too.\n\nI'm using a fully updated RedHat 5.1 box\n\nflex version 2.5.4\nGNU Bison version 1.25\n\n---------- Forwarded message ----------\nDate: Sun, 15 Nov 1998 22:24:58 -0700 (MST)\nFrom: Dax Kelson <[email protected]>\nTo: [email protected]\nSubject: [ADMIN] New to PostgreSQL, is this a DoS?\n\n\nI compiled and install 6.4 according to the INSTALL doc.\n\nI created a database with \"createdb test\",\n\nrunning as user \"postgres\", I connected \"psql template1\" and ran:\n\nCREATE USER billybob WITH PASSWORD hehe CREATEDB CREATEUSER;\n\nI then modified pg_hba.conf by adding:\n\nhost all 10.0.0.2 255.255.255.255 crypt\n\nI then killed and restarted postmaster with \"-i\".\n\n>>From the remote machine \"10.0.0.2\" I connected to the database \"test\" as\nuser \"billybob\" and that worked.\n\nHowever, I had problems trying to create a table.\n\n>>From that remote machine, I ran:\n\nCREATE TABLE weather (\ncity varchar(80),\ntemp_lo int,\ntemp_hi int,\nprcp real,\ndate date\n);\n\nAnd it supposedly \"worked\", it said \"CREATE\". However, running\n\n\\d returned\n\nCouldn't find any tables, sequences or indices!\n\nThen from the machine actually running PostgreSQL, as user \"postgres\" I\nconnected to \"test\"\n\n\\d returned\n\nCouldn't find any tables, sequences or indices!\n\nSo I tried running the CREATE TABLE weather command again, but it\nreturned:\n\nERROR: weather relation already exists\n\nbut,\n\n\\d returned\n\nCouldn't find any tables, sequences or indices!\n\n>>From remotely as \"billybob\" or localy as \"postgres\" I could succesfully\ndrop this phantom table.\n\nIs this a denial of service? A remote user can connect and create as many\nphantom tables as they want which could possibly interfere with normal\noperation? How would the admin even know the name of the table to drop?\n\nIf I connect locally as user \"postgres\" and I can successfully create and\nsee the table.\n\nIs it normal behaviour that user \"postgres\" must first create the tables\nfor them to be useable?\n\nThanks,\nDax Kelson\nInternet Connect, Inc.\n\n\n\n\n\n", "msg_date": "Mon, 16 Nov 1998 18:43:04 -0700 (MST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "New to PostgreSQL, is this a DoS?" }, { "msg_contents": "[email protected] wrote:\n> \n> Pardon the message, I received no responses in the other pgsql lists I\n> posted this too.\n> \n> I'm using a fully updated RedHat 5.1 box\n> \n> flex version 2.5.4\n> GNU Bison version 1.25\n> \n> ---------- Forwarded message ----------\n> Date: Sun, 15 Nov 1998 22:24:58 -0700 (MST)\n> From: Dax Kelson <[email protected]>\n> To: [email protected]\n> Subject: [ADMIN] New to PostgreSQL, is this a DoS?\n> \n> I compiled and install 6.4 according to the INSTALL doc.\n\nDid you do initdb ?\n\nrunning the new postgres over old 6.3 database could possibly \nexplain the strange behaviour you see\n\n> I created a database with \"createdb test\",\n> \n> running as user \"postgres\", I connected \"psql template1\" and ran:\n> \n> CREATE USER billybob WITH PASSWORD hehe CREATEDB CREATEUSER;\n\n\nHannu\n", "msg_date": "Tue, 17 Nov 1998 09:55:46 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New to PostgreSQL, is this a DoS?" 
}, { "msg_contents": "On Mon, 16 Nov 1998 [email protected] wrote:\n\n> I compiled and install 6.4 according to the INSTALL doc.\n> \n> I created a database with \"createdb test\",\n> \n> running as user \"postgres\", I connected \"psql template1\" and ran:\n> \n> CREATE USER billybob WITH PASSWORD hehe CREATEDB CREATEUSER;\n\nStill being half asleep and just guessing (withoug looking it up), does\nthe CREATEDB CREATEUSER also imply SELECT privileges? IOW, you may \nwanna try GRANT.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n\n", "msg_date": "Tue, 17 Nov 1998 06:17:04 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New to PostgreSQL, is this a DoS?" }, { "msg_contents": "On Tue, 17 Nov 1998, Vince Vielhaber wrote:\n\n> On Mon, 16 Nov 1998 [email protected] wrote:\n> \n> > I compiled and install 6.4 according to the INSTALL doc.\n> > \n> > I created a database with \"createdb test\",\n> > \n> > running as user \"postgres\", I connected \"psql template1\" and ran:\n> > \n> > CREATE USER billybob WITH PASSWORD hehe CREATEDB CREATEUSER;\n> \n> Still being half asleep and just guessing (withoug looking it up), does\n> the CREATEDB CREATEUSER also imply SELECT privileges? IOW, you may \n> wanna try GRANT.\n\n>From what I've read, you can't GRANT on a database, it has be a on an\nobject within a database.\n\nIt seems it is the chicken and the egg problem.\n\n\n\n", "msg_date": "Tue, 17 Nov 1998 11:17:33 -0700 (MST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New to PostgreSQL, is this a DoS?" }, { "msg_contents": "On Tue, 17 Nov 1998 [email protected] wrote:\n\n> On Tue, 17 Nov 1998, Vince Vielhaber wrote:\n> \n> > On Mon, 16 Nov 1998 [email protected] wrote:\n> > \n> > > I compiled and install 6.4 according to the INSTALL doc.\n> > > \n> > > I created a database with \"createdb test\",\n> > > \n> > > running as user \"postgres\", I connected \"psql template1\" and ran:\n> > > \n> > > CREATE USER billybob WITH PASSWORD hehe CREATEDB CREATEUSER;\n> > \n> > Still being half asleep and just guessing (withoug looking it up), does\n> > the CREATEDB CREATEUSER also imply SELECT privileges? IOW, you may \n> > wanna try GRANT.\n> \n> >From what I've read, you can't GRANT on a database, it has be a on an\n> object within a database.\n> \n> It seems it is the chicken and the egg problem.\n\nGRANT ALL TO billybob\n\nThat's a GRANT for command permissions which is different from the GRANT\nfor object permissions that you're thinking of. 
Sybase supports the \nabove command I gave, I don't know if PostgreSQL does.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Searchable Campground Listings http://www.camping-usa.com\n \"There is no outfit less entitled to lecture me about bloat\n than the federal government\" -- Tony Snow\n==========================================================================\n\n\n\n", "msg_date": "Tue, 17 Nov 1998 13:41:39 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New to PostgreSQL, is this a DoS?" } ]
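One way to narrow the exposure discussed in this thread is to avoid the catch-all "all" database entry and grant the remote host only the databases it needs. A hedged pg_hba.conf sketch (the "reject" method is assumed to be available in this release -- check the comments in your pg_hba.conf):

    host  test       10.0.0.2  255.255.255.255  crypt
    host  template1  10.0.0.2  255.255.255.255  reject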
[ { "msg_contents": "\nmore...\n\n---------- Forwarded message ----------\nDate: Sun, 15 Nov 1998 22:55:04 -0700 (MST)\nFrom: Dax Kelson <[email protected]>\nTo: [email protected]\nSubject: Re: [ADMIN] New to PostgreSQL, is this a DoS?\n\n\nOn Sun, 15 Nov 1998, Dax Kelson wrote:\n\n> Is this a denial of service? A remote user can connect and create as many\n> phantom tables as they want which could possibly interfere with normal\n> operation? How would the admin even know the name of the table to drop?\n> \n> If I connect locally as user \"postgres\" and I can successfully create and\n> see the table.\n> \n> Is it normal behaviour that user \"postgres\" must first create the tables\n> for them to be useable?\n\nAfter more testing I found futher strange behavior. After locally\nuser \"postgres\" creates any table in the database, any remote users can\nthen create any new table, and it isn't \"phantom\" (can see it/use it).\n\nIs this desirable and/or expected behavior? It doesn't seem to me that it\nis.\n\n\n\n\n\n\n", "msg_date": "Mon, 16 Nov 1998 18:45:11 -0700 (MST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "New to PostgreSQL, is this a DoS?" } ]
[ { "msg_contents": "\nAnother one...\n\n---------- Forwarded message ----------\nDate: Sun, 15 Nov 1998 23:08:27 -0700 (MST)\nFrom: Dax Kelson <[email protected]>\nCc: [email protected]\nSubject: [ADMIN] Permissions/security on pg_* tables?\n\n\nIs it normal/desirable for any user to be able to select (haven't tried\ninsert/update) in the pg_* tables?\n\nIs it possible to GRANT/REVOKE on the pg_* tables? It seems it is. How\ncan you see the permissions if \\z doesn't work on the pg_* tables?\n\nGiven a multi-user environment were each user (and the sysadmin) values\nsecurity quite highly, what is the best way to secure PostgreSQL as\ntightly as possible (not just looking at data in tables, but general\nsnooping around)?\n\nIn pg_hba.conf under \"host\" the second parameter is \"dbname\". Is it\npossible to have a list of databases?\n\nIe:\n\nhost db1,db2,db3 10.0.0.3 255.255.255.255 crypt\n\nOr is it limited to (all|samename|onedbname)?\n\nThanks for your help,\nDax Kelson\nInternet Connect, Inc.\n\n\n\n\n", "msg_date": "Mon, 16 Nov 1998 18:46:00 -0700 (MST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Permissions/security on pg_* tables?" } ]
[ { "msg_contents": "subscribe\n\n", "msg_date": "Mon, 16 Nov 1998 18:36:30 -0800", "msg_from": "\"David N. Cicalo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Subscribe" } ]
[ { "msg_contents": "Hi~ Everyone...\n\nI have some problem... with PostgreSql Version 6.4\n\nI don't know how to use text type data filed...\n\nHm...\n---------------------------------------------------------------\ncreate table test(\n id int primary key,\n data1 text)\n\nHow to use \"insert into test........\" ??\nHow to use \"update test .....\" ???\n\ndata1 size is so!!! Big....~~~ Can't execute query...in C Language....\n\nDo you have some sample code with C Language...???\n\nAnybody.... Please.. Help me...!!\n\n\n\n", "msg_date": "Tue, 17 Nov 1998 14:58:43 +0900", "msg_from": "\"=?euc-kr?B?vNux4r/4?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to use text type field...." }, { "msg_contents": "At 7:58 +0200 on 17/11/98, =?euc-kr?B?vNux4r/4?= wrote:\n\n\n>\n> Hi~ Everyone...\n>\n> I have some problem... with PostgreSql Version 6.4\n>\n> I don't know how to use text type data filed...\n>\n> Hm...\n> ---------------------------------------------------------------\n> create table test(\n> id int primary key,\n> data1 text)\n>\n> How to use \"insert into test........\" ??\n> How to use \"update test .....\" ???\n>\n> data1 size is so!!! Big....~~~ Can't execute query...in C Language....\n>\n> Do you have some sample code with C Language...???\n>\n> Anybody.... Please.. Help me...!!\n\nText type is nothing special. You just use\n\nINSERT INTO TEST (id, data1)\nVALUES ( 12345, 'This is the text I wanted to Enter' );\n\nand\n\nUPDATE TEST\nSET data1 = 'This is the new text I wanted to Enter';\n\nThat's all there is to it. Put the text within single quoute. Escape single\nquotes within the text by doubling them ('' instead of '). The text may\ninclude newlines and everything:\n\nINSERT INTO TESTE (id, data1)\nVALUES (\n123456,\n'This is a three-line\ntext field\nwithin the table' );\n\nJust remember not to pass the maximum row size, which I believe is still 8k\nfor all the fields together.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Tue, 17 Nov 1998 12:12:43 +0200", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] How to use text type field...." } ]
[ { "msg_contents": "Hi all\n\nI'm working on writing a search engine, and need a branching linked list,\nor tree. When I did the message board there was a similar need which was\nover come at the application level. But I don't really want to do it that\nway this time, and at least one other person has asked about how to\nimplment a tree structure of items.\n\nSo, what I decided on was a free-form IP-address-like field, instead of a\nrigid structure 4 levels deep, 255 wide to a level, as is an IP address, I\nthought to have any number of levels each as wide as need be.\n\nHow to impliment? I started off with char(255) and figured on some sort\nof trigger to both generate the next number on insert, and update the\nparent record of the record just inserted to increment a 'next\nnode-number' field. Whould probably hack the .../contrib/spi/* stuff\nagian to make the function to do all this.\n\nAs a tree like structure is occasionally needed by others as well, I am\nwoundering if maybe a better way to impliment it might be as a new data\ntype? Except that I'm not sure of how to do that, or if that would be the\nbest way?\n\nTable example (for those who care):\nCREATE TABLE categories (\n category char(30) NOT NULL,\n pcatid char(255) NOT NULL,\n cat_id char(255) PRIMARY KEY,\n nidsufix int4 DEFAULT 1 NOT NULL,\n UNIQUE ( category, pcatid ));\n\npcatid stands for 'parent category id'.\nnidsufix stands for 'next id sufix'.\n\nSo, the very first record will have pcatid = 0, cat_id = 1, nidsufix = 1.\nIf a child record is then inserted, it's pcatid = 1, cat_id = 1.1 (the\nfirst '1' is the cat_id of the parent, the second '1' is the nidsufix of\nthe parent), nidsufix = 1 *AND* the parent record (cat_id = 1) has to have\nit's nidsufix incremented to 2, thus the next child of '1' would have a\ncat_id of '1.2', a child of that child: cat_id = '1.2.1' and so on.\nThe only limit on both depth and width is the amount of numbers and dots\nthat will fit into a char(255) field.\n\nThanks for any advice\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Tue, 17 Nov 1998 07:54:01 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Tree type, how best to impliment?" }, { "msg_contents": "Hi again\n\nI got to thinking that doing a tree address like this, the parent address\nis all but the last dot-number of the current address, so is redundant,\nas it can be derived from the current address.\n\nSo, the new idea looks something like this:\nCREATE TABLE categories (\n category char(30) NOT NULL,\n cat_id char(255) PRIMARY KEY,\n nidsufix int4 DEFAULT 1 NOT NULL,\n UNIQUE ( category, substr(cat_id, 1,\n lenght(cat_id) - (lenght(cat_id) - strpos(cat_id, '.')))));\n\nTwo problems\n1. functions can not be called from inside of the UNIQUE constraint?\n2. strpos() returns the FIRST '.', and I need the LAST '.'. Is there a\nsimilar function that will return the last position of the substring?\n\nIf both of these can not be resolved, then it will be neccesary to use a\nparent id field, even though that infomation is contained with in the\ncat_id field.\n\nThanks\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! 
No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Tue, 17 Nov 1998 09:55:14 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tree type, how best to impliment?" }, { "msg_contents": "Terry Mackintosh <[email protected]> writes:\n> CREATE TABLE categories (\n> category char(30) NOT NULL,\n> pcatid char(255) NOT NULL,\n> cat_id char(255) PRIMARY KEY,\n> nidsufix int4 DEFAULT 1 NOT NULL,\n> UNIQUE ( category, pcatid ));\n\nOK, let me get this straight ...\n\n1. cat_id is the unique object identifier for the current table row.\n You provide an index on it (via PRIMARY KEY) so it can be used for\n fast lookup.\n2. pcatid is a child node's back-link to its parent node.\n3. nidsufix exists to allow easy generation of the next child ID for\n a given node.\n4. category is what? Payload data? It sure doesn't seem related to\n the tree structure per se.\n\nWhy is \"category, pcatid\" unique? This seems to constrain a parent\nto have only one child per category value --- is that what you want?\nIf so, why not use the category code as the ID suffix, and not have to\nbother with maintaining a next-ID counter?\n\nIn theory pcatid is redundant, since you could form it by stripping the\nlast \".xxx\" section from cat_id. It might be worth storing anyway to\nspeed up relational queries --- eg you'd do\n\tSELECT ... WHERE pcatid = 'something'\nto find the children of a given node. But without an index for pcatid\nit's not clear that's a win. If you make a SQL function parent_ID() to\nstrip the textual suffix, then a functional index on parent_ID(cat_id)\nshould be as fast as an indexed pcatid field for searches, and it'd save\nstorage.\n\n> The only limit on both depth and width is the amount of numbers and dots\n> that will fit into a char(255) field.\n\nIf you use type text instead of a fixed-width char() field, there's no\nlimit to the depth ... and for normal not-too-deep trees it'd save\nmuch storage compared to a fixed-width char(255) field...\n\nA purely stylistic suggestion: IDs of the form \"1.2.3.4\" might be\nmistaken for IP addresses, which of course they ain't. It might save\nconfusion down the road to use a different delimiter. Not slash either\nunless you want the things to look like filenames ... maybe comma or\ncolon?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Nov 1998 10:25:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tree type, how best to impliment? " }, { "msg_contents": "On Tue, 17 Nov 1998, Tom Lane wrote:\n\n> Terry Mackintosh <[email protected]> writes:\n> > CREATE TABLE categories (\n> > category char(30) NOT NULL,\n> > pcatid char(255) NOT NULL,\n> > cat_id char(255) PRIMARY KEY,\n> > nidsufix int4 DEFAULT 1 NOT NULL,\n> > UNIQUE ( category, pcatid ));\n> \n> OK, let me get this straight ...\n> \n> 1. cat_id is the unique object identifier for the current table row.\n> You provide an index on it (via PRIMARY KEY) so it can be used for\n> fast lookup.\n> 2. pcatid is a child node's back-link to its parent node.\n> 3. nidsufix exists to allow easy generation of the next child ID for\n> a given node.\n\nYes to all.\n\n> 4. category is what? Payload data? 
It sure doesn't seem related to\n> the tree structure per se.\n\nYes, this will be a tree of categories for a search engine; it could also be\na message id in a threaded message database.\n\nExample:            Things (1)\n                   /          \\\n           Big (1.1)          Small (1.2)\n          /         \\\n  Cars (1.1.1)   Boats (1.1.2)\n\n\n> Why is \"category, pcatid\" unique? This seems to constrain a parent\n> to have only one child per category value --- is that what you want?\n\nYes.\nIt is very much like a directory structure: one would not want two\ndirectories of the same name both off the same point in the file system.\n\n> If so, why not use the category code as the ID suffix, and not have to\n> bother with maintaining a next-ID counter?\n\ncategory is human readable, for display; the id is not, and when deciding\nwhat the next child's name should be, if not for the next-ID one would\nhave to go count all the other records that have the same parent.\n\n> In theory pcatid is redundant, since you could form it by stripping the\n> last \".xxx\" section from cat_id. It might be worth storing anyway to\n> speed up relational queries --- eg you'd do\n> \tSELECT ... WHERE pcatid = 'something'\n\nYes, I soon realized this :-) but as per my other post, could not figure\nout how to do this for the UNIQUE constraint.\n\n> to find the children of a given node. But without an index for pcatid\n\nI had planned to index it.\n\n> ...\n> > The only limit on both depth and width is the amount of numbers and dots\n> > that will fit into a char(255) field.\n> \n> If you use type text instead of a fixed-width char() field, there's no\n> limit to the depth ... and for normal not-too-deep trees it'd save\n> much storage compared to a fixed-width char(255) field...\n\nYes, I just was not sure how well indexes work with text fields.\n\n> A purely stylistic suggestion: IDs of the form \"1.2.3.4\" might be\n> mistaken for IP addresses, which of course they ain't. It might save\n> confusion down the road to use a different delimiter. Not slash either\n> unless you want the things to look like filenames ... maybe comma or\n> colon?\n\nActually a directory structure is probably the closest analogy, and for\nthat reason I had thought about using slashes.\n\nI had also thought about one field only (text), where the categories would\nbe all chained together delimited with slashes and have a PRIMARY KEY\nindex. That would automate by design all of the above problems. But it\nwould create new ones :-) Like for many deep records, the table would be\nmuch bigger.\n\nAlso, I wanted to use this same concept in other projects, and a one-field\napproach would only be good (maybe) for this project. And if worked out,\nthis could be useful to others as well.\n \nThanks, and have a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Tue, 17 Nov 1998 15:21:11 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tree type, how best to impliment? " }, { "msg_contents": "Terry Mackintosh <[email protected]> writes:\n> On Tue, 17 Nov 1998, Tom Lane wrote:\n>> Why is \"category, pcatid\" unique? 
This seems to constrain a parent\n>> to have only one child per category value --- is that what you want?\n\n> Yes.\n> It is very much like a directory structure, one would not want two\n> directories of the same name both off the same point in the file system.\n\nPrecisely...\n\n>> If so, why not use the category code as the ID suffix, and not have to\n>> bother with maintaining a next-ID counter?\n\n> category is human readable, for display, the id is not, and when deciding\n> what the next child's name should be, if not for the next-ID one would\n> have to go count all the other records that have the same parent.\n\nWhy do you need a next ID at all? Think directory structure. Using\nyour example, this is perfectly valid:\n\n Things\n / \\\n Big Small\n / \\\n Cars Boats\n\nThe index will work just as well (probably better) with key strings\nlike \"Things/Big/Small\" as with key strings like \"1.1.2\". Moreover,\nyou don't need a separate index to enforce uniqueness, and you don't\nneed to update the parent row when adding a child.\n\nYou do need invented IDs if the category items are not necessarily\nunique, but it seems your problem does not need that, so why complicate\nthe concept?\n\n> Yes, I just was not sure how well indexes work with text fields?\n\nI use 'em all the time...\n\n> I had also though about one field only (text), where the categories would\n> be all chained together delimited with slashes and have a PRIMARY KEY\n> index. That would automate by design all of the above problems. But it\n> would creat new ones:-) Like for many deep records, the table would be\n> much bigger.\n\nIf your higher-level nodes have long category names, then the space\nsavings from using an ID instead of a category name might become\ninteresting. But if you were willing to use a fixed-size char(255)\n(in fact two of 'em!) in every tuple for IDs, I don't think you can\ncomplain about the average space cost of this approach...\n\nAnother way to approach this is to give each node a unique serial\nnumber, and use the serial number as the child's back-link:\n\n\tCREATE TABLE tab (\n\t\tnodeID\t\tint4\tserial primary key,\n\t\tparentID\tint4\tnot null, -- nodeID of parent node\n\t\tcategory\ttext\tnot null,\n\t\tunique (parentID, category)\n\t);\n\n(I may not have the \"serial\" syntax quite right, but you get the idea.)\nThis approach is very compact, but a row's position in the hierarchy\nis represented only indirectly --- you have to chase the parentID links\nif you need to build the full pathname given just the nodeID. You can't\nlookup a row directly by its \"category/subcategory/subsubcategory\" path\neither; you need a query for each level in order to fetch the parent\nIDs. But you needed that in your original design too, since there's no\nother way to get the numeric IDs for subcategories.\n\nA further space saving is to use the Postgres OIDs as the row\nidentifiers, but that makes it difficult to dump and reload the table,\nso I don't recommend it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Nov 1998 18:03:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tree type, how best to impliment? " }, { "msg_contents": "Tom Lane wrote:\n> A purely stylistic suggestion: IDs of the form \"1.2.3.4\" might be\n> mistaken for IP addresses, which of course they ain't. It might save\n> confusion down the road to use a different delimiter. Not slash either\n> unless you want the things to look like filenames ... 
maybe comma or\n> colon?\n> \n> regards, tom lane\n\nBut the 'dot' notation also looks like an entry in an SNMP MIB.\nI've been thinking about a similar structure for psql storage\nof a MIB.\n\n-- \n\n--------------\nMark Hollomon\[email protected]\n", "msg_date": "Mon, 23 Nov 1998 09:34:28 -0500", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tree type, how best to impliment?" }, { "msg_contents": "Hi\n\nOn Mon, 23 Nov 1998, Mark Hollomon wrote:\n\n> Tom Lane wrote:\n> > A purely stylistic suggestion: IDs of the form \"1.2.3.4\" might be\n> > mistaken for IP addresses, which of course they ain't. It might save\n> > confusion down the road to use a different delimiter. Not slash either\n> > unless you want the things to look like filenames ... maybe comma or\n> > colon?\n> > \n> > regards, tom lane\n> \n> But the 'dot' notation also looks like an entry in an SNMP MIB.\n> I've been thinking about a similar structure for psql storage\n> of a MIB.\n\nWell, as it logically mimics a directory structure, I finally went with a\n'/' and full names instead of numbers.\nAnd as there were no takers to help improve my understanding of how to\nwork with the SPI stuff, and as the docs on it are not very extensive, I\nfinally did the referincial integrity stuff for cross linking categories \nat the application lever, where it was realatively simple to write.\n'cross links' == symbolic links in a file system. In fact, logically\nspeaking, it is a file system.\n\nI don't know what an 'MIB' is, but if this sort of thing is what you need,\nthen I can work with you, as I already have the details worked out, and I\nwould love to impliment some of the now-app-level-details on the database\nside, where they belong.\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Mon, 23 Nov 1998 21:04:33 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tree type, how best to impliment?" }, { "msg_contents": "Terry Mackintosh wrote:\n> \n> Hi\n> \n> Well, as it logically mimics a directory structure, I finally went with a\n> '/' and full names instead of numbers.\n\nCan't say I blame you.\n\n> And as there were no takers to help improve my understanding of how to\n> work with the SPI stuff, and as the docs on it are not very extensive, I\n> finally did the referincial integrity stuff for cross linking categories\n> at the application lever, where it was realatively simple to write.\n> 'cross links' == symbolic links in a file system. In fact, logically\n> speaking, it is a file system.\n> \n> I don't know what an 'MIB' is, but if this sort of thing is what you need,\n> then I can work with you, as I already have the details worked out, and I\n> would love to impliment some of the now-app-level-details on the database\n> side, where they belong.\n\nMIB stands for Managemnet Information Base.\nThe Simple Network Management Protocol defines a 'database' of\ninformation\nabout the devices that are to managed called a MIB. 
Each device as a\n'OID'\nthat represents a traversal of a tree structure.\nhttp://www.dordt.edu:457/NetAdminG/snmpC.smi.html\nis a very short intro to the tree structure.\nEach node gets a numerical label and an optional alpha label. The\n'official'\nOID for the node is the numeric labels for all ancestor nodes starting\nat\nthe root, strung together with dots between them. (sound familiar?).\nYou can string together the alpha labels to create a symbolic OID. If\nthe\nalpha label is unique, it is even permitted to use just the alpha label\nof\nthe node of interest. This is actually useful. The top layers of the\ntree\nare set by the standards commitees. The local MIB is almost always a\nproper\nsubtree of the 'internet' node of the standard tree. So, you can start\nyour\nnaming at 'internet' instead of having to alwas specify [ iso org dod\ninternet ... ]\n\nWe have a PostgreSQL database that keeps our inventory of routers,\ndesktops,\netc. The MIB however, is editted by hand. My ultimate goal is to be able\nto generate the MIB from data stored in the database.\n\nI'm afraid I am somewhat time constrained at the moment. Haven't even\nupgraded\nto 6.4 yet. My boss seems to think I ought to do what _he_ wants me to\ndo.\nOh, well.\n\n\n--------------\nMark Hollomon\[email protected]\n", "msg_date": "Tue, 24 Nov 1998 08:43:22 -0500", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tree type, how best to impliment?" } ]
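A short usage sketch of the serial-ID design Tom Lane outlines earlier in the thread: resolving a path takes one indexed lookup per level, and listing children is a single scan. Table and column names follow his example; the root's parentID of 0 and the returned IDs are illustrative assumptions:

    SELECT nodeID FROM tab WHERE parentID = 0 AND category = 'Things';  -- say 1
    SELECT nodeID FROM tab WHERE parentID = 1 AND category = 'Big';     -- say 2
    SELECT nodeID FROM tab WHERE parentID = 2 AND category = 'Cars';

    SELECT category FROM tab WHERE parentID = 1;   -- children of 'Big'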
[ { "msg_contents": "I was trying to commit Billy's fixes for mixed-case table names and have\na problem! cvs is core dumping when I try to commit psql.c. I can commit\nother files just fine (or at least did before running into this). The\nproblem is on the REL6_4 branch, and I succeeded in committing the same\npatches to the main branch before running into this problem.\n\nI tried brute-force removing the locks and retrying but get the core\ndump repeatably. I've left the locks in for now until we have a fix.\nbtw, the cvs file is pretty big, but I don't know if that has anything\nto do with it. Any suggestions?\n\nIf it helps, I have a local copy of the cvs tree on my machine (from\nCVSup) and perhaps that would be a good backup copy for the psql.c\narchive file??\n\n - Tom\n\n> cvs commit .\ncvs commit: Examining .\nAbort (core dumped)\n> cvs commit psql.c\ncvs commit: [09:45:39] waiting for thomas's lock in\n/usr/local/cvsroot/pgsql/src/bin/psql\n^Ccvs [commit aborted]: received interrupt signal\n> to /usr/local/cvsroot/pgsql/src/bin/psql\n> dir\ntotal 497\n 33151 drwxrwxr-x 2 thomas pgsql 512 Nov 17 09:44 #cvs.lock/\n 79962 -rw-rw-r-- 1 thomas pgsql 0 Nov 17 09:44\n#cvs.wfl.hub.org.18025\n 81988 -r--r--r-- 1 scrappy pgsql 7725 Nov 4 17:01 Makefile.in,v\n 80185 -r--r--r-- 1 thomas pgsql 403727 Nov 17 09:26 psql.c,v\n", "msg_date": "Tue, 17 Nov 1998 14:55:35 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Ack! Core dump when committing..." } ]
[ { "msg_contents": "\n\tHello\n\n\tI am the fellow trying to build 6.4 for sunos - it seems changes somewhere\nin the past have messed up the sunos4.1.4 build.\n\n\tI have a list to keep -Wall from complaining (just warnings) and have\nadded the needed #defines back for the port.\n\n\tI am down to the build in \n...src/interfaces/libpq\n\n\twhere it tries to do\n\n...\nar crs libpq.a `lorder fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o \nfe-\nlobj.o dllist.o pqsignal.o | tsort`\ntsort: cycle in data\ntsort: fe-connect.o\ntsort: fe-auth.o\ntsort: cycle in data\ntsort: fe-exec.o\ntsort: fe-connect.o\nar: bad option `s'\ngmake[2]: *** [libpq.a] Error 1\n...\n\n\tsince sunos does not have a 's' option for ar I have traced this back\nto Makefile.global which sets\n\nAROPT=crs\n\n\tThe override for sunos (cr) is not invoked. HUMMMM\n\n\tThe 's' option not available means an explicit ranlib is required.\n\n\tIf whoever thinks they understand configure can come up with a patch so\nsunos sets AROPT=cr and does the ranlib I will apply it and pass back the other\npatches for sunos - really not many. It just takes a real long time to build on\na IPC (SS1+). \n\n\tIf no-one speaks up I will see if I can come up with a configure patch, but\nif it is built from another tool then that tool needs to be patched.\n\n-- \n\tStephen N. Kogge\n\[email protected]\n\thttp://www.uimage.com\n\n\n", "msg_date": "Tue, 17 Nov 1998 13:54:58 -0500", "msg_from": "Stephen Kogge <[email protected]>", "msg_from_op": true, "msg_subject": "building 6.4 on sunos 4.1.4" }, { "msg_contents": "Stephen Kogge <[email protected]> writes:\n> \tsince sunos does not have a 's' option for ar I have traced this back\n> to Makefile.global which sets\n> AROPT=crs\n> \tThe override for sunos (cr) is not invoked. HUMMMM\n\nHmm. It sounds like configure is not selecting the right template file,\nbecause both template/sunos4_cc and template/sunos4_gcc contain \"AROPT:cr\".\n\nI think you need to run configure with \"--with-template=sunos4_cc\"\n(or gcc if you prefer) to force it to use the right template. A lot\nof your other problems might go away too ;-).\n\nYou might want to look at config.guess and try to figure out why that's\nnot detecting sunos4 to begin with. Hard to believe that that's\nbroken...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Nov 1998 18:19:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] building 6.4 on sunos 4.1.4 " } ]
[ { "msg_contents": "Hi all this relates to the 'tree type' posting.\n\nI keep crashing the backend with:\n pid = SPI_getvalue(rettuple, tupdesc, pid_index); /* get parent id */\n\nI've checked pid_index via elog() and it is 2, there are 4 fields total.\nAnd rettuple and tupdesc have been used several times earlyer in the code,\nand worked fine. So all veriables seem to be OK.\n\n\nI am unclear if this requiers a SPI_connect() or not?\nThe SPI docs say that some functions do and some functions don't, but is\nnot clear about which do and don't. In any event, I tried it both ways.\n\nExcept for this, I think the function I'm writing is finished.\n\nThanks, have a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Tue, 17 Nov 1998 15:34:05 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "SPI function help needed." } ]
[ { "msg_contents": "Hi all\n\nWhere are the built in function defined at?\nIn particular, strpos()?\n\nThanks, have a great night.\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Tue, 17 Nov 1998 21:15:14 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "where does strpos() come from?" } ]
[ { "msg_contents": "Hi.\n\nI've been doing some more tests with the above configuration, and have\nnoticed the following problems when running the regression tests:\n\n1)\tTHE REGRESSION TEST FOR FLOAT8 IS BROKEN!!!\n\nIndeed, the \"expected\" output for the exp() operator \":\" is brain damaged.\nThis is the query at line 197 of expected/float8.out:\n\nQUERY: SELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\n\nwhich operates on this table:\n\nQUERY: SELECT '' AS five, FLOAT8_TBL.*;\nfive|f1\n----+--------------------\n |0\n |1004.3\n |-34.84\n |1.2345678901234e+200\n |1.2345678901234e-200\n(5 rows)\n\nAs you can see, the query tries to compute exp(1004.3), which should be\nout of range (the maximum computable is exp(709.78271289338402)), and \nexp(1.2345678901234e+200), which also should be out of range (but only a\nbit ;-). So, the query should fail with an out of range error, as does on\nmy system (\"ERROR: exp() result is out of range\"). However, the\n\"expected\" output is:\n\nbad| ?column?\n---+--------------------\n | 1\n |7.39912306090513e-16\n | 0\n | 0\n | 1\n(5 rows)\n\nwhich is clearly incorrect.\n\n2) The tests for char, varchar, select_implicit, select_having and rules\nfail when locale is enabled, and pass when locale is disabled. This is due\nto character comparisons being case insensitive when locale is enabled. I\ndon't think this is the correct behaviour. I'd appreciate any hint on\nwhere to look to investigate this further.\n\n3) The int8 type is completely brain damaged on my system. The output from\nany select that gets an int8 is '4831823328'. Pointers?\n\n4) The int2 and int4 tests fail due to differences in error messages.\nThis is innocuous, but annoying. I've seen that some systems have\ncustomized expected results. I am willing to build these customized\nresults for DU4, if some kind soul wants to instruct me on how to do it.\n\n5) The geometry test fails because of rounding differences on the last\ndigit. I don't know if this is really a problem and/or how to fix it.\n\n6) The inet test fails, but I haven't looked at it yet.\n\n7) The abstime, tinterval and horology tests fail. It seems to be caused\nby incorrect handling of the daylight savings. However, the output seems\nto be \"less incorrect\" than on previous versions.\n\n8) The plpgsql test fails, but I haven't looked at it yet.\n\nWell, this is enough for now. Let me know what you thing about these\nproblems, and let's see how we can fix them.\n\n-------------------------------------------------------------------\nPedro Jos� Lobo Perea Tel: +34 91 336 78 19\nCentro de C�lculo Fax: +34 91 331 92 29\nEUIT Telecomunicaci�n - UPM e-mail: [email protected]\n\n", "msg_date": "Wed, 18 Nov 1998 19:08:26 +0100 (MET)", "msg_from": "\"Pedro J. Lobo\" <[email protected]>", "msg_from_op": true, "msg_subject": "More on 6.4 on DEC Alpha + Digital Unix 4.0d + DEC C compiler" }, { "msg_contents": "> 1) THE REGRESSION TEST FOR FLOAT8 IS BROKEN!!!\n> the \"expected\" output for the exp() operator \":\" is brain damaged.\n\nThe reference platform never lies. Better figure out how to break your\nmachine instead :)\n\n> 2) The tests for char, varchar, select_implicit, select_having and \n> rules fail when locale is enabled, and pass when locale is disabled. \n> This is due to character comparisons being case insensitive when \n> locale is enabled. I don't think this is the correct behaviour. 
I'd \n> appreciate any hint on where to look to investigate this further.\n\nsrc/backend/utils/adt/varlena.c\n\n> 3) The int8 type is completely brain damaged on my system. The output \n> from any select that gets an int8 is '4831823328'. Pointers?\n\nIt should be easy to fix, since you have a real 64-bit machine. Sorry\nI've lost access to my DUnix-4.0 boxes so can't help directly. Look at\nwhat configure decided your int8 setup should be.\n\n> 4) ...\n> 5) ...\n\nBoth are solved with platform-specific \"expected\" results (as would the\nexp \"failure\" earlier, though it should be fixed on the reference\nplatform).\n\n> 7) The abstime, tinterval and horology tests fail. It seems to be \n> caused by incorrect handling of the daylight savings. However, the \n> output seems to be \"less incorrect\" than on previous versions.\n\nThis has always been due to conflicts between the two styles of\ndate/time support on Unix boxes. Perhaps it isn't being configured\ncorrectly?\n\n - Tom\n", "msg_date": "Thu, 19 Nov 1998 03:17:08 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on 6.4 on DEC Alpha + Digital Unix 4.0d + DEC C\n\tcompiler" }, { "msg_contents": "Hi!\n\nOn Thu, 19 Nov 1998, Thomas G. Lockhart wrote:\n> > 2) The tests for char, varchar, select_implicit, select_having and \n> > rules fail when locale is enabled, and pass when locale is disabled. \n> > This is due to character comparisons being case insensitive when \n> > locale is enabled. I don't think this is the correct behaviour. I'd \n> > appreciate any hint on where to look to investigate this further.\n> \n> src/backend/utils/adt/varlena.c\n\n \"character comparisons being case insensitive\"? It is a problem in your\nlocale, not in the comparison code (I hope). There is no such thing as\n\"case-insensitive strcoll\", and I just used strcoll in varlena.c.\n The same problem popped up many times among linux users. Just install\ncorrect locale - and all will run well. There are test programs in\nsrc/test/locale, look carefully. If you can - supply test data for your\nlocale.\n\n(It looks like my messages to pgsql-*@postgresql.org are usually dropped to\nfloor. If anyone see it - please confirm, send to: [email protected]. I\nhave no problems receiving mail from these lists, neither sending to many\nother lists I am on).\n\nOleg.\n---- \n Oleg Broytmann http://members.tripod.com/~phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 19 Nov 1998 12:26:01 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on 6.4 on DEC Alpha + Digital Unix 4.0d + DEC C\n\tcompiler" }, { "msg_contents": "On Thu, 19 Nov 1998, Oleg Broytmann wrote:\n\n>Hi!\n>\n>On Thu, 19 Nov 1998, Thomas G. Lockhart wrote:\n>> > 2) The tests for char, varchar, select_implicit, select_having and \n>> > rules fail when locale is enabled, and pass when locale is disabled. \n>> > This is due to character comparisons being case insensitive when \n>> > locale is enabled. I don't think this is the correct behaviour. I'd \n>> > appreciate any hint on where to look to investigate this further.\n>> \n>> src/backend/utils/adt/varlena.c\n>\n> \"character comparisons being case insensitive\"? It is a problem in your\n>locale, not in the comparison code (I hope). 
There is no such thing as\n>\"case-insensitive strcoll\", and I just used strcoll in varlena.c.\n> The same problem popped up many times among linux users. Just install\n>correct locale - and all will run well. There are test programs in\n>src/test/locale, look carefully. If you can - supply test data for your\n>locale.\n\nWell, after a bit of investigation, now I know where the problem is. When\nyou do a case sensitive comparison, 'A' is considered \"smaller\" than 'a'.\nExcept, of course, if you use Digital's strcoll() (agh!). The comparison\nwhen locale is enabled isn't case insensitive. Simply, the \"sensitivity\"\nis inverted: 'A' is considered *greater* than 'a' (but not equal, as I\nfirst thought). When you use the 'C' locale (that is, no locale), then the\noutput of strcmp() and strcoll() is the same.\n\nSo, this is indeed a Digital Unix problem. I've made the same tests under\nFreeBSD, and they work as expected on it.\n\nThe next question is what to do now. I will send a message to Digital so\nthey can fix this bug in future releases, but what do I do in the\nmeantime? The problem is not dangerous except if you rely on upper-case\nletters being ordered before lower-case. I could make a custom \"expected\"\noutput so that the regression tests do not fail, or we could simply write\na \"README.DigitalUnix\" telling the story. Writing a custom strcoll() seems\na bit overkill. What do you think?\n\n>(It looks like my messages to pgsql-*@postgresql.org are usually dropped to\n>floor. If anyone see it - please confirm, send to: [email protected]. I\n>have no problems receiving mail from these lists, neither sending to many\n>other lists I am on).\n\nYour message made it to the list (I received it two times, one from you\nand the other from the list).\n\n-------------------------------------------------------------------\nPedro José Lobo Perea Tel: +34 91 336 78 19\nCentro de Cálculo Fax: +34 91 331 92 29\nEUIT Telecomunicación - UPM e-mail: [email protected]\n\n", "msg_date": "Thu, 19 Nov 1998 11:37:51 +0100 (MET)", "msg_from": "\"Pedro J. Lobo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More on 6.4 on DEC Alpha + Digital Unix 4.0d + DEC C\n\tcompiler" }, { "msg_contents": "Hi!\n\n A custom strcoll() is the best approach, but if you don't need it, I think\nwriting a README is enough. Locale tests are not in the regression test, so everyone\nshould be happy.\n I didn't include locale tests in the regression test because:\n1) not so many people need locale support\n2) anyway there are so many locale problems that the regression test will fail\nfor many testers.\n Initially I thought \"locale problems\" were only in locale data, but your\ninvestigation reveals there are broken locale functions out there, too.\n\nOn Thu, 19 Nov 1998, Pedro J. Lobo wrote:\n> Well, after a bit of investigation, now I know where the problem is. When\n> you do a case sensitive comparison, 'A' is considered \"smaller\" than 'a'.\n> Except, of course, if you use Digital's strcoll() (agh!). The comparison\n> when locale is enabled isn't case insensitive. Simply, the \"sensitivity\"\n> is inverted: 'A' is considered *greater* than 'a' (but not equal, as I\n> first thought). When you use the 'C' locale (that is, no locale), then the\n> output of strcmp() and strcoll() is the same.\n> \n> So, this is indeed a Digital Unix problem. I've made the same tests under\n> FreeBSD, and they work as expected on it.\n> \n> The next question is what to do now. 
I will send a message to Digital so\n> they can fix this bug in future releases, but what do I do in the\n> meantime? The problem is not dangerous except if you rely on upper-case\n> letters being ordered before lower-case. I could make a custom \"expected\"\n> output so that the regression tests do not fail, or we could simply write\n> a \"README.DigitalUnix\" telling the story. Writing a custom strcoll() seems\n> a bit overkill. What do you think?\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 19 Nov 1998 14:10:39 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on 6.4 on DEC Alpha + Digital Unix 4.0d + DEC C\n\tcompiler" }, { "msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n> \"Pedro J. Lobo\" <[email protected]> wrote:\n>> 1) THE REGRESSION TEST FOR FLOAT8 IS BROKEN!!!\n>> the \"expected\" output for the exp() operator \":\" is brain damaged.\n\n> The reference platform never lies.\n\nIn this case the reference platform is broken, IMHO.\n\nHowever, Pedro's not batting 1.000 today either. The exp() problem\nis not overflow but underflow, because a prior query in the float8\ntest alters the table. At the point where the test in question\nexecutes, the actual contents of the f1 table are\n\nQUERY: SELECT '' AS five, FLOAT8_TBL.*;\nfive|f1 \n----+---------------------\n |0 \n |-34.84 \n |-1004.3 \n |-1.2345678901234e+200\n |-1.2345678901234e-200\n(5 rows)\n\n(taken verbatim from a few lines further down in the \"expected\" output).\n\nThe \"expected\" output is\n\nQUERY: SELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\nbad| ?column?\n---+--------------------\n | 1\n |7.39912306090513e-16\n | 0\n | 0\n | 1\n(5 rows)\n\nThe first two of these are right, and so is the last one, but the\nthird and fourth lines represent underflow. On my machine, when\nthe result of exp(x) is too small to store as a double, the returned\nresult is 0 and errno is set to ERANGE --- and this is the behavior\ndemanded by ANSI C, according to my reference materials.\n\nThe implementation of exp() in float.c reads\n\n#ifndef finite\n\terrno = 0;\n#endif\n\t*result = (float64data) exp(tmp);\n#ifndef finite\n\tif (errno == ERANGE)\n#else\n\tif (!finite(*result))\n#endif\n\t\telog(ERROR, \"exp() result is out of range\");\n\n\nPedro's machine and my machine are obeying the ANSI specification\nand producing the \"exp() result is out of range\" error.\n\nThomas' machine is evidently following the \"ifdef finite\" path.\nZero, however, is finite, so his machine is failing to notice the\nunderflow.\n\nI think we have two possible courses of action here:\n\n1. Follow the ANSI spec and raise an error for exp() underflow.\nThe ERRNO path is already OK for this, but the other would have\nto be made to read\n\tif (!finite(*result) || *result == 0.0)\nand we'd have to fix the expected regress output.\n\n2. Decide that we are smarter than the ANSI C authors and the\ninventors of libm, and that a small exp() result should quietly\nunderflow to zero. 
In that case the ERRNO path would have to read\n\tif (errno == ERANGE && *result != 0.0)\n\nI like choice #1 myself.\n\nBTW, while I was at it I took the time to figure out why the\npow() part of the test was failing for me (I was getting zeroes\ninstead of the expected \"pow() result is out of range\" error).\nTurns out that depending on which HPUX math library version you\nuse, pow() might fail with EDOM rather than ERANGE for negative\ninputs. I'll change the pow() code to check for either errno\nwhen I get a chance.\n\n>> 7) The abstime, tinterval and horology tests fail. It seems to be \n>> caused by incorrect handling of the daylight savings. However, the \n>> output seems to be \"less incorrect\" than on previous versions.\n\nOn some Unix boxes, the standard time library doesn't know about\ndaylight savings time for dates before 1970. This causes localized\ndiscrepancies in the horology results. I don't see any failures\nrelated to this in abstime or tinterval, however.\n\ntinterval used to have problems with outputs appearing in a bogus\nsort order, but that was fixed by some pg_operator patches applied\nonly a week or so before 6.4 release. Did you do an initdb after\ninstalling 6.4? If not then you still have the busted operator\ntable entries...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Nov 1998 11:09:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on 6.4 on DEC Alpha + Digital Unix 4.0d + DEC C\n\tcompiler" }, { "msg_contents": "On Thu, 19 Nov 1998, Tom Lane wrote:\n\n>\"Thomas G. Lockhart\" <[email protected]> writes:\n>> \"Pedro J. Lobo\" <[email protected]> wrote:\n>>> 1) THE REGRESSION TEST FOR FLOAT8 IS BROKEN!!!\n>>> the \"expected\" output for the exp() operator \":\" is brain damaged.\n>\n>> The reference platform never lies.\n>\n>In this case the reference platform is broken, IMHO.\n>\n>However, Pedro's not batting 1.000 today either. The exp() problem\n>is not overflow but underflow, because a prior query in the float8\n>test alters the table. At the point where the test in question\n>executes, the actual contents of the f1 table are\n>\n>QUERY: SELECT '' AS five, FLOAT8_TBL.*;\n>five|f1 \n>----+---------------------\n> |0 \n> |-34.84 \n> |-1004.3 \n> |-1.2345678901234e+200\n> |-1.2345678901234e-200\n>(5 rows)\n>\n>(taken verbatim from a few lines further down in the \"expected\" output).\n\nYep, you are right. I missed the query that multiplies every row in\nFLOAT8_TBL by -1.\n\n>1. Follow the ANSI spec and raise an error for exp() underflow.\n>The ERRNO path is already OK for this, but the other would have\n>to be made to read\n>\tif (!finite(*result) || *result == 0.0)\n>and we'd have to fix the expected regress output.\n>\n>2. Decide that we are smarter than the ANSI C authors and the\n>inventors of libm, and that a small exp() result should quietly\n>underflow to zero. In that case the ERRNO path would have to read\n>\tif (errno == ERANGE && *result != 0.0)\n>\n>I like choice #1 myself.\n\nMe too.\n\n>>> 7) The abstime, tinterval and horology tests fail. It seems to be \n>>> caused by incorrect handling of the daylight savings. However, the \n>>> output seems to be \"less incorrect\" than on previous versions.\n>\n>On some Unix boxes, the standard time library doesn't know about\n>daylight savings time for dates before 1970. This causes localized\n>discrepancies in the horology results. 
I don't see any failures\n>related to this in abstime or tinterval, however.\n\nThe differences appear only when the row(s) correspond to the year 1947,\nand the only difference is between \"PST\" and \"PDT\" at the end of the date\n(I don't remember right now which one is expected).\n\n>tinterval used to have problems with outputs appearing in a bogus\n>sort order, but that was fixed by some pg_operator patches applied\n>only a week or so before 6.4 release. Did you do an initdb after\n>installing 6.4? If not then you still have the busted operator\n>table entries...\n\nThis was not the case. Yes, I did an initdb, and the order of expected and\nreal results was the same.\n\nAfter all this, I would say that 6.4 is quite usable on Digital Unix. I\nwill be moving my production database in the next few weeks.\n\n-------------------------------------------------------------------\nPedro José Lobo Perea Tel: +34 91 336 78 19\nCentro de Cálculo Fax: +34 91 331 92 29\nEUIT Telecomunicación - UPM e-mail: [email protected]\n\n", "msg_date": "Thu, 19 Nov 1998 19:49:11 +0100 (MET)", "msg_from": "\"Pedro J. Lobo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More on 6.4 on DEC Alpha + Digital Unix 4.0d + DEC C\n\tcompiler" }, { "msg_contents": "> > The reference platform never lies.\n> In this case the reference platform is broken, IMHO.\n\nUh, yes. I was hoping that my statement was outrageous enough to be\nprima facie absurd. Ha Ha. Pretty funny, eh?\n\n> 1. Follow the ANSI spec and raise an error for exp() underflow.\n> The ERRNO path is already OK for this, but the other would have\n> to be made to read\n> if (!finite(*result) || *result == 0.0)\n> and we'd have to fix the expected regress output.\n> 2. Decide that we are smarter than the ANSI C authors and the\n> inventors of libm, and that a small exp() result should quietly\n> underflow to zero. In that case the ERRNO path would have to read\n> if (errno == ERANGE && *result != 0.0)\n> I like choice #1 myself.\n\nOK, sounds good.\n\n> BTW, while I was at it I took the time to figure out why the\n> pow() part of the test was failing for me (I was getting zeroes\n> instead of the expected \"pow() result is out of range\" error).\n> Turns out that depending on which HPUX math library version you\n> use, pow() might fail with EDOM rather than ERANGE for negative\n> inputs. I'll change the pow() code to check for either errno\n> when I get a chance.\n\nHmm. Any chance of making that HP-specific? It would be a shame to make\nevery platform test for two values on every calculation...\n\nRegards.\n\n - Tom\n", "msg_date": "Tue, 24 Nov 1998 02:35:41 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on 6.4 on DEC Alpha + Digital Unix 4.0d + DEC C\n\tcompiler" }, { "msg_contents": "\"Thomas G. Lockhart\" <[email protected]> writes:\n>> 1. 
Follow the ANSI spec and raise an error for exp() underflow.\n>> The ERRNO path is already OK for this, but the other would have\n>> to be made to read\n>> if (!finite(*result) || *result == 0.0)\n>> and we'd have to fix the expected regress output.\n\n> OK, sounds good.\n\nOK, I'll do something about it this weekend, unless someone beats me\nto it.\n\n>> BTW, while I was at it I took the time to figure out why the\n>> pow() part of the test was failing for me (I was getting zeroes\n>> instead of the expected \"pow() result is out of range\" error).\n>> Turns out that depending on which HPUX math library version you\n>> use, pow() might fail with EDOM rather than ERANGE for negative\n>> inputs. I'll change the pow() code to check for either errno\n>> when I get a chance.\n\n> Hmm. Any chance of making that HP-specific? It would be a shame to make\n> every platform test for two values on every calculation...\n\nAFAICS, *any* error out of the pow() ought to be treated the same.\nSo what I was actually planning to do was\n\n\terrno = 0;\n\tresult = pow(...);\n\tif (errno != 0)\n\t\tELOG(...);\n\nwhich is probably a cycle or two faster than what we have, since\ntesting against zero is usually a shade cheaper than comparison\nto a nonzero constant.\n\nNot that a cycle or three saved or wasted in a pow() function\nevaluation is going to be significant, or even measurable ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Nov 1998 10:55:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More on 6.4 on DEC Alpha + Digital Unix 4.0d + DEC C\n\tcompiler" } ]
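To see choice #1 in isolation, here is a standalone sketch (a plain C program, not the backend's float.c, with elog replaced by fprintf; checked_exp is a hypothetical name) of the errno-plus-zero test settled on above:

#include <stdio.h>
#include <math.h>
#include <errno.h>

/* ANSI C: exp() sets errno = ERANGE on overflow *and* underflow.
 * A finite() test alone misses underflow, because 0.0 is finite,
 * hence the extra result == 0.0 check. */
static double
checked_exp(double x)
{
	double		result;

	errno = 0;
	result = exp(x);
	if (errno == ERANGE || result == 0.0)
		fprintf(stderr, "exp(%g) result is out of range\n", x);
	return result;
}

int
main(void)
{
	printf("%g\n", checked_exp(-34.84));	/* 7.39912e-16, fine */
	checked_exp(-1.2345678901234e+200);		/* underflow -> error */
	checked_exp(1004.3);					/* overflow  -> error */
	return 0;
}

The same errno-first shape covers the HPUX pow() quirk as well: clearing errno before the call and treating any nonzero value afterwards as failure avoids testing for ERANGE and EDOM separately.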
[ { "msg_contents": "Hi\n\nI have the following problem using PostgreSQL 6.4 on RedHat Linux 5.1 \non x86\n\nusing the following table\n\nthplus=> \\d envelope\n\nTable = envelope\n+-------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+-------------------------+----------------------------------+-------+\n| envelope_id | int4 not null default nextval ( | 4 |\n| order_type_id | int4 not null | 4 |\n| envelope_name | varchar() not null | 32 |\n| signed_string | text | var |\n| envelope_comment | text | var |\n| envelope_on_hold | int2 | 2 |\n| envelope_order_count | int4 | 4 |\n| envelope_total | int4 | 4 |\n| envelope_currency | text | var |\n| envelope_modify_time | datetime | 8 |\n| state_id | char() | 1 |\n+-------------------------+----------------------------------+-------+\n\nthplus=> create index envelope_fk2 on envelope(state_id)\n\nI try to use the following query\n\nthplus=>\nexplain \nthplus-> select count(*) from envelope where state_id='H' or\nstate_id='E';\nNOTICE: QUERY PLAN:\n\nAggregate (cost=4.10 size=0 width=0)\n -> Index Scan using envelope_fk2 on envelope (cost=4.10 size=1\nwidth=4)\n\nEXPLAIN\n\nwhen actually running it, I get the following:\n\nthplus=> select count(*) from envelope where state_id='H' or\nstate_id='E';\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally before or\nwhile processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\n\nBut the following query runs fine:\n\nthplu=> select count(*) from envelope where envelope_id=1 or\nenvelope_id=3;\ncount\n-----\n 2\n(1 row)\n\nas well as this\n\nthplus=> select count(*) from envelope where envelope_id=1 or\nstate_id='E';\ncount\n-----\n 12\n(1 row)\n\nand this\n\nthplus=> select count(*) from envelope where state_id='H'\nthplus-> union\nthplus-> select count(*) from envelope where state_id='E';\ncount\n-----\n 11\n 1140\n(2 rows)\n\n\nSo it seems that there is a problem with using indexes in ORs that are\ndefined over text types\n\nthe same crash happened also when using varchar(1) as the type of\nstate_id\n\nBTW, it does not happen when the state_id is first field \n\n--------------\nHannu\n", "msg_date": "Wed, 18 Nov 1998 21:26:40 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in 6.4 release" }, { "msg_contents": "> So it seems that there is a problem with using indexes in ORs that are\n> defined over text types\n> \n> the same crash happened also when using varchar(1) as the type of\n> state_id\n> \n> BTW, it does not happen when the state_id is first field \n\nRecreated. I am on it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 18 Nov 1998 14:56:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug in 6.4 release" }, { "msg_contents": "> Hi\n> \n> I have the following problem using PostgreSQL 6.4 on RedHat Linux 5.1 \n> on x86\n> \n> using the following table\n> \n> thplus=> \\d envelope\n> \n> Table = envelope\n> +-------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +-------------------------+----------------------------------+-------+\n> | envelope_id | int4 not null default nextval ( | 4 |\n> | order_type_id | int4 not null | 4 |\n> | envelope_name | varchar() not null | 32 |\n> | signed_string | text | var |\n> | envelope_comment | text | var |\n> | envelope_on_hold | int2 | 2 |\n> | envelope_order_count | int4 | 4 |\n> | envelope_total | int4 | 4 |\n> | envelope_currency | text | var |\n> | envelope_modify_time | datetime | 8 |\n> | state_id | char() | 1 |\n> +-------------------------+----------------------------------+-------+\n> \n> thplus=> create index envelope_fk2 on envelope(state_id)\n> \n> I try to use the following query\n> \n> thplus=>\n> explain \n> thplus-> select count(*) from envelope where state_id='H' or\n> state_id='E';\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=4.10 size=0 width=0)\n> -> Index Scan using envelope_fk2 on envelope (cost=4.10 size=1\n> width=4)\n> \n> EXPLAIN\n> \n> when actually running it, I get the following:\n> \n> thplus=> select count(*) from envelope where state_id='H' or\n> state_id='E';\n> pqReadData() -- backend closed the channel unexpectedly.\n> This probably means the backend terminated abnormally before or\n> while processing the request.\n> We have lost the connection to the backend, so further processing is\n> impossible. Terminating.\n> \n> \n> But the following query runs fine:\n> \n> thplu=> select count(*) from envelope where envelope_id=1 or\n> envelope_id=3;\n> count\n> -----\n> 2\n> (1 row)\n> \n> as well as this\n> \n> thplus=> select count(*) from envelope where envelope_id=1 or\n> state_id='E';\n> count\n> -----\n> 12\n> (1 row)\n> \n> and this\n> \n> thplus=> select count(*) from envelope where state_id='H'\n> thplus-> union\n> thplus-> select count(*) from envelope where state_id='E';\n> count\n> -----\n> 11\n> 1140\n> (2 rows)\n> \n> \n> So it seems that there is a problem with using indexes in ORs that are\n> defined over text types\n> \n> the same crash happened also when using varchar(1) as the type of\n> state_id\n> \n> BTW, it does not happen when the state_id is first field \n> \n> --------------\n> Hannu\n> \n> \n\nI need help with this one. Attached is a patch that also fails, but it\nlooks closer than the original code. The problem appears to be that I\ncan't get a slot that matches the items of the Var node I am trying to\nevaluate. If I used one that matches the heap tuple, that fails,\nbecause if the index is on the second column of the tuple, the attnum is\n1, while it is actually 2nd in the tuple slot.\n\nDoes anyone know the executor well enough to find me that slot that\nmatches the Var node? 
I can't figure it out.\n\n\n---------------------------------------------------------------------------\n\n\n*** ./backend/executor/nodeIndexscan.c.orig\tFri Nov 20 11:38:27 1998\n--- ./backend/executor/nodeIndexscan.c\tFri Nov 20 13:25:46 1998\n***************\n*** 153,161 ****\n \t\t\t\tfor (prev_index = 0; prev_index < indexstate->iss_IndexPtr;\n \t\t\t\t\t prev_index++)\n \t\t\t\t{\n- \t\t\t\t\tscanstate->cstate.cs_ExprContext->ecxt_scantuple = slot;\n \t\t\t\t\tif (ExecQual(nth(prev_index, node->indxqual),\n! \t\t\t\t\t\t\t\t scanstate->cstate.cs_ExprContext))\n \t\t\t\t\t{\n \t\t\t\t\t\tprev_matches = true;\n \t\t\t\t\t\tbreak;\n--- 153,160 ----\n \t\t\t\tfor (prev_index = 0; prev_index < indexstate->iss_IndexPtr;\n \t\t\t\t\t prev_index++)\n \t\t\t\t{\n \t\t\t\t\tif (ExecQual(nth(prev_index, node->indxqual),\n! \t\t\t\t\t\t\t\tnode->scan.scanstate->cstate.cs_ExprContext))\n \t\t\t\t\t{\n \t\t\t\t\t\tprev_matches = true;\n \t\t\t\t\t\tbreak;\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 20 Nov 1998 13:32:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug in 6.4 release" } ]
[ { "msg_contents": "Is anybody else using Solaris 7? I am using it on an Ultra 10\nand it has broken cvsup. It broke both the version that I built\nas well as the version available from the ftp site. Is it just\nme?\n\nThanks,\n\nMatthew\n----------\nMatthew C. Aycock\nOperating Systems Analyst/Admin, Senior\nDept Math/CS\nEmory University, Atlanta, GA \nInternet: [email protected] \t\t\n\n\n", "msg_date": "Thu, 19 Nov 1998 11:54:11 -0500 (EST)", "msg_from": "\"Matthew C. Aycock\" <[email protected]>", "msg_from_op": true, "msg_subject": "Solaris 7" }, { "msg_contents": "On Thu, 19 Nov 1998, Matthew C. Aycock wrote:\n\n> Is anybody else using Solaris 7? I am using it on an Ultra 10\n> and it has broken cvsup. It broke both the version that I built\n> as well as the version available from the ftp site. Is it just\n> me?\n\nPossibly by the end of next week, I'll have a copy. A friend of mine has\ngot the free version, and he's loaning it to me, so I'll soon find out if\nit is broken on the intel platform.\n\nPeter\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Sat, 21 Nov 1998 12:29:35 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Solaris 7" } ]
[ { "msg_contents": "Hi all\n\nBroder line off topic here, but...\nUpdate: The new search engine I'v been working on is almost done, except\nfor one little (==big:-) thing.\n\nTo do/maintian cross linked categories, it will need a complecated spi\nfunction that can be called by a trigger. The closest thing on hand is\nrefint.c, which will be a starting point.\n\nI'm looking for some one to work with on this, as it would be nice to have\nit done before the year 2000:-)\n\nAny volentiers?\n\nP.S. It can be seen/downloaded at my site.\n\nThanks, and Have a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Thu, 19 Nov 1998 14:13:54 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Search engine almost done, need spi knowlage" } ]
[ { "msg_contents": "I was hoping someone could shed some light on the \nfollowing problem:\n\nAfter increasing the buffer size of the postmaster\nusing the -B option to 256, we suffered a backend\ncrash which caused the backend to crash so hard,\nno further connections could be made.\n\nI rebooted the server (an intranet Web server-based\napplication), and attempted to reaccess the database\nusing the application. The application had problems\nquerying one of the tables, where the query would \nsimply suspend forever. After rebooting again, I\ndecided to run a VACUUM ANALYZE manually (We run it\nvia cron every night), and received the following\nmessage:\n\nNOTICE: AbortTransaction and not in in-progress state\nNOTICE: AbortTransaction and not in in-progress state\n\nAfter perusing Usenet for similar problems, someone\nhad reported that they experienced the problem when\nthey increased their buffer size over 128K. The\nUsenet article was pre-6.4 (9/14/98). I decreased the\nbuffer to 128, restarted the postmaster (I actually\nrebooted), and reran VACUUM ANALYZE without problems.\n\nIs it safe to increase the buffer size above 128K?\nI did this because one of the queries (a 6-way join)\ncaused the backend to report an error telling me\nto increase the buffer size.\n\nThanks for any info, \n\nMarcus Mascari ([email protected])\n\n\n\n\n_________________________________________________________\nDO YOU YAHOO!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 19 Nov 1998 12:04:54 -0800 (PST)", "msg_from": "Marcus Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "AbortTransaction" } ]
[ { "msg_contents": "Please tell me ,\n1. Actually I have to store 2048 to 27584 short integers\ninto the one\nfield of each record. The number of data is not fixed. Presently I am\nstoring them in a file and then importing that file into the database\nand at the time of retrieval I am exporting that file from database. The\npage size of postgres is 8K Bytes. So it is not better according to the\nmemory consumption.\n The second option is array but the record or tuple size\nin\npostgres is limited to 8K. It is not sufficient to store 27584 data of\ntype short integers. So please tell me how can I store this much number\nof data in a single record.\n-- What are other suitable types for me. Please tell me.\n\n2.Can we change page size of database. \n-- \n\t\t\t\t\tHitesh kumar Gulati \n\t\t\t\t\tE-mail-:[email protected]\n\t\t\t\t\tPh. (079)-2864023 (O)\n\t\t\t\t\tEngineer - SC\n\t\t\t\t\tInstitute For Plasma Research\n\t\t\t\t\tNear Indira Bridge\n\t\t\t\t\tBhat Gandhinagar(GUJARAT)\n", "msg_date": "Thu, 19 Nov 1998 17:25:50 -0500", "msg_from": "Hitesh Kumar Gulati <[email protected]>", "msg_from_op": true, "msg_subject": "Problem With Postgres" } ]
[ { "msg_contents": "I was just told in a completely different subject that someone has problems\nwith 6.3.2 running wild. \n\nThey use PHP and have many persistant connections. Sometimes the following\nmessages pop up:\n\nNOTICE: PortalHeapMemoryFree: 0x81c98e0 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x81c98b0 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x81c9428 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x81c93f8 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x81c9140 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x81c1620 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x81c9620 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x81c9550 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x81c8de0 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x81c8dc0 not in alloc set!\nNOTICE: SIAssignBackendId: discarding tag 2147475715\nFATAL 1: Backend cache invalidation initialization failed\nNOTICE: SIAssignBackendId: discarding tag 2147475715\nFATAL 1: Backend cache invalidation initialization failed\nNOTICE: SIAssignBackendId: discarding tag 2147475715\n\nWithout ipcclean there is no way to re-start the postmaster after this.\n\nAt some point (not always though) some postgres processes start eating the\ncom,plete CPU time. The load goes up way too high so that watchdog kicks in,\nwhich is the reason why I heard about this.\n\nDoes this ring a bell for anyone?\n\nMichael\n\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sat, 21 Nov 1998 17:00:33 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with 6.3.2" }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n> I was just told in a completely different subject that someone has problems\n> with 6.3.2 running wild. \n> They use PHP and have many persistant connections. Sometimes the following\n> messages pop up:\n> NOTICE: PortalHeapMemoryFree: 0x81c98e0 not in alloc set!\n> [etc]\n\nI haven't seen this myself, but quite a number of serious backend bugs\nwere fixed between 6.3.2 and 6.4. My advice is not to bother trying to\ndebug this problem unless you can reproduce it after installing 6.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Nov 1998 13:46:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problem with 6.3.2 " } ]
[ { "msg_contents": "Dear all,\n\n I have some difficult time in using postgresql 6.4 with chinese BIG5\ncharacters. I am just looking for storing BIG characters in a text field\nand retrieve correctly. I have --enable-mb when I compile. I am on RH5.1\nintel platform, running PG 6.4.\n I just created a testing table test\n create test ( name char(20), age int);\n For most of the characters in BIG5, it works and I can insert\nchinese name into the table, but for some characters, esp my own name,\nit does not work. I have check the problem out . But cannot solve it.\n It is because in my name under BIG5 coding it is \"5cb3 54ab c7b3\"\nor\nin ASCII code \"263 \\ 253 T 263 307\" where two byte is a character.\nThat is \"5cb3\" ('263' '\\' ) is the first character and '54ab' ( '253'\n'T' ) becomes the second character. The problem is that somewhere\nbetween storing the value into database and client frontend (Perl,\nMSAccess) , the '\\' is interpreted and thus the stored value becomes\n\"263 253 T 263 307\" which is distorted.\n I don't know where exactly is the problem as when I use Mysql, it is\nworking fine.\n Could anyone give me some hints or help..\n Your help is really very appreciated!!!!!!!!!!!!!!!!\n\n\nBest Rgds,\nJacky\n\n", "msg_date": "Sun, 22 Nov 1998 02:32:33 +0800", "msg_from": "\"Hui Chun Kit, Jacky\" <[email protected]>", "msg_from_op": true, "msg_subject": "Questions on using multi-byte character in a field of a table (BIG5)" } ]
[ { "msg_contents": "Bruce,\n\nSmall correction to the TODO list. The \"allow ORDER BY a function\"\nfeature already exist in 6.4.\n\n EXAMPLES:\n\n SELECT foo FROM bar ORDER BY upper(foo);\n\n -- or any other expression for that matter\n\n SELECT foo FROM bar ORDER BY att1 || att2;\n\nIt was done as part of the \"allow GROUP BY a function\" feature.\n\n", "msg_date": "Sat, 21 Nov 1998 13:36:23 -0500", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": true, "msg_subject": "Allow ORDER BY a Function" }, { "msg_contents": "Updated.\n\n> Bruce,\n> \n> Small correction to the TODO list. The \"allow ORDER BY a function\"\n> feature already exist in 6.4.\n> \n> EXAMPLES:\n> \n> SELECT foo FROM bar ORDER BY upper(foo);\n> \n> -- or any other expression for that matter\n> \n> SELECT foo FROM bar ORDER BY att1 || att2;\n> \n> It was done as part of the \"allow GROUP BY a function\" feature.\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 21 Nov 1998 15:10:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Allow ORDER BY a Function" } ]
[ { "msg_contents": "Dear all,\n\n I have some difficult time in using postgresql 6.4 with chinese BIG5\n\ncharacters. I am just looking for storing BIG characters in a text field\n\nand retrieve correctly. I have --enable-mb when I compile. I am on RH5.1\n\nintel platform, running PG 6.4.\n I just created a testing table test\n create test ( name char(20), age int);\n For most of the characters in BIG5, it works and I can insert\nchinese name into the table, but for some characters, esp my own name,\nit does not work. I have check the problem out . But cannot solve it.\n It is because in my name under BIG5 coding it is \"5cb3 54ab c7b3\"\nor\nin ASCII code \"263 \\ 253 T 263 307\" where two byte is a character.\nThat is \"5cb3\" ('263' '\\' ) is the first character and '54ab' ( '253'\n'T' ) becomes the second character. The problem is that somewhere\nbetween storing the value into database and client frontend (Perl,\nMSAccess) , the '\\' is interpreted and thus the stored value becomes\n\"263 253 T 263 307\" which is distorted.\n I don't know where exactly is the problem as when I use Mysql, it is\n\nworking fine.\n Could anyone give me some hints or help..\n Your help is really very appreciated!!!!!!!!!!!!!!!!\n\n\nBest Rgds,\nJacky\n\n\n\n\n\n", "msg_date": "Sun, 22 Nov 1998 03:46:27 +0800", "msg_from": "\"Hui Chun Kit, Jacky\" <[email protected]>", "msg_from_op": true, "msg_subject": "Questions on using multi-byte character in a field of a table (BIG5)" }, { "msg_contents": "At 3:46 AM 98.11.22 +0800, Hui Chun Kit, Jacky wrote:\n>Dear all,\n>\n> I have some difficult time in using postgresql 6.4 with chinese BIG5\n>\n>characters. I am just looking for storing BIG characters in a text field\n>\n>and retrieve correctly. I have --enable-mb when I compile. I am on RH5.1\n\nWhat did you choose for an encoding?\nBIG5 is not supported yet in 6.4, sorry.\n\n>intel platform, running PG 6.4.\n> I just created a testing table test\n> create test ( name char(20), age int);\n> For most of the characters in BIG5, it works and I can insert\n>chinese name into the table, but for some characters, esp my own name,\n>it does not work. I have check the problem out . But cannot solve it.\n> It is because in my name under BIG5 coding it is \"5cb3 54ab c7b3\"\n>or\n>in ASCII code \"263 \\ 253 T 263 307\" where two byte is a character.\n>That is \"5cb3\" ('263' '\\' ) is the first character and '54ab' ( '253'\n>'T' ) becomes the second character. The problem is that somewhere\n>between storing the value into database and client frontend (Perl,\n>MSAccess) , the '\\' is interpreted and thus the stored value becomes\n>\"263 253 T 263 307\" which is distorted.\n> I don't know where exactly is the problem as when I use Mysql, it is\n>\n>working fine.\n\nAs you can see the problem is that BIG5 can contain some special characters\nin the second byte that confuse the PostgreSQL parser. We had similar\nexperience with Japanese Shift Jis Code (SJIS). To address the problem\nwe have added a fuctionality to convert between SJIS and EUC_JP (that never\nconfuses the parser thus can be used as one of backend native encoding)\nsomewhere in the backend.\n\nTo solve your problem, there might be 2 solutions:\n\no Use EUC_TW(Chinese EUC Code) instead of BIG5. 6.4 should be happy\n with EUC_TW. To use EUC_TW, just create a new database:\n createdb mydb with encoding='EUC_TW'.\n or do \"configure --with-mb=EUC_TW\" and re-install. then re-create\n the database.\n\n Alternatively, you can use Unicode (UTF-8). 
Use \"UNICODE\" instead of\n \"EUC_TW\" in this case.\n\no Add an encoding conversion module between BIG5 and EUC_TW to PostgreSQL.\n I wish I could do that, but I have no idea how to write it \n (I don't speak Chinese at all). So your contribution would be welcome!\n\nBTW, you said you use perl. I'm surprised to hear that perl\ncan handle BIG5. Is it a modified version (localized version)?\n\nYou also use M$Access. So you must use ODBC, that make me worry about its\nsupport for BIG5. Here in Japan we are using localized version of\nODBC driver that supports SJIS.\n\nWhat I want to say here is that your problem may not be ony PostgreSQL\nitself. I recommend you make sure that your clients can handle\nBIG5.\n--\nTatsuo Ishii\[email protected]\n\n", "msg_date": "Mon, 23 Nov 1998 23:27:04 +0900", "msg_from": "[email protected] (Tatsuo Ishii)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Questions on using multi-byte character in a field\n\tof a table (BIG5)" }, { "msg_contents": "Dear ,\n\n Really thanks for your reply... I have been waiting for reply for a while.\n I realy want to help out with this but I have some problems.\n 1. I am not familiar with ODBC Standards and internal.\n 2. I am not familiar with Language Coding and Convertion.\n\n But I do used to programming in C, C++, Perl and both under UNIX and VC5\n Maybe we can cooperate with some other East Asian Countries (Korean,\nTaiwan) to create customized ODBC driver for each language coding we have.\n Besides, perl do work with Chinese, in fact, I only have problem with ODBC\nnow. When I use bind variables in DBD:Pg, all things work. I think this is\nbecause when assigning variables in perl using single quote instead of double\nquote $var='sth'; would prevent perl from interpreting the value of the\nvariable and thus everything works. Of course, I am using EUC_TW as my default\nencoding during initdb and createdb.\n Can u tell me where can I find more info on language coding and writing\nODBC dirver. I have read the source of the PsqlODBC and I think they are using\nCrygus GNU toolset. Can u tell me more about what you guys have done.\n Thanks.\n\n\nBest Rgds,\nJacky Hui\n\nTatsuo Ishii wrote:\n\n> At 3:46 AM 98.11.22 +0800, Hui Chun Kit, Jacky wrote:\n> >Dear all,\n> >\n> > I have some difficult time in using postgresql 6.4 with chinese BIG5\n> >\n> >characters. I am just looking for storing BIG characters in a text field\n> >\n> >and retrieve correctly. I have --enable-mb when I compile. I am on RH5.1\n>\n> What did you choose for an encoding?\n> BIG5 is not supported yet in 6.4, sorry.\n>\n> >intel platform, running PG 6.4.\n> > I just created a testing table test\n> > create test ( name char(20), age int);\n> > For most of the characters in BIG5, it works and I can insert\n> >chinese name into the table, but for some characters, esp my own name,\n> >it does not work. I have check the problem out . But cannot solve it.\n> > It is because in my name under BIG5 coding it is \"5cb3 54ab c7b3\"\n> >or\n> >in ASCII code \"263 \\ 253 T 263 307\" where two byte is a character.\n> >That is \"5cb3\" ('263' '\\' ) is the first character and '54ab' ( '253'\n> >'T' ) becomes the second character. 
The problem is that somewhere\n> >between storing the value into database and client frontend (Perl,\n> >MSAccess) , the '\\' is interpreted and thus the stored value becomes\n> >\"263 253 T 263 307\" which is distorted.\n> > I don't know where exactly is the problem as when I use Mysql, it is\n> >\n> >working fine.\n>\n> As you can see the problem is that BIG5 can contain some special characters\n> in the second byte that confuse the PostgreSQL parser. We had similar\n> experience with Japanese Shift Jis Code (SJIS). To address the problem\n> we have added a fuctionality to convert between SJIS and EUC_JP (that never\n> confuses the parser thus can be used as one of backend native encoding)\n> somewhere in the backend.\n>\n> To solve your problem, there might be 2 solutions:\n>\n> o Use EUC_TW(Chinese EUC Code) instead of BIG5. 6.4 should be happy\n> with EUC_TW. To use EUC_TW, just create a new database:\n> createdb mydb with encoding='EUC_TW'.\n> or do \"configure --with-mb=EUC_TW\" and re-install. then re-create\n> the database.\n>\n> Alternatively, you can use Unicode (UTF-8). Use \"UNICODE\" instead of\n> \"EUC_TW\" in this case.\n>\n> o Add an encoding conversion module between BIG5 and EUC_TW to PostgreSQL.\n> I wish I could do that, but I have no idea how to write it\n> (I don't speak Chinese at all). So your contribution would be welcome!\n>\n> BTW, you said you use perl. I'm surprised to hear that perl\n> can handle BIG5. Is it a modified version (localized version)?\n>\n> You also use M$Access. So you must use ODBC, that make me worry about its\n> support for BIG5. Here in Japan we are using localized version of\n> ODBC driver that supports SJIS.\n>\n> What I want to say here is that your problem may not be ony PostgreSQL\n> itself. I recommend you make sure that your clients can handle\n> BIG5.\n> --\n> Tatsuo Ishii\n> [email protected]\n\n\n\n", "msg_date": "Tue, 24 Nov 1998 12:04:24 +0800", "msg_from": "Jacky Hui Chun Kit <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Questions on using multi-byte character in a field\n\tof a table (BIG5)" }, { "msg_contents": "> Really thanks for your reply... I have been waiting for reply for a while.\n> I realy want to help out with this but I have some problems.\n> 1. I am not familiar with ODBC Standards and internal.\n> 2. I am not familiar with Language Coding and Convertion.\n>\n> But I do used to programming in C, C++, Perl and both under UNIX and VC5\n> Maybe we can cooperate with some other East Asian Countries (Korean,\n>Taiwan) to create customized ODBC driver for each language coding we have.\n> Besides, perl do work with Chinese, in fact, I only have problem with ODBC\n>now. When I use bind variables in DBD:Pg, all things work. I think this is\n>because when assigning variables in perl using single quote instead of double\n>quote $var='sth'; would prevent perl from interpreting the value of the\n>variable and thus everything works. Of course, I am using EUC_TW as my default\n>encoding during initdb and createdb.\n> Can u tell me where can I find more info on language coding and writing\n>ODBC dirver. I have read the source of the PsqlODBC and I think they are using\n>Crygus GNU toolset. Can u tell me more about what you guys have done.\n> Thanks.\n\nI talked to the author of the localized version of the PostgreSQL ODBC Driver\n(http://www.insightdist.com/download/) and found that he is\ninterested in supporting Big5/EUC_TW. According to him, that\nshouldn't be very difficult. 
I think the major problems are:\n\n(1) the conversion algorithm between EUC_TW and Big5\n(2) test data (both Big5 and EUC_TW)\n(3) testing (we do not understand Chinese)\n\nFor (1), we can refer to\nftp://ftp.ora.com/pub/examples/nutshell/ujip/doc/cjk.inf or similar\nresources on the Internet, and I believe this would not be a big concern.\n\nSo for us the real problems are (2) and (3).\n--\nTatsuo Ishii.\n", "msg_date": "Thu, 03 Dec 1998 10:35:33 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Questions on using multi-byte character in a field of a\n\ttable (BIG5)" } ]
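For problem (1), the actual Big5/EUC_TW mapping is table-driven, but any converter first has to walk the byte structure correctly. A small standalone sketch of that walk follows; the ranges are the classic Big5 ones as given in cjk.inf (worth double-checking against the spec), and note that 0x5c, the backslash, sits squarely inside the legal trailing-byte range, which is exactly what trips the parser:

#include <stdio.h>

/* Classic Big5: lead bytes 0xa1-0xf9, trailing bytes 0x40-0x7e or
 * 0xa1-0xfe. The 0x40-0x7e range overlaps plain ASCII, which is how
 * bytes like 0x5c end up in the middle of a character. */
static int
is_big5_lead(unsigned char c)
{
	return c >= 0xa1 && c <= 0xf9;
}

static int
is_big5_tail(unsigned char c)
{
	return (c >= 0x40 && c <= 0x7e) || (c >= 0xa1 && c <= 0xfe);
}

int
main(void)
{
	/* the three characters of the name from the earlier messages */
	unsigned char name[] = {0xb3, 0x5c, 0xab, 0x54, 0xb3, 0xc7, 0};
	unsigned char *p;

	for (p = name; *p; p += 2)
		printf("%02x%02x: %s\n", p[0], p[1],
			   is_big5_lead(p[0]) && is_big5_tail(p[1])
			   ? "well-formed Big5" : "not Big5");
	return 0;
}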
[ { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > >\n> > > thplus=> select count(*) from envelope where state_id='H' or\n> > > state_id='E';\n> > > pqReadData() -- backend closed the channel unexpectedly.\n> > > This probably means the backend terminated abnormally before or\n> > > while processing the request.\n> > > We have lost the connection to the backend, so further processing is\n> > > impossible. Terminating.\n> > \n> > I need help with this one. Attached is a patch that also fails, but it\n> > looks closer than the original code. The problem appears to be that I\n> > can't get a slot that matches the items of the Var node I am trying to\n> > evaluate. If I used one that matches the heap tuple, that fails,\n> > because if the index is on the second column of the tuple, the attnum is\n> > 1, while it is actually 2nd in the tuple slot.\n> > \n> > Does anyone know the executor well enough to find me that slot that\n> > matches the Var node? I can't figure it out.\n> \n> Hi, Bruce!\n> \n> I'll take a look...\n> \n> Vadim\n> \n\nThanks.\n\n\nFirst, let me give you a reproducible example:\n\t\n\tcreate table test (x int, y text);\n\tinsert into test values (1,'fred');\n\tinsert into test values (1,'barney');\n\tinsert into test select * from test;\n\t..repeat the above several times\n\tcreate index i_test on test(y);\n\tvacuum;\n\tselect count(*) from test where y='fred' or y='barney';\n\t..crash\n\t\nThis only crashes if the OR field is not the first field in the table. \nThe error appears to be that the indexqual has a varno of 1, while the\nheap location of the column in the above example is obviously 2. My\nguess is that the varno matches the index varno, and not the heap varno.\n\n\nI not think previous patch is wrong. The original code is better. You\nhave to compare the qualification against the actual heap row, because\nyou could be using a different index on the second OR than the first:\n\n\nThis new patch uses scankeys instead of throwing the indexqual to the\nexecutor. This is probably more efficient, but I am not sure about the\nother ramifications. It still fails.\n\nObviously, the idea that I want to use indexqual to test a heap tuple\nwas never anticipated by the original coders. I wonder why more people\nhave not reported OR problems?\n\n\nGood luck. I am kind of stumped on this.\n\n---------------------------------------------------------------------------\n\n\nIndex: src/backend/executor/nodeIndexscan.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/executor/nodeIndexscan.c,v\nretrieving revision 1.27\ndiff -c -r1.27 nodeIndexscan.c\n*** nodeIndexscan.c\t1998/09/01 04:28:32\t1.27\n--- nodeIndexscan.c\t1998/11/22 05:30:40\n***************\n*** 39,44 ****\n--- 39,45 ----\n \n #include \"access/skey.h\"\n #include \"access/heapam.h\"\n+ #include \"access/valid.h\"\n #include \"access/genam.h\"\n #include \"utils/palloc.h\"\n #include \"utils/mcxt.h\"\n***************\n*** 95,101 ****\n \tTupleTableSlot *slot;\n \tBuffer\t\tbuffer = InvalidBuffer;\n \tint\t\t\tnumIndices;\n! \n \t/* ----------------\n \t *\textract necessary information from index scan node\n \t * ----------------\n--- 96,104 ----\n \tTupleTableSlot *slot;\n \tBuffer\t\tbuffer = InvalidBuffer;\n \tint\t\t\tnumIndices;\n! \tint\t\t *numScanKeys;\n! \tScanKey *scanKeys;\n! 
\t\n \t/* ----------------\n \t *\textract necessary information from index scan node\n \t * ----------------\n***************\n*** 109,114 ****\n--- 112,119 ----\n \theapRelation = scanstate->css_currentRelation;\n \tnumIndices = indexstate->iss_NumIndices;\n \tslot = scanstate->css_ScanTupleSlot;\n+ \tnumScanKeys = indexstate->iss_NumScanKeys;\n+ \tscanKeys = indexstate->iss_ScanKeys;\n \n \t/* ----------------\n \t *\tok, now that we have what we need, fetch an index tuple.\n***************\n*** 153,161 ****\n \t\t\t\tfor (prev_index = 0; prev_index < indexstate->iss_IndexPtr;\n \t\t\t\t\t prev_index++)\n \t\t\t\t{\n! \t\t\t\t\tscanstate->cstate.cs_ExprContext->ecxt_scantuple = slot;\n! \t\t\t\t\tif (ExecQual(nth(prev_index, node->indxqual),\n! \t\t\t\t\t\t\t\t scanstate->cstate.cs_ExprContext))\n \t\t\t\t\t{\n \t\t\t\t\t\tprev_matches = true;\n \t\t\t\t\t\tbreak;\n--- 158,169 ----\n \t\t\t\tfor (prev_index = 0; prev_index < indexstate->iss_IndexPtr;\n \t\t\t\t\t prev_index++)\n \t\t\t\t{\n! \t\t\t\t\tbool result;\n! \t\t\t\t\t\n! \t\t\t\t\tHeapKeyTest(tuple, RelationGetDescr(heapRelation),\n! \t\t\t\t\t\t\t\tnumScanKeys[prev_index],\n! \t\t\t\t\t\t\t\tscanKeys[prev_index], result);\n! \t\t\t\t\tif (result)\n \t\t\t\t\t{\n \t\t\t\t\t\tprev_matches = true;\n \t\t\t\t\t\tbreak;\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Sun, 22 Nov 1998 00:49:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug in 6.4 release" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> This new patch uses scankeys instead of throwing the indexqual to the\n> executor. This is probably more efficient, but I am not sure about the\n> other ramifications. It still fails.\n\nThis wouldn't handle functional indices in OR...\n\nSo, I added indexqualorig list to the IndexScan node:\n\nindexqual = fix_indxqual_references (indexqualorig)\n\n- indxqualorig' Var-s references heap tuple...\n\nRegression tests are ok.\nPatch made for CURRENT tree - let me know if there will\nbe problems with 6.4...\n\nIt's better to gmake clean in backend dir...\n\nPatch also fixes EXPLAIN for indices in OR: all indices\nused are explained now.\n\nVadim", "msg_date": "Sun, 22 Nov 1998 17:31:49 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug in 6.4 release" }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > This new patch uses scankeys instead of throwing the indexqual to the\n> > executor. This is probably more efficient, but I am not sure about the\n> > other ramifications. It still fails.\n> \n> This wouldn't handle functional indices in OR...\n> \n> So, I added indexqualorig list to the IndexScan node:\n> \n> indexqual = fix_indxqual_references (indexqualorig)\n> \n> - indxqualorig' Var-s references heap tuple...\n> \n> Regression tests are ok.\n\nGreat job! 
Many thanks.\n\nAnd you helped me to prove to my colleagues that usually bugs in\nOpenSource \nsoftware are fixed no later than the next weekend ;)\n\n> Patch made for CURRENT tree - let me know if there will\n> be problems with 6.4...\n\nI didn't run regress but the patches did apply cleanly on 6.4.0 and \nalso all my apps run fine, even the one that did have the OR problems \nafter upgrading 6.3.2 -> 6.4.\n\nAnd of course they run faster ;)\n\nThank you once more !\n\n-------------\nHannu Krosing\n", "msg_date": "Sun, 22 Nov 1998 21:53:37 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug in 6.4 release - thanks for fixing it!" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > This new patch uses scankeys instead of throwing the indexqual to the\n> > executor. This is probably more efficient, but I am not sure about the\n> > other ramifications. It still fails.\n> \n> This wouldn't handle functional indices in OR...\n> \n> So, I added indexqualorig list to the IndexScan node:\n> \n> indexqual = fix_indxqual_references (indexqualorig)\n> \n> - indxqualorig' Var-s references heap tuple...\n> \n> Regression tests are ok.\n> Patch made for CURRENT tree - let me know if there will\n> be problems with 6.4...\n> \n> It's better to gmake clean in backend dir...\n> \n> Patch also fixes EXPLAIN for indices in OR: all indices\n> used are explained now.\n> \n> Vadim\n\nThanks very much. I have applied this to the stable branch, because\nwithout it, we get backend crashes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 23 Nov 1998 22:51:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug in 6.4 release" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Thanks very much. I have applied this to the stable branch, because\n> without it, we get backend crashes.\n\nThanks, Bruce! I have only CURRENT tree...\n\nVadim\n", "msg_date": "Tue, 24 Nov 1998 10:53:53 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug in 6.4 release" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > Thanks very much. I have applied this to the stable branch, because\n> > without it, we get backend crashes.\n> \n> Thanks, Bruce! I have only CURRENT tree...\n\nGlad to hear you are OK about applying it to the stable tree.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 23 Nov 1998 23:04:39 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug in 6.4 release" } ]
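For anyone verifying the fix, Bruce's reproducible example doubles as a test case; after the patch the SELECT should return instead of crashing, and per Vadim's note EXPLAIN should now list every index used by the OR:

    create table test (x int, y text);
    insert into test values (1,'fred');
    insert into test values (1,'barney');
    create index i_test on test (y);
    vacuum;
    explain select count(*) from test where y='fred' or y='barney';
    -- expect an index scan on i_test for each arm of the OR
    select count(*) from test where y='fred' or y='barney';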
[ { "msg_contents": "Hi all,\n\nI'm having a little problem with CVS at the moment.\n\nWhenever I do a cvs update I get the following:-\n\nS-> unlink(pgtclsh/CVS/Entries.Static)\n -> unlink(CVS/Entries.Static)\ncvs server: Updating src/bin/pgtclsh\nS-> rename(CVS/Entries.Backup,CVS/Entries)\nS-> unlink(CVS/Entries.Log)\nS-> unlink(psql/CVS/Entries.Static)\n -> unlink(CVS/Entries.Static)\ncvs server: Updating src/bin/psql\nTerminated with fatal signal 6\n -> unlink_file_dir(src/bin/pg4_dump)\n -> unlink_file_dir(src/bin/monitor)\n\nI've even tried a fresh checkout and I get the same thing.\n\nAnyone have any ideas?\n\nPlatform SPARC-Solaris-2.6.\n\nCVS is..\n\nConcurrent Versions System (CVS) 1.9 (client/server)\n\nKeith.\n\n", "msg_date": "Sun, 22 Nov 1998 17:46:49 +0000 (GMT)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with CVS." }, { "msg_contents": "Keith Parks <[email protected]> writes:\n> I'm having a little problem with CVS at the moment.\n> CVS is..\n> Concurrent Versions System (CVS) 1.9 (client/server)\n\nFWIW, the current cvs release is 1.10 ... might be worth updating\nbefore you spend too much time chasing what could be an already-fixed\nbug...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Nov 1998 13:26:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with CVS. " } ]
[ { "msg_contents": "I noticed that configure.in is set up to pull a fixed set of symbols\nout of the template files, by means of this rather grotty code:\n\nAROPT=`grep '^AROPT:' template/$TEMPLATE | awk -F: '{print $2}'`\nSHARED_LIB=`grep '^SHARED_LIB:' template/$TEMPLATE | awk -F: '{print $2}'`\nCFLAGS=`grep '^CFLAGS:' template/$TEMPLATE | awk -F: '{print $2}'`\nSRCH_INC=`grep '^SRCH_INC:' template/$TEMPLATE | awk -F: '{print $2}'`\nSRCH_LIB=`grep '^SRCH_LIB:' template/$TEMPLATE | awk -F: '{print $2}'`\nUSE_LOCALE=`grep '^USE_LOCALE:' template/$TEMPLATE | awk -F: '{print $2}'`\nDLSUFFIX=`grep '^DLSUFFIX:' template/$TEMPLATE | awk -F: '{print $2}'`\nDL_LIB=`grep '^DL_LIB:' template/$TEMPLATE | awk -F: '{print $2}'`\nYACC=`grep '^YACC:' template/$TEMPLATE | awk -F: '{print $2}'`\nYFLAGS=`grep '^YFLAGS:' template/$TEMPLATE | awk -F: '{print $2}'`\nCC=`grep '^CC:' template/$TEMPLATE | awk -F: '{print $2}'`\nLIBS=`grep '^LIBS:' template/$TEMPLATE | awk -F: '{print $2}'`\n\nIt seems to me that configure ought to just read the selected template\nfile as a series of shell variable assignments and process whatever it\nfinds. That way, a template file could assign to any shell variable\nwithin the configure run, not just these twelve.\n\nThere is already one case where a template file tries to assign a\nsetting that configure isn't noticing: template/sco contains\n\tLEX:lex\nwhich is quite nonfunctional at the moment. Presumably, whoever\nput that in did so with the expectation that configure would honor it.\n\nAny objection to this change?\n\nI also notice that most of the template files contain settings for\n\"ALL\", which isn't being used by configure either. (It won't be even\nafter my proposed change of the template-file-reading code, because ALL\nisn't referenced anywhere in the configure script.) Most of the\ntemplates just set ALL to empty, but template/svr4 contains\n\tALL:+W0\nAnyone have the foggiest what this item is supposed to do? It's\ndefinitely dead code as things stand, but perhaps at one time it meant\nsomething.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Nov 1998 13:47:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Generalizing configure's template-file support" } ]
[ { "msg_contents": "Just noticed that doc/FAQ_Linux and doc/FAQ_Irix have been\nzero-length files in the CVS repository since the end of August.\nThey're also that way in the 6.4 distribution tarball.\n\nThere are still copies on the website (at the URLs mentioned in\ndoc/FAQ), but I dunno where those are stored physically or how\nup-to-date they are.\n\nBTW, the reason I noticed this is that I'm going to try to put together\na FAQ for HPUX installation of Postgres. Is it sufficient to check an\nASCII file into the doc directory, or is there other stuff I should do?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Nov 1998 16:50:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "What happened to FAQ_Linux, FAQ_Irix?" }, { "msg_contents": "On Sun, 22 Nov 1998, Tom Lane wrote:\n\n> Just noticed that doc/FAQ_Linux and doc/FAQ_Irix have been\n> zero-length files in the CVS repository since the end of August.\n> They're also that way in the 6.4 distribution tarball.\n> \n> There are still copies on the website (at the URLs mentioned in\n> doc/FAQ), but I dunno where those are stored physically or how\n> up-to-date they are.\n> \n> BTW, the reason I noticed this is that I'm going to try to put together\n> a FAQ for HPUX installation of Postgres. Is it sufficient to check an\n> ASCII file into the doc directory, or is there other stuff I should do?\n\nNot sure what happened to the ones in the CVS repository, but they\napparently disappeared around the 29th of August...\n\nThe WWW versions are in ~pgsql/ftp/www/html/..\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 22 Nov 1998 18:44:29 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What happened to FAQ_Linux, FAQ_Irix?" }, { "msg_contents": "> > Just noticed that doc/FAQ_Linux and doc/FAQ_Irix have been\n> > zero-length files in the CVS repository since the end of August.\n> Not sure what happened to the ones in the CVS repository, but they\n> apparently disappeared around the 29th of August...\n\ndescription:\n----------------------------\nrevision 1.4\ndate: 1998/08/30 01:40:44; author: momjian; state: Exp; lines: +0\n-650\nUpdate INSTALL, etc. for release 6.4. Update pgaccess to 0.88.\n\nHey Bruce! Can you revert the files to the previous versions? Somehow\nthey were zeroed on one of your mega-commits...\n\n - Tom\n", "msg_date": "Tue, 24 Nov 1998 03:17:10 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What happened to FAQ_Linux, FAQ_Irix?" }, { "msg_contents": "> > > Just noticed that doc/FAQ_Linux and doc/FAQ_Irix have been\n> > > zero-length files in the CVS repository since the end of August.\n> > Not sure what happened to the ones in the CVS repository, but they\n> > apparently disappeared around the 29th of August...\n> \n> description:\n> ----------------------------\n> revision 1.4\n> date: 1998/08/30 01:40:44; author: momjian; state: Exp; lines: +0\n> -650\n> Update INSTALL, etc. for release 6.4. Update pgaccess to 0.88.\n> \n> Hey Bruce! Can you revert the files to the previous versions? Somehow\n> they were zeroed on one of your mega-commits...\n\nDone. 
Not sure how that happened.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 23 Nov 1998 22:49:51 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What happened to FAQ_Linux, FAQ_Irix?" }, { "msg_contents": "> Just noticed that doc/FAQ_Linux and doc/FAQ_Irix have been\n> zero-length files in the CVS repository since the end of August.\n> They're also that way in the 6.4 distribution tarball.\n> \n> There are still copies on the website (at the URLs mentioned in\n> doc/FAQ), but I dunno where those are stored physically or how\n> up-to-date they are.\n> \n> BTW, the reason I noticed this is that I'm going to try to put together\n> a FAQ for HPUX installation of Postgres. Is it sufficient to check an\n> ASCII file into the doc directory, or is there other stuff I should do?\n\nFixed. Both trees.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 23 Nov 1998 22:50:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What happened to FAQ_Linux, FAQ_Irix?u" } ]
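For the archives, "reverting to the previous versions" is mechanical once the last good revision is known; a sketch using the revision numbers from the log excerpt above (1.4 being the emptied one):

    cvs update -p -r 1.3 doc/FAQ_Linux > doc/FAQ_Linux
    cvs commit -m "restore FAQ_Linux, emptied in rev 1.4" doc/FAQ_Linux

cvs update -p prints the old revision to stdout without setting a sticky tag, so the commit simply creates a 1.5 with the 1.3 contents.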
[ { "msg_contents": "I got a note from someone about problems installing Postgres 6.4\non HPUX, one of which was that HP's yacc wouldn't process gram.y\ndue to table overflow. That's hardly news ... but a distribution\ntarball shouldn't require gram.y to be processed; it should contain\nusable output files, no?\n\nLooking into it, I find that the 6.4 tarball contains files with the\nfollowing timestamps:\n\n1998-10-29 23:54 postgresql-v6.4/src/backend/parser/gram.c\n1998-10-14 11:56 postgresql-v6.4/src/backend/parser/gram.y\n1998-09-30 01:48 postgresql-v6.4/src/backend/parser/parse.h\n\nSince parser/Makefile has the dependency\n\ngram.c parse.h: gram.y\n $(YACC) $(YFLAGS) $<\n\nthe fact that parse.h is back-dated means that installers of 6.4 will\nhave to process gram.y.\n\nIn short, parse.h needs to be 'touch'ed in the repository.\n\nI did that for the REL6_4 branch, but I wanted to raise a flag here\nfor updaters of the grammar: make sure that parse.h gets committed\nwhen gram.c does. You may need to use \"cvs commit -f\" to force a\ncommit even though parse.h hasn't changed ... that looks to be the\ncause of this particular glitch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Nov 1998 17:17:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Timestamp glitch in 6.4 tarball: gram.y dependencies not committed" } ]
[ { "msg_contents": "I've just checked in a rewrite of configure's method for guessing\nthe right template file to use when --with-template is not provided.\n\nThe old method was to look for an exact match to the $host value\n(from config.guess) in template/.similar, and if that wasn't found\nto strip all version information (digits and dots at the end) from\n$host and try to match that against prefixes of entries in .similar.\nThis is clearly broken, because given .similar entries like\n\tsparc-sun-sunos4=sunos4_gcc\n\tsparc-sun-sunos5=solaris_sparc_gcc\nit is incapable of correctly choosing the one to use for a $host\nvalue like \"sparc-sun-sunos4.1.3\". After you strip the \"4.1.3\"\nthere is no basis for choosing the right .similar entry.\n\nWhat I checked in tries for an exact match, and then tries to match\n.similar entries to prefixes of the $host string. This is fairly\nobviously the right way round, IMHO. I was able to remove some\nredundant entries in .similar after making the change; for instance\n\ti386-pc-bsdi3.0=bsdi_2.1\n\ti386-pc-bsdi3.1=bsdi_2.1\ncollapse to\n\ti386-pc-bsdi3=bsdi_2.1\nwhich has some chance of working on bsdi 3.2 as well, if there is such\na thing. \n\n*However*, I'm not too sure that the shell-script constructs I used\nare 100% portable. If you find that configure now goes belly-up when\nyou leave off --with-template, please let me know.\n\nI checked this into the 6.4 tree as well, since it is in response\nto a recent complaint that 6.4 fails to configure correctly on\nSunOS 4.1.3...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Nov 1998 00:02:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "configure template-matching change" }, { "msg_contents": "On Mon, 23 Nov 1998, Tom Lane wrote:\n\n> *However*, I'm not too sure that the shell-script constructs I used\n> are 100% portable. If you find that configure now goes belly-up when\n> you leave off --with-template, please let me know.\n> \n> I checked this into the 6.4 tree as well, since it is in response\n> to a recent complaint that 6.4 fails to configure correctly on\n> SunOS 4.1.3...\n\n\tPlease reverse it from the v6.4 tree until such a point in time\nthat you are certain that it *is* going to work for all platforms. The\nv6.4 is meant to be stable at all times, and, from your para two up, you\naren't certain what platforms this change will break...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 23 Nov 1998 01:20:17 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] configure template-matching change" } ]
[ { "msg_contents": "Marc, please apply this to the 6.4 stable tree. I think we need it to\nprevent backend crashes with OR.\n\nMy cvs on stable is not working either.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nBruce Momjian wrote:\n> \n> This new patch uses scankeys instead of throwing the indexqual to the\n> executor. This is probably more efficient, but I am not sure about the\n> other ramifications. It still fails.\n\nThis wouldn't handle functional indices in OR...\n\nSo, I added indexqualorig list to the IndexScan node:\n\nindexqual = fix_indxqual_references (indexqualorig)\n\n- indxqualorig' Var-s references heap tuple...\n\nRegression tests are ok.\nPatch made for CURRENT tree - let me know if there will\nbe problems with 6.4...\n\nIt's better to gmake clean in backend dir...\n\nPatch also fixes EXPLAIN for indices in OR: all indices\nused are explained now.\n\nVadim", "msg_date": "Mon, 23 Nov 1998 01:12:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug in 6.4 release (fwd)" } ]
[ { "msg_contents": "I am getting signal 10/bus error on cvs checkouts. Looks like others\nare having the same problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 23 Nov 1998 01:14:07 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "cvs problem" }, { "msg_contents": ">\n> I am getting signal 10/bus error on cvs checkouts. Looks like others\n> are having the same problem.\n\n Right. Happens in pgsql/bin/psql and seems to be at psql.c.\n\n There are many rfl files in the repositories directory.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 23 Nov 1998 12:48:56 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs problem" }, { "msg_contents": "\nI'm on top of it...saw the problem last night, haven't been able, yet, to\ndetermine the why...\n\n\nOn Mon, 23 Nov 1998, Bruce Momjian wrote:\n\n> I am getting signal 10/bus error on cvs checkouts. Looks like others\n> are having the same problem.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 23 Nov 1998 08:54:27 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cvs problem" }, { "msg_contents": "On Mon, 23 Nov 1998, Jan Wieck wrote:\n\n> >\n> > I am getting signal 10/bus error on cvs checkouts. Looks like others\n> > are having the same problem.\n> \n> Right. Happens in pgsql/bin/psql and seems to be at psql.c.\n> \n> There are many rfl files in the repositories directory.\n\nThe rfl files are what is left behind after the core dump...\n\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 23 Nov 1998 08:55:14 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs problem" }, { "msg_contents": " It's very interesting. I've used gdb -core with one of the\n core files on the server. It happens in rcs.c:935, there are\n sanity checks that the pointer is valid. I can't find the\n cvs-1.9.27 sources on hub.org, so I grabbed my private copy\n local.\n\n It happend while cvs parses the head of psql.c,v (the rcsbuf\n contains just the first portion of that file). The file looks\n O.K. for me, and an rlog is happy with it, so I don't think\n the repository file is corrupt.\n\n But it's really funny. The char *ptr is set from rcsbuf->ptr\n 2 lines above. The values in rcsbuf look O.K. (gdb print\n *rcsbuf), but ptr definitely is wrong. I cannot imagine how\n this happens.\n\n Marc, could you take a look at it and eventually upgrade to\n cvs-1.10 on hub.org?\n\n BTW: I removed most of the read lock files in pgsql/bin/psql\n (after being sure they are dead ones). 
But the problem\n remains.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 23 Nov 1998 14:22:17 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs problem" }, { "msg_contents": "> Right. Happens in pgsql/bin/psql and seems to be at psql.c.\n> There are many rfl files in the repositories directory.\n\nYes. The problem occurred when I was doing a commit toward the end of\nlast week. I got a segfault from cvs when committing a change to psql.c\n(and another file which seems to have been committed OK).\n\nI posted a private message at the time to Marc and Bruce with the\nproblem statement, and have not seen a response. I also didn't know if\nthe problem would impact others trying to checkout. It apparently does\n:(\n\nI got a cvs core dump when trying to commit that file, even after\nblowing away the cvs lock files and retrying. Don't know what to try\nnext, but had hoped that I would hear back from Marc. fwiw that file is\nvery large compared to many of the other files...\n\n - Tom\n\nbtw, would my cvsup'd cvs repository file be an appropriate substitute\nto help fix the server?\n", "msg_date": "Tue, 24 Nov 1998 03:28:39 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs problem" }, { "msg_contents": "On Tue, 24 Nov 1998, Thomas G. Lockhart wrote:\n\n> > Right. Happens in pgsql/bin/psql and seems to be at psql.c.\n> > There are many rfl files in the repositories directory.\n> \n> Yes. The problem occurred when I was doing a commit toward the end of\n> last week. I got a segfault from cvs when committing a change to psql.c\n> (and another file which seems to have been committed OK).\n> \n> I posted a private message at the time to Marc and Bruce with the\n> problem statement, and have not seen a response. I also didn't know if\n> the problem would impact others trying to checkout. It apparently does\n> :(\n> \n> I got a cvs core dump when trying to commit that file, even after\n> blowing away the cvs lock files and retrying. Don't know what to try\n> next, but had hoped that I would hear back from Marc. fwiw that file is\n> very large compared to many of the other files...\n> \n> - Tom\n> \n> btw, would my cvsup'd cvs repository file be an appropriate substitute\n> to help fix the server?\n\n\tAlready fixed...upgraded cvs to 1.10 on the server ...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 24 Nov 1998 00:22:36 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs problem" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> On Tue, 24 Nov 1998, Thomas G. Lockhart wrote:\n> \n> Already fixed...upgraded cvs to 1.10 on the server ...\n\nDo I need to do anything to get my psql patch committed? How about the\nleft-over lock files? Or does cvs think it is already in and everything\nOK?\n\n - Tom\n", "msg_date": "Tue, 24 Nov 1998 05:41:59 +0000", "msg_from": "\"Thomas G. 
Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs problem" }, { "msg_contents": "> Do I need to do anything to get my psql patch committed? How about the\n> left-over lock files? Or does cvs think it is already in and \n> everything OK?\n\nI checked the cvs server, the locks were already cleaned up and I\nre-committed my changes (successfully this time :) So things seem to be\nback to normal. Thanks for the help scrappy...\n\n - Tom\n", "msg_date": "Tue, 24 Nov 1998 05:57:12 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cvs problem" } ]
[ { "msg_contents": "\nThings should be okay now, just tried it on two different\nmachines...upgraded to cvs 1.10 ..\n\nLet me know of any further problems...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 23 Nov 1998 09:36:43 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "CVS problem ... fixed..." }, { "msg_contents": "> \n> \n> Things should be okay now, just tried it on two different\n> machines...upgraded to cvs 1.10 ..\n> \n> Let me know of any further problems...\n\n Just removed the last stale #cvs.rfl.hub.org.9687 (from anoncvs).\n Looks O.K. now - thanks.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n", "msg_date": "Mon, 23 Nov 1998 15:44:41 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] CVS problem ... fixed..." } ]
[ { "msg_contents": "Hello!\n\n=> SELECT datetime(current_date, '11:00');\ndatetime \n----------------------------\nMon 23 Nov 11:00:00 1998 MSK\n(1 row)\n\n I got correct result (11:00:00 MSK) on Sparc Solaris 2.5.1 and RedHat 4.2.\nI got incorrect result (17:00:00 MSK) on RedHat 5.1 and Debian 2.0. (All\npostgres 6.4)\n\n It looks like problem with libc5-to-glibc2 transition, but I don't see\nhow to test who cause the problem - Postgres or Linux. How can I test?\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n\n", "msg_date": "Mon, 23 Nov 1998 18:17:43 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": true, "msg_subject": "Date/time problems on Linux" } ]
[ { "msg_contents": "Hi all,\n\nI would like to greet averybody on the list; this is my first mail.\n\nAnyway... I need to use the UPDATE ... WHERE CURRENT OF ... clausule,\nbut it doesn't work:\n\noeptest=> update shorttest set b='blahblah' where current of c1;\nERROR: CURRENT used in non-rule query\n\nWhat does it mean? IT would be urgent...\n\nMy other problem is with the views: it seems not work if there is an\naggregate command in the SELECT. It's only my experience, or others\nnoticed it also? (The details are on the pgsql-novice (subject: view\nand aggregate command problem))\n\nThanks, and sorry, if the questions were answered an hour ago; I have\nsubscribed to the list 5 minutes ago... :-)\n\nCircum\n\n __ @\n/ \\ _ _ Engard Ferenc\nl | ( \\ / | | (\\/) mailto:[email protected]\n\\__/ | | \\_ \\_/ I I http://pons.sote.hu/~s-fery\n\n\n", "msg_date": "Mon, 23 Nov 1998 20:20:56 +0100 (CET)", "msg_from": "Engard Ferenc <[email protected]>", "msg_from_op": true, "msg_subject": "cursor and update + view" }, { "msg_contents": "> My other problem is with the views: it seems not work if there is an\n> aggregate command in the SELECT. It's only my experience, or others\n> noticed it also? (The details are on the pgsql-novice (subject: view\n> and aggregate command problem))\n\n Aggregates in view's are still buggy. It is possible to\n change the grouping when doing a join from such a view and\n another relation in a way, that crashes the backend.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 24 Nov 1998 10:31:38 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [SQL] cursor and update + view" }, { "msg_contents": "Jan Wieck wrote:\n> \n> > My other problem is with the views: it seems not work if there is an\n> > aggregate command in the SELECT. It's only my experience, or others\n> > noticed it also? (The details are on the pgsql-novice (subject: view\n> > and aggregate command problem))\n> \n> Aggregates in view's are still buggy. It is possible to\n> change the grouping when doing a join from such a view and\n> another relation in a way, that crashes the backend.\n\nWe'll have to implement subqueries in FROM someday - this would\nallow handle aggregates in views...\n\nVadim\n", "msg_date": "Tue, 24 Nov 1998 17:33:29 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] cursor and update + view" }, { "msg_contents": "Vadim wrote:\n\n>\n> Jan Wieck wrote:\n> >\n> > > My other problem is with the views: it seems not work if there is an\n> > > aggregate command in the SELECT. It's only my experience, or others\n> > > noticed it also? (The details are on the pgsql-novice (subject: view\n> > > and aggregate command problem))\n> >\n> > Aggregates in view's are still buggy. It is possible to\n> > change the grouping when doing a join from such a view and\n> > another relation in a way, that crashes the backend.\n>\n> We'll have to implement subqueries in FROM someday - this would\n> allow handle aggregates in views...\n\n You're right. 
In the current implementation, any rangetable\n entry (RTE) that is finally (after rewrite) used in any Var\n node of the querytree results in a scan of that relation.\n Having an RTE that contains its own subquery would\n materialize the view internally and help us out.\n\n This kind of subquery RTE will also be the base for functions\n that return result sets (SETOF complex type). These are\n broken too.\n\n It will be a little tricky to pull out the qualifications\n that restrict the subquery RTE, so the view must not entirely\n get materialized. At least any constant expression compared\n against one of the view's attributes must go into it.\n\n There is another thing that I would like to have. The current\n rule system tries to turn a qualification that uses an\n aggregate column of a view into a subquery. This is because\n the planner doesn't support plain aggregate expressions in\n the qualification. If it would be possible to have another\n type of Var node that points to a targetlist entry, we could\n put the aggregates from the qualification into junk TLE's.\n TGL - are you listening - I think it's your code I'm uglifying\n here :-).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 24 Nov 1998 13:04:57 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] cursor and update + view" }, { "msg_contents": "Jan Wieck wrote:\n> \n> There is another thing that I would like to have. The current\n> rule system tries to turn a qualification that uses an\n> aggregate column of a view into a subquery. This is because\n> the planner doesn't support plain aggregate expressions in\n> the qualification. If it would be possible to have another\n> type of Var node that points to a targetlist entry, we could\n> put the aggregates from the qualification into junk TLE's.\n\nThis thing would also be handled by subqueries in FROM!\nHaving support in planner/executor for queries like this:\n\nselect * from A, (select c, max(d) as m from B group by c) SQ\nwhere SQ.c = A.x and SQ.m = A.y\n\nrule system will be able to put _any_ VIEW' query into\nFROM clause...\n\nVadim\n", "msg_date": "Tue, 24 Nov 1998 21:00:39 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] cursor and update + view" }, { "msg_contents": "Vadim wrote:\n\n> This thing would also be handled by subqueries in FROM!\n> Having support in planner/executor for queries like this:\n>\n> select * from A, (select c, max(d) as m from B group by c) SQ\n> where SQ.c = A.x and SQ.m = A.y\n>\n> rule system will be able to put _any_ VIEW' query into\n> FROM clause...\n\n Possible - but IMHO the wrong thing to do. As it is now for a\n view that has no aggregate, the rule system rewrites the\n query to something that is the same as if the user resolved\n the view definition by hand and used all the real tables\n instead. 
Have the following:\n\n CREATE TABLE t1 (a int4, b int4);\n CREATE TABLE t2 (a int4, c int4);\n CREATE TABLE t3 (a int4);\n CREATE VIEW v1 AS SELECT t1.a, t1.b, t2.c\n FROM t1, t2 WHERE t1.a = t2.a;\n\n Now do a\n\n SELECT t3.a, v1.b, v1.c FROM t3, v1\n WHERE t3.a = v1.a;\n\n The current rewrite system builds a querytree that is exactly\n what would have been produced by the parser if you had\n typed\n\n SELECT t3.a, t1.b, t2.c FROM t3, t1, t2\n WHERE t3.a = t1.a AND t1.a = t2.a;\n\n Now the planner/optimizer has _ALL_ the tables that need to\n be scanned and _ALL_ the qualifications in _ONE_ querytree.\n It is the job of the optimizer to decide which is the best\n join path for this access. To make a good decision, it needs\n all this information plus the VACUUM statistics.\n\n If we put any view into a subquery RTE, we force the planner\n to materialize the view and do a nestloop over t3 and\n materialized v1 where possibly using t1 or t2 as the\n outermost scanrelation would be better.\n\n Stonebraker & Co were absolutely right when they spoke about\n productional rule systems. And what PostgreSQL does now is\n how I understood them.\n\n \"Production rule systems are conceptually simple, but\n there are many subtle points involved in actually using\n them.\"\n -- Stonebraker\n\n I think the different grouping requirements for subsets of\n data when using aggregate columns in views is just one of the\n problems he addressed with the above statement.\n\n We should build a subquery RTE only if Query->hasAggs is\n true.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 24 Nov 1998 16:50:11 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] cursor and update + view" }, { "msg_contents": "Jan Wieck wrote:\n> \n> > rule system will be able to put _any_ VIEW' query into\n> > FROM clause...\n> \n> Possible - but IMHO the wrong thing to do. As it is now for a\n\n\"Will be able\" doesn't mean \"will DO\" -:))\nI just said that subqueries in FROM will resolve all\nproblems with aggregates in VIEWs.\n\n> If we put any view into a subquery RTE, we force the planner\n> to materialize the view and do a nestloop over t3 and\n ^^^^^^^^^^^\nDo you mean creating some tmp table etc?\nNo - it's not required.\n\n> materialized v1 where possibly using t1 or t2 as the\n> outermost scanrelation would be better.\n\n SELECT t3.a, v1.b, v1.c \n FROM t3, \n (SELECT t1.a, t1.b, t2.c FROM t1, t2 WHERE t1.a = t2.a) v1\n WHERE t3.a = v1.a;\n\ncan be planned as\n\n Nestloop\n\tSubPlan\n ...what is costless for subquery...\n\tSeq/Index scan on t3\n\n- no materialization...\n\nOn the other hand, as we talk about query optimization - why\nshould the rule system do the optimizer's work? Why not just put\n_any_ VIEW' query into FROM and let the optimizer decide whether\nthe query can be rewritten as a join or not? Ppl do strange\nthings sometimes -:) Sometimes they use subqueries in\nWHERE while joins could be used and our optimizer doesn't \ntry to catch this. I know that Sybase does.\nAnd, imho, we should implement this ... 
sometime -:))\n\nVadim\n", "msg_date": "Wed, 25 Nov 1998 10:26:42 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] cursor and update + view" }, { "msg_contents": "Vadim wrote:\n\n>\n> Jan Wieck wrote:\n> >\n> > If we put any view into a subquery RTE, we force the planner\n> > to materialize the view and do a nestloop over t3 and\n> ^^^^^^^^^^^\n> Do you mean creating some tmp table etc?\n> No - it's not required.\n\n Sometimes a sortset is required (grouping, nesting etc.).\n With materialize I meant the same thing the executor does for\n a scan, merge or iter node. They return in-memory tuples from\n a relation or a temp file. In our new case it's mostly the\n same as a scan node that does the view selection inside. And\n it returns the same tuples as a SELECT * from the view would.\n That's internal, on the fly materialization of the view.\n\n> On the other hand, as we talk about query optimization - why\n> should the rule system do the optimizer's work? Why not just put\n> _any_ VIEW' query into FROM and let the optimizer decide whether\n> the query can be rewritten as a join or not? Ppl do strange\n> things sometimes -:) Sometimes they use subqueries in\n> WHERE while joins could be used and our optimizer doesn't\n> try to catch this. I know that Sybase does.\n> And, imho, we should implement this ... sometime -:))\n\n Depends on where the optimization is done. If we do it on the\n parsetree (Query struct), it's the job of the rule system.\n The optimizer does not have to modify the parsetree. If it is\n done on the way from the parsetree to the plan, it is the job\n of the optimizer.\n\n If it is possible to do it on the parsetree, I would do it\n there.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 25 Nov 1998 11:37:04 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] cursor and update + view" }, { "msg_contents": "Jan Wieck wrote:\n> \n> > On the other hand, as we talk about query optimization - why\n> > should the rule system do the optimizer's work? Why not just put\n> > _any_ VIEW' query into FROM and let the optimizer decide whether\n> > the query can be rewritten as a join or not? Ppl do strange\n> > things sometimes -:) Sometimes they use subqueries in\n> > WHERE while joins could be used and our optimizer doesn't\n> > try to catch this. I know that Sybase does.\n> > And, imho, we should implement this ... sometime -:))\n> \n> Depends on where the optimization is done. If we do it on the\n> parsetree (Query struct), it's the job of the rule system.\n> The optimizer does not have to modify the parsetree. If it is\n> done on the way from the parsetree to the plan, it is the job\n> of the optimizer.\n> \n> If it is possible to do it on the parsetree, I would do it\n> there.\n\nSubquery --> Join transformation/optimization implemented in\nrule system will be used for Views only. 
Being implemented\nin optimizer it will be used in all cases.\n\nVadim\n", "msg_date": "Thu, 26 Nov 1998 10:27:44 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] cursor and update + view" }, { "msg_contents": "Vadim wrote:\n\n> Subquery --> Join transformation/optimization implemented in\n> rule system will be used for Views only. Being implemented\n> in optimizer it will be used in all cases.\n\n Right for the current rule system, because it looks only for\n pg_rewrite entries to apply. Since it is called for every\n optimizable statement, it could do this as a last step on the\n querylist to be returned. Even if there were no rules to\n apply.\n\n I still think that it's the right place to do it. Transforming a\n subselect into a join means to modify the user's input, doing\n something different finally. This is kind of rewriting like\n for view rules. 
Reading the debug output \"After rewriting\"\n> someone should be able to see which relations get scanned,\n> where and which of their attributes are used for what.\n...\n> Could you give me an example where a subquery could get\n> translated into a join that produces exactly the same output,\n> no matter if there are duplicates or not?\n\n\nSybase' example:\n\nselect title, price \nfrom titles \nwhere price = \n (select price \n from titles \n where title = \"Straight Talk About Computers\")\n\nselect t1.title, t1.price \nfrom titles t1, titles t2\nwhere t1.price = t2.price and \n t2.title = \"Straight Talk About Computers\"\n\n- yes, executor should ensure that there was only one\nrecord with t2.title equal \"Straight Talk About Computers\",\nthis could be flagged in some way...\n\nOn the other hand, I'm not sure that we have to concern\nabout such cases and about \"unwise\" queries at all -:)\nSo, forgive me for noise -:)\n\nThere are many other things to do and handling subqueries\nin FROM is one of them.\n\nVadim\n", "msg_date": "Tue, 01 Dec 1998 11:26:18 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] cursor and update + view" } ]
[ { "msg_contents": "All,\n I've got a small problem.\n\nSay you have tables A and B. They both have a userid column. Table B was \nselected and previously filled with entries from table A. Lets say about \n2000 out of 40,000. Now \nI want to select everything from A that isn't in B, so about 38,000 \nentries.\n\nI can't seem to get the MINUS to work within a select statement all I \never get are \nparse errors. Is this even implemented yet?\n\nI then tried using a 'not in' clause.\n\nselect * from A where user_id not in (select * from B);\n\nThis is VERY slow, and examining the explain output tells me that it will \nuse the user_id index for table B, but a sequential scan of A even though \nA has an index for the user_id column.\n\nAm I missing something? Does anyone have any ideas?\n\nThanks for any help.\n\n-=pierre\n", "msg_date": "Mon, 23 Nov 1998 22:53:57 -0600", "msg_from": "pierre <[email protected]>", "msg_from_op": true, "msg_subject": "MINUS and slow 'not in'" }, { "msg_contents": "At 6:53 +0200 on 24/11/98, pierre wrote:\n\n\n> I then tried using a 'not in' clause.\n>\n> select * from A where user_id not in (select * from B);\n>\n> This is VERY slow, and examining the explain output tells me that it will\n> use the user_id index for table B, but a sequential scan of A even though\n> A has an index for the user_id column.\n\nFirst, I assume you meant \"select user_id from B\", not \"select *\", or\nsomething is very strange here.\n\nAnyway, suppose you had two tables. How would you go about doing this while\nusing *both* indices? I don't think it's possible. You have a condition\nthat says: include each row which doesn't meet a certain criteria. The only\nway to do it is to scan each row, get the value of its user_id, and then go\nto be, and use its index to find if the user_id we already have is NOT\nthere.\n\nYou can use an index only when you have a specific value to search for. A\nNOT IN clause doesn't supply a specific value, so you can't use the outer\nindex.\n\nYou may try to convert the NOT IN to a NOT EXISTS clause, and see if it\nimproves anything, but it will still require a sequential search.\n\nIf I needed this query often, I'd try to optimize it by adding a column to\ntable A, marking the records that match, and then selecting all the records\nwhich don't match. I'm not sure whether one can index a boolean field in\nthe current version of PostgreSQL, but if not, you can probably use a char\nfield instead. I suppose you can make sure this column stays up-to-date\nwith rules, or do the update as a preparatory step: Update all to \"N\", then\nupdate all the fields that match to \"Y\" with a join. VACUUM, ANALYZE, and\nthen you can start selecting.\n\nThis requires two sequential scans plus vacuums before you start selecting,\nso it may not be worth it if you only select once by this criteria... 
I'd\ngo with the NOT IN or NOT EXISTS solution, which gives you a sequential\nscan with minimal search over the index of table B.\n\nSELECT * FROM A\nWHERE NOT EXISTS (\n SELECT * FROM B\n WHERE B.user_id = A.user_id\n);\n\nBy the way, if you have any specific criteria on A, besides the NOT EXISTS\nor NOT IN, they may cause an index scan on A as well.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Tue, 24 Nov 1998 11:14:42 +0200", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] MINUS and slow 'not in'" }, { "msg_contents": "> \n> At 6:53 +0200 on 24/11/98, pierre wrote:\n> \n> > I then tried using a 'not in' clause.\n> >\n> > select * from A where user_id not in (select * from B);\n> >\n> > This is VERY slow, and examining the explain output tells me that it will\n> > use the user_id index for table B, but a sequential scan of A even though\n> > A has an index for the user_id column.\n> \n> First, I assume you meant \"select user_id from B\", not \"select *\", or\n> something is very strange here.\n> \n> You may try to convert the NOT IN to a NOT EXISTS clause, and see if it\n> improves anything, but it will still require a sequential search.\n> \n> SELECT * FROM A\n> WHERE NOT EXISTS (\n> SELECT * FROM B\n> WHERE B.user_id = A.user_id\n> );\n> \n> By the way, if you have any specific criteria on A, besides the NOT EXISTS\n> or NOT IN, they may cause an index scan on A as well.\n> \n\nOk...remember that I have table A with 40k rows, and B with 2k. What I \nwant to really get out of the query are 2k rows from A that are not contained\nin B. After reading your email, I thought about using a cursor and only \nfetching the first 2k rows that match the query. This helped tremendously\nin that it didn't try and return all 38k rows. However I now need to\ntake the results of the fetch and dump it into table B. \n\nHow can one use fetch to insert?\n\nI've tried...\n\ninsert into B\nfetch 2000 from fubar;\n\nWhich just gives a parser error. There is very little documentation on \ncursors written up that I can find. I've even searched the email archives.\nIdeas?\n\n-=pierre\n", "msg_date": "24 Nov 1998 07:57:56 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [SQL] MINUS and slow 'not in'" }, { "msg_contents": "At 16:57 +0200 on 24/11/98, [email protected] wrote:\n\n\n> I've tried...\n>\n> insert into B\n> fetch 2000 from fubar;\n>\n> Which just gives a parser error. There is very little documentation on\n> cursors written up that I can find. I've even searched the email archives.\n> Ideas?\n\nWell, this usage of a cursor is not supported, as far as I know. However,\nif you have 6.4 (do you?), you can use SET QUERY_LIMIT to limit the number\nof rows fetched from the SELECT. I suppose you can set it back after you do\nthe INSERT.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Tue, 24 Nov 1998 17:08:28 +0200", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] MINUS and slow 'not in'" }, { "msg_contents": "> \n> At 16:57 +0200 on 24/11/98, [email protected] wrote:\n> \n> \n> > I've tried...\n> >\n> > insert into B\n> > fetch 2000 from fubar;\n> >\n> > Which just gives a parser error. There is very little documentation on\n> > cursors written up that I can find. 
I've even searched the email archives.\n> > Ideas?\n> \n> Well, this usage of a cursor is not supported, as far as I know. However,\n> if you have 6.4 (do you?), you can use SET QUERY_LIMIT to limit the number\n> of rows fetched from the SELECT. I suppose you can set it back after you do\n> the INSERT.\n> \n\nYeah I've got 6.4. I tried:\n\nset query_limit to 2000; \n\nand got:\n\nERROR: parser: parse error at or near \"2000\"\n\nIdeas?\n\n-=pierre\n", "msg_date": "24 Nov 1998 08:31:01 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [SQL] MINUS and slow 'not in'" }, { "msg_contents": "At 17:31 +0200 on 24/11/98, [email protected] wrote:\n\n\n>\n> Yeah I've got 6.4. I tried:\n>\n> set query_limit to 2000;\n>\n> and got:\n>\n> ERROR: parser: parse error at or near \"2000\"\n>\n> Ideas?\n\nWell, I don't have 6.4 as yet. However, reading the manpage, I surmise that\nthe value passed is a string value (To support \"SET QUERY_LIMIT TO\n'unlimited'\"). Thus, try quoting the 2000...\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Tue, 24 Nov 1998 17:32:59 +0200", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] MINUS and slow 'not in'" }, { "msg_contents": "> Yeah I've got 6.4. I tried:\n>\n> set query_limit to 2000;\n>\n> and got:\n>\n> ERROR: parser: parse error at or near \"2000\"\n>\n> Ideas?\n\n I think you must use '2000' instead.\n\n Anyway, the \"set query_limit\" will disappear again in v6.5\n (at least) because it potentially can break rewrite rule\n system semantics.\n\n Instead you might want to use the LIMIT/OFFSET patch I've\n created in the v6.4 feature patch. This is what will be in\n v6.5. I'll take a look how I can put it onto the server and\n drop a note here after.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 24 Nov 1998 17:07:06 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [SQL] MINUS and slow 'not in'" }, { "msg_contents": ">\n> > Yeah I've got 6.4. I tried:\n> >\n> > set query_limit to 2000;\n> >\n> > and got:\n> >\n> > ERROR: parser: parse error at or near \"2000\"\n> >\n> > Ideas?\n>\n> I think you must use '2000' instead.\n>\n> Anyway, the \"set query_limit\" will disappear again in v6.5\n> (at least) because it potentially can break rewrite rule\n> system semantics.\n>\n> Instead you might want to use the LIMIT/OFFSET patch I've\n> created in the v6.4 feature patch. This is what will be in\n> v6.5. I'll take a look how I can put it onto the server and\n> drop a note here after.\n\n Done. URL is:\n\n ftp://ftp.postgresql.org/pub/patches/v6.4-feature-patch.tar.gz\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 25 Nov 1998 12:08:46 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "LIMIT patch available (was: Re: [SQL] MINUS and slow 'not in')" }, { "msg_contents": "Hi everybody.\n\nI've been trying to connect a win-machine-toaster-oven to my\npostgresql server, across the psqlodbc ( from Insight )\n... but when I get to the \"File DSN\" configuration, everything\nfails: it tells me that I have a user authentication problem on\nthe settings. But my pg_hba.conf doesn't have restrictions\nabout the users or host that can connect.\n\nI'm using ver 6.3.2 with the latest patched psqlodbc,\nwhat do you think I should do ???\n\nDo I have to get Win 98 ??? : )\n\nMerry .Xmas\n\n--\n\"Cuando Microdog ladra, es porque vamos caminando...\"\n M.C.S. et al\n\nDavid Martinez Cuevas\n Office 622-60-72 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n Home 565-25-17 \"Eat Linux, Drink Linux... SMOKE LINUX \"\n @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n\n\n\n", "msg_date": "Tue, 15 Dec 1998 02:24:24 +0000", "msg_from": "David Martinez Cuevas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] MINUS and slow 'not in'" }, { "msg_contents": "Have you checked out:\n\n http://www.insightdist.com/psqlodbc/psqlodbc_faq.html#dsnsetup\n\nHave you established any connections from a remote machine?\n\nIf not, be sure to use the \"-i\" option for postmaster startup.\n\nIf this does not work double check the entries in pg_hba.conf.\n\nAs far as OS's go; Win(95)|(98)|(NT) are all verified as well as ports to\nsome Unix boxes. Win3.1 is right out. (Please, no requests to port to\n16 bit)\n\nDavid Martinez Cuevas wrote:\n\n> Hi everybody.\n>\n> I've been trying to connect a win-machine-toaster-oven to my\n> postgresql server, across the psqlodbc ( from Insight )\n> ... but when I get to the \"File DSN\" configuration, everything\n> fails: it tells me that I have a user authentication problem on\n> the settings. But my pg_hba.conf doesn't have restrictions\n> about the users or host that can connect.\n>\n> I'm using ver 6.3.2 with the latest patched psqlodbc,\n> what do you think I should do ???\n>\n> Do I have to get Win 98 ??? : )\n>\n> Merry .Xmas\n>\n> --\n> \"Cuando Microdog ladra, es porque vamos caminando...\"\n> M.C.S. et al\n>\n> David Martinez Cuevas\n> Office 622-60-72 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n> Home 565-25-17 \"Eat Linux, Drink Linux... SMOKE LINUX \"\n> @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n\n", "msg_date": "Tue, 15 Dec 1998 09:55:24 -0500", "msg_from": "David Hartwig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] MINUS and slow 'not in'" } ]
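Spelled out, the two server-side pieces David Hartwig mentions look like this (the address and mask are placeholders, and 'trust' is only sensible for testing):

    # start the postmaster with TCP/IP connections enabled
    postmaster -i -D /usr/local/pgsql/data

    # pg_hba.conf then needs a host line covering the Windows client, e.g.
    host    all    192.168.1.0    255.255.255.0    trust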
[ { "msg_contents": "Huh,\n\nI am creating a big view and then I get:\n\nPostgresSQL error message: ERROR:\nDefineQueryRewrite: rule plan string too big.\n\nThis is on 6.3.2.\n\nAny hints?\n\nDirk", "msg_date": "Tue, 24 Nov 1998 10:04:47 +0100 (CET)", "msg_from": "Dirk Lutzebaeck <[email protected]>", "msg_from_op": true, "msg_subject": "Fw: rule plan string too big." } ]
[ { "msg_contents": "\nUmm, I am creating a big view and then I get:\n\nPostgresSQL error message: ERROR:\nDefineQueryRewrite: rule plan string too big.\n\nThis is on 6.3.2.\n\nAny hints?\n\nDirk\n\n", "msg_date": "Tue, 24 Nov 1998 10:08:18 +0100 (CET)", "msg_from": "Dirk Lutzebaeck <[email protected]>", "msg_from_op": true, "msg_subject": "rule plan string too big." } ]
[ { "msg_contents": "> \n> I use postgres v6.4. My standard database is in /usr/local/pgsql/data\n> directory and the PGDATA environment variable points to this database\n> directory. But I'd like to create another database into another\n> directory. So I tried this:\n> killall postmaster\n> initdb --pgdata=/home/postgres/database --that's OK\n> postmaster -i -D /home/postgres/database -S -o -F\n> createdb -D /home/postgres/database gazmuvek --that's not good\n> the answer is:\n> ERROR: Unable to locate path '/home/postgres/database/gazmuvek'\n> This may be due to a missing environment variable in\n> the server\n> \n> After this I tried to type another command:\n> createdb gazmuvek\n> It was successfull, but the database 'gazmuvek' was created into\n> $PGDATA. But it wasn't my desire. I need the database 'gazmuvek' in the\n> directory: /home/postgres/database!\n> \n> \n> What is the solution?\n\n man initlocation\n\n\nJan\n\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n", "msg_date": "Tue, 24 Nov 1998 10:54:15 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] createdb problem" }, { "msg_contents": "I use postgres v6.4. My standard database is in /usr/local/pgsql/data\ndirectory and the PGDATA environment variable points to this database\ndirectory. But I'd like to create another database into another\ndirectory. So I tried this:\nkillall postmaster\ninitdb --pgdata=/home/postgres/database --that's OK\npostmaster -i -D /home/postgres/database -S -o -F\ncreatedb -D /home/postgres/database gazmuvek --that's not good\n the answer is:\nERROR: Unable to locate path '/home/postgres/database/gazmuvek'\n This may be due to a missing environment variable in\nthe server\n\nAfter this I tried to type another command:\ncreatedb gazmuvek\nIt was successfull, but the database 'gazmuvek' was created into\n$PGDATA. But it wasn't my desire. I need the database 'gazmuvek' in the\ndirectory: /home/postgres/database!\n\n\nWhat is the solution?\n\n\n\n", "msg_date": "Tue, 24 Nov 1998 10:13:58 +0000", "msg_from": "Lendvary Gyorgy <[email protected]>", "msg_from_op": false, "msg_subject": "createdb problem" } ]
[ { "msg_contents": ">I use postgres v6.4. My standard database is in /usr/local/pgsql/data\n>directory and the PGDATA environment variable points to this database\n>directory. But I'd like to create another database into another\n>directory. So I tried this:\n>killall postmaster\n>initdb --pgdata=/home/postgres/database --that's OK\n>postmaster -i -D /home/postgres/database -S -o -F\n>createdb -D /home/postgres/database gazmuvek --that's not good\n> the answer is:\n>ERROR: Unable to locate path '/home/postgres/database/gazmuvek'\n> This may be due to a missing environment variable in\n>the server\n>\n>After this I tried to type another command:\n>createdb gazmuvek\n>It was successfull, but the database 'gazmuvek' was created into\n>$PGDATA. But it wasn't my desire. I need the database 'gazmuvek' in the\n>directory: /home/postgres/database!\n>\n>\n>What is the solution?\n\ni'm a bit confused; it seems like you have a secondary storage area\n(/home/postgres/database) and a primary storage area\n(/usr/local/pgsql/data). i think if you have a PGDATA environment variable,\nyou'll be stuck running that instance of the postmaster with a primary\nstorage location given in the evironment variable, even if you try to\noverride it with an absolute path (at least from my limited testing - i\ntried exactly what you did with no success, even though logically it should\nresult in \"a clean slate\" running from the secondary location). and i\nhaven't been able to create a database in a secondary location by specifying\nan absolute path, only with environment variables (as of 6.4, it worked OK\nin 6.3.2, though)\n\nso for you hackers out there: what's going on? it seems that environment\nvariables are the _one true way_ for 6.4, which is a change from 6.3.2. if\nthat's the case, most docs don't reflect it, and if not, then there very\nwell could be some bugs somewhere.\n\non to the solution, though. what you probably want to do is kill the\nserver, set a PGDATA2 environment variable to /home/postgres/database,\nrestart the server (without the -D /home/postgres/database, just let it use\nthe PGDATA location) and then set up the secondary storage area, e.g.:\n\n$ initlocation $PGDATA2\n$ createdb -D PGDATA2 gazmuvek\n\nthis is really the correct way of doing things anyway, if you think through\nit a little bit.\n\n", "msg_date": "Tue, 24 Nov 1998 08:49:15 -0600", "msg_from": "\"Jeff Hoffmann\" <[email protected]>", "msg_from_op": true, "msg_subject": "what's going on? (was: Re: [HACKERS] createdb problem)" }, { "msg_contents": "> ... and i haven't been able to create a database in a secondary \n> location by specifying an absolute path, only with environment \n> variables (as of 6.4, it worked OK in 6.3.2, though)\n> ... it seems that environment variables are the _one true way_ for \n> 6.4, which is a change from 6.3.2. if that's the case, most docs \n> don't reflect it, and if not, then there very well could be some bugs \n> somewhere.\n\nYes, you are right that the default behavior has changed. Allowing\nabsolute path names exposes the Postgres server to security and\nintegrity risks (which may not be entirely alleviated by using\nenvironment variables, but imho it does help). 
The old behavior is\nrecoverable by specifying\n\n #define ALLOW_ABSOLUTE_DBPATHS 1\n\nin your config.h or by specifying\n\n CFLAGS+= -DALLOW_ABSOLUTE_DBPATHS\n\nin your Makefile.custom.\n\nYou are also correct in that the docs don't seem to explicitly discuss\nthis issue though it is hinted at in the Admin Guide chapter on Disk\nManagement (no mention of ALLOW_ABSOLUTE_DBPATHS though). Similar words\nappear in the User's Guide chapter on Database Management. I would have\nguessed that I had added something at the time, but...\n\nIf you are annoyed enough by the lack of information to write some docs\nthen the files to modify are manage.sgml (for the UG) and start-ag.sgml\n(for the AG). Patches gladly accepted :) I can give you back formatted\nversions so you can see how it looks.\n\n> ... this is really the correct way of doing things anyway, if you \n> think through it a little bit.\n\nYour solution is correct.\n\nRegards.\n\n - Tom\n", "msg_date": "Wed, 25 Nov 1998 02:13:08 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what's going on? (was: Re: [HACKERS] createdb problem)" } ]
[ { "msg_contents": "\n\tI isolated a code problem (sunos4.1.X does not have a fflush(NULL)),\nTom Lane is adding a portable workaround. Yes this is an ANSI required\nbut either forget about Sunos4.1.X or use more general code. fflush(NULL)\ncore dumps sunos4.1.X\n\n\tEven with a non core dumping build I am calling on the group\nmemory to suggest what to try next. Sunos 4.1.X can be built and\npostmaster started and a first psql started. On the most simple\nrequest\n\n\\l\n\nthings just hang. With debug on I see a final\n\nenter CommitTransactionCommand()\n\n\tI added a few more diagnostic prints and CommitTransactionCommand\nruns with a\n case TBLOCK_DEFAULT\nand says it is exiting...\n\n\tSuggestions where to look next - if this a known problem let me\nknow where to look.\n\tI can test fixes on both BSD/OS 3.X and BSD/OS 4.0 so I can't go too\nfar from an ANSI/POSIX solution before those builds will break.\n\n\tThe regression test also hang with the CommitTransactionCommand.\n\n\\q works and psql exits and \\? gives the help list\n\nbut...\n\ntemplate1=> \\df\nBackend sent D message without prior T\n\nand we are hung\n\n\tThanks\n\n-- \n\tStephen N. Kogge\n\[email protected]\n\thttp://www.uimage.com\n\n\n", "msg_date": "Tue, 24 Nov 1998 13:47:38 -0500", "msg_from": "Stephen Kogge <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.4.1 schedule SUNOS4.1.X build " }, { "msg_contents": "> \tSuggestions where to look next - if this a known problem let me\n> know where to look.\n> \tI can test fixes on both BSD/OS 3.X and BSD/OS 4.0 so I can't go too\n> far from an ANSI/POSIX solution before those builds will break.\n\nAny chance of testing bsd 3.x and 4.0 to figure out why plpgsql fails\nregression tests on 3.x and not on 4.0.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 22:37:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.1 schedule SUNOS4.1.X build" } ]
[ { "msg_contents": "The original message was received at Tue, 24 Nov 1998 16:05:20 -0800 (PST)\nfrom hotzmac [137.78.84.130]\n\n ----- The following addresses have delivery notifications -----\n<[email protected]> (unrecoverable error)\n\n ----- Transcript of session follows -----\n... while talking to postgresql.org.:\n>>> RCPT To:<[email protected]>\n<<< 550 <[email protected]>... User unknown\n550 <[email protected]>... User unknown\n\nReporting-MTA: dns; hotzsun.jpl.nasa.gov\nReceived-From-MTA: DNS; hotzmac\nArrival-Date: Tue, 24 Nov 1998 16:05:20 -0800 (PST)\n\nFinal-Recipient: RFC822; [email protected]\nAction: failed\nStatus: 5.2.0\nRemote-MTA: DNS; postgresql.org\nDiagnostic-Code: SMTP; 550 <[email protected]>... User unknown\nLast-Attempt-Date: Tue, 24 Nov 1998 16:05:40 -0800 (PST)\n\nReturn-Path: [email protected]\nReceived: from [137.78.84.130] (hotzmac [137.78.84.130]) by\nhotzsun.jpl.nasa.gov (8.7.6/8.7.3) with ESMTP id QAA16406 for\n<[email protected]>; Tue, 24 Nov 1998 16:05:20 -0800 (PST)\nX-Sender: [email protected]\nMessage-Id: <v0313030cb280f439fbdd@[137.78.84.130]>\nMime-Version: 1.0\nContent-Type: text/plain; charset=\"us-ascii\"\nDate: Tue, 24 Nov 1998 16:01:42 -0800\nTo: [email protected]\nFrom: \"Henry B. Hotz\" <[email protected]>\nSubject: Kerberos Conflict\n\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name\t\t:\tHenry B. Hotz\nYour email address\t:\[email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t:SPARC\n\n Operating System (example: Linux 2.0.26 ELF) \t:Solaris 2.5\n\n PostgreSQL version (example: PostgreSQL-6.4) : PostgreSQL-6.4\n\n Compiler used (example: gcc 2.8.0)\t\t:gcc 2.7.2.2\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\nWhen the KTH-KRB package is installed and Postgres compiled with kerberos\nenabled the include files conflict with the system include files for crypt.\nSpecifically the declarations for des_encrypt() conflict between\n/usr/athena/krb.h and /usr/include/crypt.h in interfaces/libpq/fe-auth.c.\n\n\n\n\nPlease describe a way to repeat the problem. Please try to provide a\nconcise reproducible example, if at all possible:\n----------------------------------------------------------------------\nInstall KTH-KRB, and set KRBVERS=4 in Makefile.global. Then do a normal build.\n\n\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\nSeems like if I can disable HAVE_CRYPT_H somehow then it might work, but I\nmay loose the crypt and password authentication mechanisms. Still need to\nworry about conflicting library versions of same-named routines.\n\nI haven't actually done this so any suggestions would be welcome.\n\nIf I get a working kerberos IV mechanism then I don't care if I crypt and\npassword don't work.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n\n\n", "msg_date": "Tue, 24 Nov 1998 17:25:26 -0800", "msg_from": "Mail Delivery Subsystem <[email protected]> (by way\n\tof Henry B. Hotz)", "msg_from_op": true, "msg_subject": "Returned mail: User unknown" } ]
[ { "msg_contents": "Here's a simple constraint check that crashes the 6.4 backend.\n\nFixes [rattling tin cup]?\n\n\tThanx,\n\t<mike\n\ncreate table splatter (\n a int,\n b int,\n check (not (a is null and b is null))\n", "msg_date": "Tue, 24 Nov 1998 19:58:29 -0800", "msg_from": "Mike Meyer <[email protected]>", "msg_from_op": true, "msg_subject": "Constraint check that crashes backend..." } ]
[ { "msg_contents": "I have not been able to resolve the problem with creating triggers on tables \nthat have mixed case names due to a lack of both time and knowledge of the \ncode involved. Can someone familar with that area of code assist by fixing \nthe problem or giving me pointers as to where I should look?\n\nAny help would be appreciated.\n-- \n____ | Billy G. Allie | Domain....: [email protected]\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: [email protected]\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Wed, 25 Nov 1998 01:23:34 -0500", "msg_from": "\"Billy G. Allie\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help with MiXeD cAsE table names and triggers." } ]
[ { "msg_contents": "I apologize. pgsql-questions has disappeared? (out of the loop, I am)\n\n> \n> Hello,\n> \n> My questions are:\n> \t(1) does 6.4 offer speed improvements over 6.3.2?\n> \t(2) does 6.4 offer stability improvements over 6.3.2?\n> \t(3) does 6.4 support query lengths > 8192, or data blocks > 8192 (other than large objects)?\n> \t\n> If anyone knows, I thank you.\n> \n> Eddie\n> [email protected]\n> \n", "msg_date": "Wed, 25 Nov 1998 01:37:32 -0500 (EST)", "msg_from": "Integration <[email protected]>", "msg_from_op": true, "msg_subject": "6.4.x" }, { "msg_contents": "On Wed, 25 Nov 1998, Integration wrote:\n\n> I apologize. pgsql-questions has disappeared? (out of the loop, I am)\n\n\tDisappeared about 6 months ago or so...\n\n> \n> > \n> > Hello,\n> > \n> > My questions are:\n> > \t(1) does 6.4 offer speed improvements over 6.3.2?\n> > \t(2) does 6.4 offer stability improvements over 6.3.2?\n> > \t(3) does 6.4 support query lengths > 8192, or data blocks > 8192 (other than large objects)?\n> > \t\n> > If anyone knows, I thank you.\n> > \n> > Eddie\n> > [email protected]\n> > \n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 25 Nov 1998 10:12:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.x" }, { "msg_contents": "On Wed, 25 Nov 1998, Integration wrote:\n\n> I apologize. pgsql-questions has disappeared? (out of the loop, I am)\n> \n> > \n> > Hello,\n> > \n> > My questions are:\n> > \t(1) does 6.4 offer speed improvements over 6.3.2?\n\n\tYes...\n\n> > \t(2) does 6.4 offer stability improvements over 6.3.2?\n\n\tYes...\n\n> > (3) does 6.4 support query lengths > 8192, or data blocks > 8192\n> (other than large objects)?\n\n\tNot yet, but Bruce has/had some ideas for v6.5 for doing this\nusing a row-spanning method...theory sounded great :)\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 25 Nov 1998 10:13:41 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.x" }, { "msg_contents": "> > (1) does 6.4 offer speed improvements over 6.3.2?\n\nYes, though probably not as noticable as for the previous release.\n\n> > (2) does 6.4 offer stability improvements over 6.3.2?\n\nYes.\n\n> > (3) does 6.4 support query lengths > 8192, or data blocks > 8192 \n> > (other than large objects)?\n\nSometime in the past Darren K. worked to parameterize this limit. I\nbelieve that this is in the code, but you will have to bump up the limit\nand see if it works for you. The downside to having larger data blocks\nis that the database size will be somewhat larger.\n\n - Tom\n", "msg_date": "Wed, 25 Nov 1998 14:19:53 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.x" }, { "msg_contents": ">>>> (2) does 6.4 offer stability improvements over 6.3.2?\n\n> Yes.\n\nFWIW, 6.4 is noticeably more stable than 6.3.2 in my company's\napplication involving concurrent users of a shared database.\nWe have not seen a backend crash or data corruption since installing\na pre-alpha-6.4 server in mid-September. 
We had several such problems\nin the preceding couple of months with 6.3.2.\n\n\n>>>> (3) does 6.4 support query lengths > 8192, or data blocks > 8192 \n>>>> (other than large objects)?\n\n> Sometime in the past Darren K. worked to parameterize this limit.\n\nThere has been some discussion of allowing tuples to span multiple\ndisk blocks, which would remove the problem entirely, but it hasn't\nhappened yet. Maybe for 6.5?\n\nThe limit on the textual length of a query is an unrelated quantity\nthat by coincidence has the same value. (Well, maybe not total\ncoincidence... probably someone wanted to be sure they could INSERT\nan 8K text string... but the code doesn't know there's a connection.)\nI am planning to modify libpq and the backend to eliminate fixed-size\nquery text buffers, so this limit should go away for 6.5.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Nov 1998 11:09:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.x " }, { "msg_contents": "Tom Lane wrote:\n> \n> >>>> (3) does 6.4 support query lengths > 8192, or data blocks > 8192\n> >>>> (other than large objects)?\n> \n> > Sometime in the past Darren K. worked to parameterize this limit.\n> \n> There has been some discussion of allowing tuples to span multiple\n> disk blocks, which would remove the problem entirely, but it hasn't\n> happened yet. Maybe for 6.5?\n\nRight now I'm rewriting HeapTuple structure and functions - for\nmulti-version concurrency control (MVCC). New HeapTuple:\n\ntypedef struct HeapTupleData\n{ \n uint32 t_len; /* length of *t_data */\n ItemPointerData t_self; /* SelfItemPointer */\n HeapTupleHeader t_data; /* */\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n this is what known as HeapTuple in < 6.5\n} HeapTupleData;\n\nI assume that one, who would like implement blocks spanning, \nwill add something to this new structure. \nI need in ~ one week, please wait.\n\nVadim\n", "msg_date": "Thu, 26 Nov 1998 10:11:17 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.x" }, { "msg_contents": "> Tom Lane wrote:\n> > \n> > >>>> (3) does 6.4 support query lengths > 8192, or data blocks > 8192\n> > >>>> (other than large objects)?\n> > \n> > > Sometime in the past Darren K. worked to parameterize this limit.\n> > \n> > There has been some discussion of allowing tuples to span multiple\n> > disk blocks, which would remove the problem entirely, but it hasn't\n> > happened yet. Maybe for 6.5?\n> \n> Right now I'm rewriting HeapTuple structure and functions - for\n> multi-version concurrency control (MVCC). New HeapTuple:\n> \n> typedef struct HeapTupleData\n> { \n> uint32 t_len; /* length of *t_data */\n> ItemPointerData t_self; /* SelfItemPointer */\n> HeapTupleHeader t_data; /* */\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^\n> this is what known as HeapTuple in < 6.5\n> } HeapTupleData;\n> \n> I assume that one, who would like implement blocks spanning, \n> will add something to this new structure. \n> I need in ~ one week, please wait.\n\nBlock spanning was only an idea. No idea how to code it. Yet.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 1 Dec 1998 18:15:52 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.4.x" } ]
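The two 8192-byte ceilings above are easy to conflate. The tuple-size one can be demonstrated directly; a sketch, assuming a default BLCKSZ of 8192 and the Oracle-compatibility lpad() function that ships with 6.3 and later:

    CREATE TABLE bigrow (t text);
    -- fails in a default build: the resulting tuple cannot fit in one 8K block
    INSERT INTO bigrow VALUES (lpad('x', 9000, 'x'));

The query-text limit, by contrast, bites before parsing even starts, whenever the SQL string itself exceeds 8192 characters.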
[ { "msg_contents": "Hi all,\n\nMy database is full of tables... What's that ? Temporary tables ?\nMay I drop safety these tables ?\n\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | postgres | _rpt16567_r1 | table |\n | postgres | _rpt16574_r1 | table |\n | postgres | _rpt18155_r1 | table |\n | postgres | _rpt18207_r1 | table |\n | postgres | _rpt18256_r1 | table |\n | postgres | _rpt18309_r1 | table |\n | postgres | _rpt21859_r1 | table |\n | postgres | _rpt21861_r1 | table |\n | postgres | _rpt21865_r1 | table |\n | postgres | _rpt21869_r1 | table |\n | postgres | _rpt22065_r1 | table |\n | postgres | _rpt22067_r1 | table |\n | postgres | _rpt22069_r1 | table |\n | postgres | _rpt22345_r1 | table |\n | postgres | _rpt22353_r1 | table |\n | postgres | _rpt22568_r1 | table |\n | postgres | _rpt22574_r1 | table |\n | postgres | _rpt26970_r1 | table |\n | postgres | _rpt27142_r1 | table |\n | postgres | _rpt27144_r1 | table |\n | postgres | _rpt27146_r1 | table |\n | postgres | _rpt27154_r1 | table |\n:\n\n-Jose'-\n\n\n", "msg_date": "Wed, 25 Nov 1998 16:30:36 +0100", "msg_from": "Sferacarta Software <[email protected]>", "msg_from_op": true, "msg_subject": "temporary tables ?" }, { "msg_contents": "\nVersion of Postgresql?\n\nOn Wed, 25 Nov 1998, Sferacarta Software wrote:\n\n> Hi all,\n> \n> My database is full of tables... What's that ? Temporary tables ?\n> May I drop safety these tables ?\n> \n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | postgres | _rpt16567_r1 | table |\n> | postgres | _rpt16574_r1 | table |\n> | postgres | _rpt18155_r1 | table |\n> | postgres | _rpt18207_r1 | table |\n> | postgres | _rpt18256_r1 | table |\n> | postgres | _rpt18309_r1 | table |\n> | postgres | _rpt21859_r1 | table |\n> | postgres | _rpt21861_r1 | table |\n> | postgres | _rpt21865_r1 | table |\n> | postgres | _rpt21869_r1 | table |\n> | postgres | _rpt22065_r1 | table |\n> | postgres | _rpt22067_r1 | table |\n> | postgres | _rpt22069_r1 | table |\n> | postgres | _rpt22345_r1 | table |\n> | postgres | _rpt22353_r1 | table |\n> | postgres | _rpt22568_r1 | table |\n> | postgres | _rpt22574_r1 | table |\n> | postgres | _rpt26970_r1 | table |\n> | postgres | _rpt27142_r1 | table |\n> | postgres | _rpt27144_r1 | table |\n> | postgres | _rpt27146_r1 | table |\n> | postgres | _rpt27154_r1 | table |\n> :\n> \n> -Jose'-\n> \n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 25 Nov 1998 22:17:58 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] temporary tables ?" } ]
[ { "msg_contents": "Hi,\n\nThe November 16, 1998 issue of \"Information Week\" mentions Postgres in\nan article on page 186 called \"Red Hat Linux: Almost Free, And Full of\nFunctions\".\n\nQuoting from the article...\n\n\"Red Hat even includes an open-source SQL database known as PostgreSQL\nDBMS, based on the Postgres and Postgres95 databases originally\ndeveloped at the University of California at Berkeley. There's a freely\ndownloadable ODBC driver for PostgreSQL at\nwww.insightdist.com/psqlodbc/psqlodbc_download.html, which will let\nWindows applications access the server, and a host of other tools as\nwell\".\n\n\nPretty cool, heh?\n\n\nByron\n\nP.S. It even goes on to say that the ODBC driver is the best odbc\ndriver they had ever seen on any system ever, in the whole\nworld.....umm, just kidding :-)\n\n\n\n", "msg_date": "Wed, 25 Nov 1998 10:42:29 -0500", "msg_from": "Byron Nikolaidis <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres mentioned in Information Week" }, { "msg_contents": "Hello,\n\nAt 10.42 25/11/98 -0500, Byron Nikolaidis wrote:\n>P.S. It even goes on to say that the ODBC driver is the best odbc\n>driver they had ever seen on any system ever, in the whole\n>world.....umm, just kidding :-)\n\nwell, don't know if it is the best in the world but sure is one of the best\nI tried ;)\n\nbye!\n\n\tSbragion Denis\n\tInfoTecna\n\tTel, Fax: +39 039 2324054\n\tURL: http://space.tin.it/internet/dsbragio\n", "msg_date": "Wed, 25 Nov 1998 18:40:44 +0100", "msg_from": "Sbragion Denis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Postgres mentioned in Information Week" } ]
[ { "msg_contents": "Hi all\n\nCREATE RULE \"_RETmessages\" AS ON SELECT TO \"messages\" DO INSTEAD SELECT\n\"title\", \"mess\", \"iurl\", \"lurl\", \"posted\", \"fname\", \"lname\", \"email\",\n\"uid\", \"ppid\", \"pid\", \"bid\" FROM \"post\", \"users\" WHERE \"uid\" = \"uid\";\nERROR: Column uid is ambiguous\n\nIt left off the table names from 'WHERE \"uid\" = \"uid\";'\n\nI do not profess to understand all this rule stuff, but I don't understand\nwhy when I create a view using 'create view ....', then why does pg_dump\nneed to create the view as a table, then later create a rule to make the\ntable into a view? Why not just dump a 'create view ....' command\ninstead? Not that it matters, as long as it works, just seems confusing\nis all.\n\nThanks, have a great day.\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Wed, 25 Nov 1998 12:26:29 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump(all) and views, broke" }, { "msg_contents": ">\n> Hi all\n>\n> CREATE RULE \"_RETmessages\" AS ON SELECT TO \"messages\" DO INSTEAD SELECT\n> \"title\", \"mess\", \"iurl\", \"lurl\", \"posted\", \"fname\", \"lname\", \"email\",\n> \"uid\", \"ppid\", \"pid\", \"bid\" FROM \"post\", \"users\" WHERE \"uid\" = \"uid\";\n> ERROR: Column uid is ambiguous\n>\n> It left off the table names from 'WHERE \"uid\" = \"uid\";'\n>\n> I do not profess to understand all this rule stuff, but I don't understand\n> why when I create a view using 'create view ....', then why does pg_dump\n> need to create the view as a table, then later create a rule to make the\n> table into a view? Why not just dump a 'create view ....' command\n> instead? Not that it matters, as long as it works, just seems confusing\n> is all.\n\n Creating a table first and turn it later into view by CREATE\n RULE is just for simplification of pg_dump. It does not need\n to make a difference between those rules that are handmade\n production rules and those that came in due to CREATE VIEW.\n\n But the above is a bug and I'll fix it soon.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 25 Nov 1998 18:31:22 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump(all) and views, broke" }, { "msg_contents": "Hi Jan\n\nOK, thanks for your reply.\n\nA followup, I noticed that if the table names have been aliased, then it\nputs in the alias name, it's only when not aliased that it leaves out the\nnames\n\nHere is a much more complicated one that works fine, but uses aliased\ntable names:\n\nCREATE RULE \"_RETkeywcatlist\" AS ON SELECT TO \"keywcatlist\" DO INSTEAD\nSELECT \"l\".\"title\", \"l\".\"discription\", \"l\".\"url\", \"l\".\"lanme\",\n\"l\".\"fname\", \"l\".\"email\",\n\"l\".\"ent_date\", \"l\".\"mod_date\", \"l\".\"approved\", \"l\".\"item_id\",\n\"k\".\"keyword\", \"c\".\"category\" FROM \"listings\" \"l\", \"keywords\" \"k\",\n\"keyw2list\" \"k2l\", \"categories\" \"c\", \"cat2list\" \"c2l\" WHERE\n(((\"l\".\"item_id\" = \"k2l\".\"item_id\") AND (\"k\".\"keyw_id\" = \"k2l\".\"keyw_id\"))\nAND (\"l\".\"item_id\" = \"c2l\".\"item_id\")) AND (\"c\".\"category\" =\n\"c2l\".\"category\");\n\nI know the prefixes could have been left off of unique names, but put them\non for clarity in the future. ... or for confusion, however it works out:)\n\nThanks\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Wed, 25 Nov 1998 12:56:50 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump(all) and views, broke" }, { "msg_contents": ">\n> Hi Jan\n>\n> Also, it leaves off the table names in the select clause, as well as the\n> where clause. I just noticed that. (when it still did not work after hand\n> editing the where clause :)\n>\n> I hope I'm being helpful, and not being a pest.\n\n Bug reports are a kind of helpful pest :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 25 Nov 1998 19:06:18 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump(all) and views, broke" }, { "msg_contents": "Hi Jan\n\nAlso, it leaves off the table names in the select clause, as well as the\nwhere clause. I just noticed that. (when it still did not work after hand\nediting the where clause :)\n\nI hope I'm being helpful, and not being a pest.\n\nHave a great day, and thanks\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Wed, 25 Nov 1998 13:09:55 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump(all) and views, broke" } ]
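Until Jan's fix lands, a dumped view rule like the one above can be repaired by hand by qualifying every column. A sketch of the pattern only: the full column list is abbreviated here, and which table each unqualified column really belongs to is an assumption:

    CREATE RULE "_RETmessages" AS ON SELECT TO messages DO INSTEAD
        SELECT p.title, p.mess, p.pid, u.fname, u.uid
        FROM post p, users u
        WHERE p.uid = u.uid;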
[ { "msg_contents": "First, of course, thanks for all the work on postgresql.\n\nI've been using 6.3 on two SGI O2's (irix 6.3) for a about six\nmonths. I tried to make 6.4, and came across bunches of errors,\nnot in the build, but in making a new database on it, and running\nthe regression tests.\n\nLike 6.3, I fixed up backend/Makefile, and compiled with CC -n32 and \nLD -n32. When I try make a simple view (which worked on 6.3) I get\n\"ERROR: cannot create v_story_info\". The select query, entered\nas just query, does work.\n\ncreate view\nv_story_info as select\n stories.story_id, is_current, pub_date,\n size, paper_page, grouping_id,\n slug, headline, subhead,\n dateline, series.series_id, series_name,\n ed_note, correction, summary\nfrom\n stories, series, story_attrib\nwhere\n story_attrib.story_id = stories.story_id and\n series.series_id = stories.grouping_id\n;\n\nI think there are several other errors in the regression tests,\nbut I'm still sorting through which errors are trivial. The SELECT\nDISTINCT seems to fetch no rows. If there's no other volunteer,\nI'll make an output file for the distribution, if I can get postgres\nworking. :)\n\nThanks for any help, or any places to start looking for a cure.\nBen\n\nHere is some of the regression output:\n\n*** expected/create_type.out Tue Apr 22 12:33:34 1997\n--- results/create_type.out Wed Nov 25 01:25:56 1998\n***************\n*** 4,9 ****\n--- 4,10 ----\n output = widget_out,\n alignment = double\n );\n+ ERROR: TypeCreate: function 'widget_in(opaque)' does not exist\n QUERY: CREATE TYPE city_budget (\n internallength = 16,\n input = int44in,\n\n for each row\n execute procedure\n check_primary_key ('fkey1', 'fkey2', 'pkeys', 'pkey1', 'pkey2');\n+ ERROR: CreateTrigger: function check_primary_key () does not exist\n\n+ ERROR: CreateTrigger: function funny_dup17 () does not exist\n\n*** expected/create_view.out Sun Apr 6 00:05:32 1997\n--- results/create_view.out Wed Nov 25 01:26:24 1998\n*************** \n*** 7,12 **** \n--- 7,13 ---- \n interpt_pp(ih.thepath, r.thepath) AS exit\n FROM ihighway ih, ramp r \n WHERE ih.thepath ## r.thepath; \n+ ERROR: No such function 'interpt_pp' with the specified attributes\n QUERY: CREATE VIEW toyemp AS \n SELECT name, age, location, 12*salary AS annualsal\n FROM emp; \n\nIn results/select.out, one of many:\n QUERY: SELECT onek.unique1, onek.stringu1 \n WHERE onek.unique1 < 20 \n ORDER BY unique1 using >; \n unique1|stringu1 \n -------+-------- \n! (0 rows)\n\n\n", "msg_date": "Wed, 25 Nov 1998 13:21:56 -0600 (CST)", "msg_from": "\"Ben S.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Problems getting a usable pgsql 6.4 on irix 6.3" } ]
[ { "msg_contents": "This was reported as a bug with the Debian package of 6.3.2; the same\nbehaviour is still present in 6.4. \n\nbray=> create table foo ( t text[]);\nCREATE\nbray=> insert into foo values ( '{\"a\"}');\nINSERT 201354 1\nbray=> insert into foo values ( '{\"a\",\"b\"}');\nINSERT 201355 1\nbray=> insert into foo values ( '{\"a\",\"b\",\"c\"}');\nINSERT 201356 1\nbray=> select * from foo;\nt \n-------------\n{\"a\"} \n{\"a\",\"b\"} \n{\"a\",\"b\",\"c\"}\n(3 rows)\n\nbray=> select t[1] from foo;\nERROR: type name lookup of t failed\nbray=> select * from foo;\nt \n-------------\n{\"a\"} \n{\"a\",\"b\"} \n{\"a\",\"b\",\"c\"}\n(3 rows)\n\nbray=> select foo.t[1] from foo;\nt\n-\na\na\na\n(3 rows)\n\nbray=> select count(foo.t[1]) from foo;\npqReadData() -- backend closed the channel unexpectedly.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"Let us therefore come boldly unto the throne of grace,\n that we may obtain mercy, and find grace to help in \n time of need.\" Hebrews 4:16 \n\n\n", "msg_date": "Wed, 25 Nov 1998 23:48:51 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Failures with arrays" }, { "msg_contents": "New TODO list item:\n\n * select t[1] from foo fails, select count(foo.t[1]) from foo crashes\n\n\n> This was reported as a bug with the Debian package of 6.3.2; the same\n> behaviour is still present in 6.4. \n> \n> bray=> create table foo ( t text[]);\n> CREATE\n> bray=> insert into foo values ( '{\"a\"}');\n> INSERT 201354 1\n> bray=> insert into foo values ( '{\"a\",\"b\"}');\n> INSERT 201355 1\n> bray=> insert into foo values ( '{\"a\",\"b\",\"c\"}');\n> INSERT 201356 1\n> bray=> select * from foo;\n> t \n> -------------\n> {\"a\"} \n> {\"a\",\"b\"} \n> {\"a\",\"b\",\"c\"}\n> (3 rows)\n> \n> bray=> select t[1] from foo;\n> ERROR: type name lookup of t failed\n> bray=> select * from foo;\n> t \n> -------------\n> {\"a\"} \n> {\"a\",\"b\"} \n> {\"a\",\"b\",\"c\"}\n> (3 rows)\n> \n> bray=> select foo.t[1] from foo;\n> t\n> -\n> a\n> a\n> a\n> (3 rows)\n> \n> bray=> select count(foo.t[1]) from foo;\n> pqReadData() -- backend closed the channel unexpectedly.\n> \n> -- \n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP key from public servers; key ID 32B8FAA1\n> ========================================\n> \"Let us therefore come boldly unto the throne of grace,\n> that we may obtain mercy, and find grace to help in \n> time of need.\" Hebrews 4:16 \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 18 Dec 1998 13:06:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Failures with arrays" }, { "msg_contents": "I can confirm that this is fixed in 7.0, I believe by Tom Lane.\n\n\n> This was reported as a bug with the Debian package of 6.3.2; the same\n> behaviour is still present in 6.4. 
\n> \n> bray=> create table foo ( t text[]);\n> CREATE\n> bray=> insert into foo values ( '{\"a\"}');\n> INSERT 201354 1\n> bray=> insert into foo values ( '{\"a\",\"b\"}');\n> INSERT 201355 1\n> bray=> insert into foo values ( '{\"a\",\"b\",\"c\"}');\n> INSERT 201356 1\n> bray=> select * from foo;\n> t \n> -------------\n> {\"a\"} \n> {\"a\",\"b\"} \n> {\"a\",\"b\",\"c\"}\n> (3 rows)\n> \n> bray=> select t[1] from foo;\n> ERROR: type name lookup of t failed\n> bray=> select * from foo;\n> t \n> -------------\n> {\"a\"} \n> {\"a\",\"b\"} \n> {\"a\",\"b\",\"c\"}\n> (3 rows)\n> \n> bray=> select foo.t[1] from foo;\n> t\n> -\n> a\n> a\n> a\n> (3 rows)\n> \n> bray=> select count(foo.t[1]) from foo;\n> pqReadData() -- backend closed the channel unexpectedly.\n> \n> -- \n> Oliver Elphick [email protected]\n> Isle of Wight http://www.lfix.co.uk/oliver\n> PGP key from public servers; key ID 32B8FAA1\n> ========================================\n> \"Let us therefore come boldly unto the throne of grace,\n> that we may obtain mercy, and find grace to help in \n> time of need.\" Hebrews 4:16 \n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 31 May 2000 17:17:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Failures with arrays" } ]
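A sketch of a possible stopgap while the crash stands. It is untested against the reported backend, so treat it as a guess: since element references work when table-qualified, the count can be pushed into a WHERE clause instead of aggregating over the element directly:

    SELECT foo.t[1] FROM foo;        -- qualified references parse fine
    SELECT count(*) FROM foo
    WHERE foo.t[1] IS NOT NULL;      -- avoids count(foo.t[1])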
[ { "msg_contents": "Hello!\n\nReleasing 6.4.1 is a good news.\nBut would you confirm the following \"memory leak\" problem?\nIt is reproducable on 6.4 (FreeBSD 2.2.7-RELEASE).\n\nRegards\n\n[On Nov 13, [email protected] (SHIOZAKI Takehiko) writes:]\n>Hello!\n>\n>A background postgres porocess gets larger with Abort Transactions.\n>Memory seems to leak.\n>This occurs also on 6.3.2.\n>\n>You can see it with user aborts:\n>\n>========================================================================\n>#!/bin/sh\n>\n>yes 'begin;\n>abort;' | psql regression\n>========================================================================\n>\n>And with internal aborts:\n>\n>========================================================================\n>#!/bin/sh\n>\n>yes \"insert into Room (roomno) values ('000');\" | psql regression\n>========================================================================\n>\n>Regards\n>\n>-- \n>ASCII CORPORATION\n>Technical Center\n>SHIOZAKI Takehiko\n><[email protected]>\n\n\n-- \nASCII CORPORATION\nTechnical Center\nSHIOZAKI Takehiko\n<[email protected]>\n", "msg_date": "Thu, 26 Nov 1998 21:40:19 +0900 (JST)", "msg_from": "SHIOZAKI Takehiko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: memory leak with Abort Transaction" }, { "msg_contents": "SHIOZAKI Takehiko wrote:\n\n>\n> Hello!\n>\n> Releasing 6.4.1 is a good news.\n> But would you confirm the following \"memory leak\" problem?\n> It is reproducable on 6.4 (FreeBSD 2.2.7-RELEASE).\n\n It's an far too old problem. And as far as I remember, there\n are different locations in the code causing it.\n\n One place I remember well. It's in the tcop mainloop in\n PostgresMain(). The querytree list is malloc()'ed (there and\n in the parser) and free()'d after the query is processed -\n except the processing of the queries bails out with elog().\n In that case it never runs over the free() because the\n longjmp() kick's it back to the beginning of the loop.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 26 Nov 1998 14:01:42 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: memory leak with Abort Transaction" }, { "msg_contents": "Added to TODO list.\n\n> Hello!\n> \n> Releasing 6.4.1 is good news.\n> But would you confirm the following \"memory leak\" problem?\n> It is reproducible on 6.4 (FreeBSD 2.2.7-RELEASE).\n> \n> Regards\n> \n> [On Nov 13, [email protected] (SHIOZAKI Takehiko) writes:]\n> >Hello!\n> >\n> >A background postgres process gets larger with Abort Transactions.\n> >Memory seems to leak.\n> >This occurs also on 6.3.2.\n> >\n> >You can see it with user aborts:\n> >\n> >========================================================================\n> >#!/bin/sh\n> >\n> >yes 'begin;\n> >abort;' | psql regression\n> >========================================================================\n> >\n> >And with internal aborts:\n> >\n> >========================================================================\n> >#!/bin/sh\n> >\n> >yes \"insert into Room (roomno) values ('000');\" | psql regression\n> >========================================================================\n> >\n> >Regards\n> >\n> >-- \n> >ASCII CORPORATION\n> >Technical Center\n> >SHIOZAKI Takehiko\n> ><[email protected]>\n> \n> \n> -- \n> ASCII CORPORATION\n> Technical Center\n> SHIOZAKI Takehiko\n> <[email protected]>\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 22:45:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: memory leak with Abort Transaction" } ]
[ { "msg_contents": "After having a talk with an analyst of META GROUP I believe that adding\nCORBA support is THE most important addition we can make. It really seems a\nlot is developing there. The one big problem CORBA still has of course is\nperformance.\n\nAlso they told me that there is no ORB that implements the full set of\nfunctions, not even a commercial one. I'm a little bit suprised by that.\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg, [email protected]\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 26 Nov 1998 21:25:39 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "corba" }, { "msg_contents": "On Thu, 26 Nov 1998, Michael Meskes wrote:\n\n> After having a talk with an analyst of META GROUP I believe that adding\n> CORBA support is THE most important addition we can make. It really seems a\n> lot is developing there. The one big problem CORBA still has of course is\n> performance.\n\nI've been reading up on CORBA over the last week, and I think it should be\na major goal to support it as well.\n\n> Also they told me that there is no ORB that implements the full set of\n> functions, not even a commercial one. I'm a little bit suprised by that.\n\nI was at the Java 98 show at Olympia,London on Wednesday, and they had a\nfew CORBA based setups there. The only database system I recognised there\nwas Oracle.\n\nAlso, as I said on an earlier post, I've just discovered that Java 1.2\nincludes an orb as well, so we'll see more people having access to corba a\nlot.\n\n-- \nPeter Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as being the\nofficial words of Maidstone Borough Council\n\n\n", "msg_date": "Fri, 27 Nov 1998 12:36:11 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] corba" } ]
[ { "msg_contents": "Looking for a 6.4 RPM for RH Linux - is one available yet? I don't know\nhow it's done (does Redhat do it?) but I'd really appreciate any effort\non this front.\n\nAlso, someone on the SQL list said they thought a developer is working\non joining indexes. I'm looking for improvements for joins between huge\ntables (millions of records in each of two tables).\n\nthanks!\nMichael\n", "msg_date": "Thu, 26 Nov 1998 20:09:58 -0800", "msg_from": "Michael Olivier <[email protected]>", "msg_from_op": true, "msg_subject": "Linux RPM for 6.4? also join on indexes?" } ]
[ { "msg_contents": "Hi...\nPostgreSQL v6.4 have fatal error!\n\nMy machine is Digital Alpha 1000.\ni attempt to install PostgreSQL version 6.4..\nBut i face in of compile error!! \n\nhu.....\n\nSome body know..why..??\n\n", "msg_date": "Fri, 27 Nov 1998 13:50:04 +0900", "msg_from": "\"Ki won, Song\" <[email protected]>", "msg_from_op": true, "msg_subject": "Compile error ... in Digital Alpha 1000 machine!!" } ]
[ { "msg_contents": "\nHas anyone been getting my posts about Corba, Java & JDBC over the last\ncouple of days?\n\nI'm subscribed both at home (retep.org.uk) and at work (maidstone.gov.uk)\nbut I'm not seeing any of my posts to the list (which I normally do).\n\nIt's just that the message included some problem areas with the JDBC\ndriver, and I wanted to see what everyone thought about possible ways to\nfix it.\n\n-- \nPeter Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as being the\nofficial words of Maidstone Borough Council\n\n\n\n", "msg_date": "Fri, 27 Nov 1998 12:38:50 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "The Interfaces list" }, { "msg_contents": "Hello!\n\nOn Fri, 27 Nov 1998, Peter T Mount wrote:\n> I'm subscribed both at home (retep.org.uk) and at work (maidstone.gov.uk)\n> but I'm not seeing any of my posts to the list (which I normally do).\n\n I don't remember your posts, but I am pretty sure \"me-too\" was disabled\nin all postgres lists - I stopped receiving my messages through the list\nafter September or August.\n If you want to make sure - search for your messages in mail archive on\nhttp://postgresql.org. I found most of my messages there, so the lists had\nhad them. (Although my recent post to pgsql-patches was not applyed nor\nrejected).\n\nOleg.\n---- \n Oleg Broytmann National Research Surgery Centre http://sun.med.ru/~phd/\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Fri, 27 Nov 1998 16:02:15 +0300 (MSK)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] The Interfaces list" } ]
[ { "msg_contents": "On Fri, 27 Nov 1998, Andrew McNaughton wrote:\n\n> It's possibly stronger on features, but it's slower than mysql. It is\n> speed he's emphasizing.\n\n\tI've never actually installed mysql, so can't really compare the\ntwo, but I've been using PostgreSQL for everything I need an RDBMS for\nsince I first took on the project 3 years ago now (wow, time flies)...each\nrelease has gotten progressively faster, but we've pretty much hit a limit\nas far as optimizations are concerned, there probably isn't a *noticeable*\ndifference between v6.3.2 and v6.4...\n\n\tWe are hoping to have the PREPARE statement put into v6.5, which\nshould give a performance improvement in \"repeatative queries\", as the\nplanning for the query can be done beforehand, taking out a step...\n\n> there was some discussion earlier this year on this list about adding\n> indexes suitable for fulltext searching to PostgreSQL. Did anything\n> happen in the end? It's the one feature I'd really like to have. I\n> suspect it would be an important one to James also.\n\n\tWhat do you mean by \"fulltext searching\"?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 27 Nov 1998 08:54:57 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mysql 321 - Mysql 322 - msql" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n\n> \tWhat do you mean by \"fulltext searching\"?\n\nHe's talking about inverted text indices, where text is indexed such\nthat a word is the key, and the index returns pointers to all the\nplaces where that word occurs. Knowledge of word structure is usually\nbuilt in, so that \"hacks\", \"hacker\", \"hackers\", \"hacking\" and so on\nare known to be derivatives of \"hack\", and can match it if requested.\nNoise words such as \"a\", \"the\" and so forth are usually not indexed.\n\nInverted indexed text storage tends to take up much space, but there\nare ways to reduce this, and the best implementations do it remarkably\nwell. A simple example: it is not really necessary to actually store\nthe original text; it can instead be a sequence of links to the store\nof all individual words in the text database.\n\nSee http://glimpse.cs.arizona.edu/ for a powerful inverted indexing\nengine and various related software.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "27 Nov 1998 14:25:00 +0100", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Mysql 321 - Mysql 322 - msql" }, { "msg_contents": "On 27 Nov 1998, Tom Ivar Helbekkmo wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> \n> > \tWhat do you mean by \"fulltext searching\"?\n> \n> He's talking about inverted text indices, where text is indexed such\n> that a word is the key, and the index returns pointers to all the\n> places where that word occurs. Knowledge of word structure is usually\n> built in, so that \"hacks\", \"hacker\", \"hackers\", \"hacking\" and so on\n> are known to be derivatives of \"hack\", and can match it if requested.\n> Noise words such as \"a\", \"the\" and so forth are usually not indexed.\n> \n> Inverted indexed text storage tends to take up much space, but there\n> are ways to reduce this, and the best implementations do it remarkably\n> well. 
A simple example: it is not really necessary to actually store\n> the original text; it can instead be a sequence of links to the store\n> of all individual words in the text database.\n> \n> See http://glimpse.cs.arizona.edu/ for a powerful inverted indexing\n> engine and various related software.\n\n\tJust curious, but other than specialized applications like\nGlimpse, does anyone actually support/do this?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 27 Nov 1998 09:37:51 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Mysql 321 - Mysql 322 - msql" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On 27 Nov 1998, Tom Ivar Helbekkmo wrote:\n>> See http://glimpse.cs.arizona.edu/ for a powerful inverted indexing\n>> engine and various related software.\n\n> \tJust curious, but other than specialized applications like\n> Glimpse, does anyone actually support/do this?\n\nI dearly love Glimpse. (Sample things I use it for: rooting through\nnearly 10 years worth of archived email; finding all references to a\nparticular name in the Postgres sources, almost instantly; ditto for the\neven larger Ptolemy sources; looking for files that I can't remember\nwhere I put ... it's great. 
And aren't the Postgres mailing list\n> archive indexes Glimpse-driven?)\n\n\tNope, I use ht/Dig for it...\n\n> A seamless integration would make Glimpse indexes be a new type of\n> index associated with a new match operator, something like\n> \tcreate index index1 on table using glimpse (text_field);\n> \tselect * from table where glimpse(text_field, 'pattern');\n> I have no idea how hard that would be...\n\n\tAnyone? This one I'd love to see...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 27 Nov 1998 13:13:43 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Mysql 321 - Mysql 322 - msql " }, { "msg_contents": "\nOn Fri, 27 Nov 1998, The Hermit Hacker wrote:\n\n> \tJust curious, but other then specialized applications like\n> Glimpse, does anyone actually support/do this?\n\n Well, Oracle has their ConText option that does stuff like this.\n\n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\nTom\n\n", "msg_date": "Fri, 27 Nov 1998 09:47:55 -0800 (PST)", "msg_from": "Tom <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Mysql 321 - Mysql 322 - msql" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> I don't currently have any databases that could benefit from full-text\n> indexes. But I can think of applications where it'd be important,\n> particularly after we get rid of the limit on tuple sizes so that it\n> becomes reasonable to put fair-size chunks of text into database\n> entries. For example: would it be useful to put my email archive into\n> a Postgres database, one message per tuple? Maybe ... but if I can't\n> glimpse it afterwards, forgetaboutit.\n\nAnother very important application is the keeping of structured\ndocuments in a database system. Advanced SGML environments do this,\nand Philip Greenspun of MIT, the author of the excellent book\n\"Database Backed Web Sites\" (see http://photo.net/ for information)\nrecommends doing it for HTML and other data for web publishing. The\nweb server AOLserver is just one example of an application that can do\nthis -- and if I'm not mistaken, AOLserver can even use PostgreSQL.\n\nAnyway, once the data is in the database, and much of it is text, it\nbecomes very interesting to be able to efficiently index and search.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "27 Nov 1998 20:20:16 +0100", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Mysql 321 - Mysql 322 - msql" }, { "msg_contents": "> I don't currently have any databases that could benefit from full-text\n> indexes. But I can think of applications where it'd be important,\n> particularly after we get rid of the limit on tuple sizes so that it\n> becomes reasonable to put fair-size chunks of text into database\n> entries. For example: would it be useful to put my email archive into\n> a Postgres database, one message per tuple? Maybe ... but if I can't\n> glimpse it afterwards, forgetaboutit.\n> \n> You could probably glue something like this together from existing\n> spare parts, say by running a nightly cron job that dumps out the\n> text fields of your database for indexing by Glimpse. 
But it wouldn't\n> be integrated into SQL --- you'd have to query the index separately\n> outside of SQL, then use the results to drive a query to fetch the\n> selected records.\n\nWe do have contrib/fulltextindex.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 1 Dec 1998 18:27:41 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Mysql 321 - Mysql 322 - msql" } ]
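A minimal sketch of the inverted-index idea this thread circles around, in plain SQL rather than the trigger machinery of contrib/fulltextindex (all table and column names here are invented for illustration):

    CREATE TABLE doc (doc_id int4, body text);
    CREATE TABLE doc_word (word text, doc_id int4);
    CREATE INDEX doc_word_i ON doc_word USING btree (word);

    -- the application (or a trigger) splits body into words at insert time:
    INSERT INTO doc_word VALUES ('glimpse', 42);

    -- "which documents contain this word?" then becomes an indexed join:
    SELECT d.doc_id, d.body
      FROM doc d, doc_word w
     WHERE w.word = 'glimpse'
       AND w.doc_id = d.doc_id;

A glimpse-style index as Tom sketches it would hide exactly this bookkeeping behind CREATE INDEX ... USING glimpse.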
[ { "msg_contents": "Hi all,\n\nI have a strange behavior while copying from a text file, I don't know\nif this will be considered a bug or a feature.\n\nI have this text file:\n------------------------\nXXX|QWERTYUIOPASDFGHJKLA\nA01|BAIO\nA02|BAIO CHIARO\nA03|BAIO OSCURO\n------------------------\n\nand this table:\n\nTable = mantelli\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| m_codice | char() | 3 |\n| m_descr | char() | 20 |\n+----------------------------------+----------------------------------+-------+\nIndex: i5_mantelli\n\nprova=> copy mantelli from '/tmp/mantelli.load' using delimiters '|';\nCOPY\nprova=> select * from mantelli;\nm_codice|m_descr\n--------+--------------------\nXXX |QWERTYUIOPASDFGHJKLA\n\n |BAIO CHIARO\n |BAIO OSCURO\n(4 rows)\n\nprova=> select m_descr,m_codice from mantelli ;\nm_descr |m_codice\n--------------------+--------\nQWERTYUIOPASDFGHJKLA|XXX \n |A01 \n |A02 \n |A03 \n(4 rows)\n\nprova=> select m_codice from mantelli ;\nm_codice\n--------\nXXX \nA01 \nA02 \nA03 \n(4 rows)\n\nprova=> select m_descr from mantelli ;\nm_descr \n--------------------\nQWERTYUIOPASDFGHJKLA\n \n ARO\n URO\n(4 rows)\n\nSeems that COPY expects that last field is 20 char long and if it is less than 20\nit becomes crazy.\n\n-Jose'-\n\n\n", "msg_date": "Fri, 27 Nov 1998 14:46:45 +0100", "msg_from": "Sferacarta Software <[email protected]>", "msg_from_op": true, "msg_subject": "copy" }, { "msg_contents": "Sferacarta Software <[email protected]> writes:\n> I have a strange behavior while copying from a text file, I don't know\n> if this will be considered a bug or a feature.\n> Seems that COPY expects that last field is 20 char long and if it is\n> less than 20 it becomes crazy.\n\nI cannot replicate this bug with 6.4. What version are you using?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Nov 1998 11:26:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] copy " }, { "msg_contents": ">\n> Sferacarta Software <[email protected]> writes:\n> > I have a strange behavior while copying from a text file, I don't know\n> > if this will be considered a bug or a feature.\n> > Seems that COPY expects that last field is 20 char long and if it is\n> > less than 20 it becomes crazy.\n>\n> I cannot replicate this bug with 6.4. What version are you using?\n\n I can, but it isn't a bug! At least not one in the backend.\n\n The file must contain CRLF at the end of a line instead of a\n single LF.\n\n The data is absolutely intact in the database including the\n CR for those lines, where the second fields value is shorter\n than 20 chars. It's just psql(1) who's output get's mangled\n up.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 27 Nov 1998 19:56:03 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] copy" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> I can, but it isn't a bug! At least not one in the backend.\n> The file must contain CRLF at the end of a line instead of a\n> single LF.\n\nGood eye! 
I had noticed the odd formatting of Jose's output, but\ndidn't draw the right conclusion --- I thought it just got messed\nup in the preparation of his mail message.\n\nI guess the next question is whether we like this behavior or not.\n\nI could see an argument for stripping CR if it's not quoted somehow...\nbut this ought to be considered in the context of the whole is-COPY-\n8-bit-clean issue that keeps coming up.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Nov 1998 14:16:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] copy " } ]
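A quick way to reproduce Jan's diagnosis, assuming the mantelli table from the start of the thread and a DOS-style (CRLF) input file; note that the \r in the pattern relies on the backend interpreting backslash escapes in string literals:

    COPY mantelli FROM '/tmp/mantelli.load' USING DELIMITERS '|';
    -- rows whose last field is shorter than the padded width keep the CR:
    SELECT m_codice FROM mantelli WHERE m_descr LIKE '%\r%';

Until the quoting question is settled, stripping the carriage returns before loading (for example with tr -d '\r' < mantelli.load) avoids the mangled output.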
[ { "msg_contents": "Hi,\n\nI have run into a problem dropping and re-creating tables with\ntype SERIAL:\n\nCREATE TABLE t ( s SERIAL);\nDROP TABLE t;\nCREATE TABLE t ( s SERIAL);\n\ngives\nERROR: t_s_seq relation already exists\n\nThis looks like the implicitly created sequence t_s_seq is not dropped\ntogether with the table.\n\nI am running a current (?) cvs snapshot from [email protected].\n\nJan\n", "msg_date": "Fri, 27 Nov 1998 16:20:56 +0100", "msg_from": "Jan Iven <[email protected]>", "msg_from_op": true, "msg_subject": "DROPping tables with SERIALs" }, { "msg_contents": ">\n> Hi,\n>\n> I have run into a problem dropping and re-creating tables with\n> type SERIAL:\n>\n> CREATE TABLE t ( s SERIAL);\n> DROP TABLE t;\n> CREATE TABLE t ( s SERIAL);\n>\n> gives\n> ERROR: t_s_seq relation already exists\n>\n> This looks like the implicitly created sequence t_s_seq is not dropped\n> together with the table.\n>\n> I am running a current (?) cvs snapshot from [email protected].\n>\n> Jan\n>\n>\n\n Yepp. The serial type is implemented as an integer with a\n default of nextval('tab_attr_seq') and the sequence itself\n created on the fly.\n\n I think we should have an additional oid field in\n pg_attribute that holds the oid of the created sequence and\n that is examined at drop table time to drop the serials too.\n\n TODO for v6.5 ?\n\n\nJan :-)\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 27 Nov 1998 17:22:52 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Yepp. The serial type is implemented as an integer with a\n> default of nextval('tab_attr_seq') and the sequence itself\n> created on the fly.\n> \n> I think we should have an additional oid field in\n> pg_attribute that holds the oid of the created sequence and\n> that is examined at drop table time to drop the serials too.\n> \n> TODO for v6.5 ?\n\nThere is another way: let's define special SERIAL type\n(actually - int4) and in DeletePgAttributeTuples()\ncheck if atttype == SERIALOID and drop sequence.\n\nAlso note that currently SERIAL doesn't work as\nppl expect - \n1. SERIAL should generate value if input value\n is NULL or 0;\n2. value generated should be max(this_field) + 1\n\nWe should add builtin trigger function for SERIAL...\nActually, having this function we can avoid \nSERIALOID: we could check in RelationRemoveTriggers\nif tgfoid == ThisFuncOID and drop sequence.\nOn the other hand SERIALOID looks cleaner.\n\nVadim\n", "msg_date": "Sat, 28 Nov 1998 01:39:09 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Also note that currently SERIAL doesn't work as\n> ppl expect - \n> 1. SERIAL should generate value if input value\n> is NULL or 0;\n\nNo, I think it should *only* substitute for NULL. Why assume\nzero is special?\n\n> 2. value generated should be max(this_field) + 1\n\nThat's not quite right. If current max(serial_field) is 100, and \nI INSERT a tuple that gets serial 101, and then I DELETE that tuple,\nshould the next insertion be given serial 101 again? No. 
You do need\nthe separate sequence object as a record of the highest serial number\never assigned, regardless of whether that value is still present in the\ntable.\n\nWhat you really want is that if a non-null value is inserted into the\nserial field, and it is larger than the current readout of the\nassociated sequence generator, then the sequence should be advanced to\nequal that inserted value.\n\nAnother question is whether a SERIAL field should automatically be\nUNIQUE (ie, create a unique index on it to prevent mistakes in manual\ninsertion of values for the field). I'm not sure that that should be\nforced to happen, but I think that most users would want the uniqueness\nconstraint. Maybe this just means a documentation change, with \"SERIAL\nUNIQUE\" being shown as the typical usage.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Nov 1998 15:30:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs " }, { "msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> > Also note that currently SERIAL doesn't work as\n> > ppl expect -\n> > 1. SERIAL should generate value if input value\n> > is NULL or 0;\n> \n> No, I think it should *only* substitute for NULL. Why assume\n> zero is special?\n\nAs I remember this is how SERIAL works in Informix. \nCompatibility is good thing... but I have no objections.\nNevertheless, currently SERIAL doesn't work if input\nvalue is NULL, only is not specified in INSERT:\nDEFAULT is not appropriate for SERIAL in any case.\n\n> \n> > 2. value generated should be max(this_field) + 1\n> \n> That's not quite right. If current max(serial_field) is 100, and\n> I INSERT a tuple that gets serial 101, and then I DELETE that tuple,\n> should the next insertion be given serial 101 again? No. You do need\n> the separate sequence object as a record of the highest serial number\n> ever assigned, regardless of whether that value is still present in the\n> table.\n> \n> What you really want is that if a non-null value is inserted into the\n> serial field, and it is larger than the current readout of the\n> associated sequence generator, then the sequence should be advanced to\n> equal that inserted value.\n\nYes - this is what I meant...\n\n> \n> Another question is whether a SERIAL field should automatically be\n> UNIQUE (ie, create a unique index on it to prevent mistakes in manual\n> insertion of values for the field). I'm not sure that that should be\n> forced to happen, but I think that most users would want the uniqueness\n> constraint. Maybe this just means a documentation change, with \"SERIAL\n> UNIQUE\" being shown as the typical usage.\n\nOnce again - I would like to see SERIAL compatible with\nSERIAL/IDENTY in other RDBMSes.\n\nVadim\n", "msg_date": "Sun, 29 Nov 1998 15:24:37 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Tom Lane wrote:\n>> No, I think it should *only* substitute for NULL. Why assume\n>> zero is special?\n\n> As I remember this is how SERIAL works in Informix. \n\nAh. 
OK, if that's what they do then I agree we ought to act the same.\n\n>> Another question is whether a SERIAL field should automatically be\n>> UNIQUE (ie, create a unique index on it to prevent mistakes in manual\n>> insertion of values for the field).\n\n> Once again - I would like to see SERIAL compatible with\n> SERIAL/IDENTY in other RDBMSes.\n\nYes, and? What do the other ones do?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Nov 1998 12:07:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs " }, { "msg_contents": "Tom Lane wrote:\n> \n> >> Another question is whether a SERIAL field should automatically be\n> >> UNIQUE (ie, create a unique index on it to prevent mistakes in manual\n> >> insertion of values for the field).\n> \n> > Once again - I would like to see SERIAL compatible with\n> > SERIAL/IDENTY in other RDBMSes.\n> \n> Yes, and? What do the other ones do?\n\nOk, Sybase:\n\nhttp://sybooks.sybase.com:80/dynaweb/group4/srg1100e/sqlug/@Generic__BookTextView/16622;pt=15743;lang=ru\n\nEach table can include a single IDENTITY column. \nIDENTITY columns store sequential numbers such as invoice numbers, \nemployee numbers, or record numbers that are generated automatically \nby SQL Server. The value of the IDENTITY column uniquely\nidentifies each row in a table.\n\nInformix confuses me:\n\nhttp://www.informix.com/answers/english/pdf_docs/gn7382/4365.pdf\n\nThe SERIAL data type is not automatically a unique column. \nYou must apply a unique index to this column to prevent \nduplicate serial numbers. If you use the interactive schema \neditor in DB-Access to define the table, a unique index is \napplied automatically to a SERIAL column.\n\nhttp://www.informix.com/answers/english/pdf_docs/gn7382/4366.pdf\n\nYou can specify a nonzero value for a serial column \n(as long as it does not duplicate any existing value in that column), ...\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n?!!!\n\nVadim\n", "msg_date": "Mon, 30 Nov 1998 10:40:43 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs" }, { "msg_contents": "> The SERIAL data type is not automatically a unique column. \n> You must apply a unique index to this column to prevent \n> duplicate serial numbers. If you use the interactive schema \n> editor in DB-Access to define the table, a unique index is \n> applied automatically to a SERIAL column.\n> \n> http://www.informix.com/answers/english/pdf_docs/gn7382/4366.pdf\n> \n> You can specify a nonzero value for a serial column \n> (as long as it does not duplicate any existing value in that column), ...\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> ?!!!\n\nYou can assign a value to a serial column, as long as it is unique.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 1 Dec 1998 20:21:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs" }, { "msg_contents": "> Hi,\n> \n> I have run into a problem dropping and re-creating tables with\n> type SERIAL:\n> \n> CREATE TABLE t ( s SERIAL);\n> DROP TABLE t;\n> CREATE TABLE t ( s SERIAL);\n> \n> gives\n> ERROR: t_s_seq relation already exists\n> \n> This looks like the implicitly created sequence t_s_seq is not dropped\n> together with the table.\n> \n> I am running a current (?) cvs snapshot from [email protected].\n\nAdded to TODO:\n\t\n\t* auto-destroy sequence on SERIAL removal\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 22:49:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs" } ]
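For background on the fix being discussed, SERIAL is currently little more than shorthand for an integer column with an on-the-fly sequence, which is why the sequence outlives the table; roughly (the sequence name follows the tab_attr_seq pattern Jan describes):

    CREATE TABLE t (s SERIAL);
    -- behaves approximately like:
    CREATE SEQUENCE t_s_seq;
    CREATE TABLE t (s int4 DEFAULT nextval('t_s_seq'));

    -- until the TODO item is done, the manual workaround is:
    DROP TABLE t;
    DROP SEQUENCE t_s_seq;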
[ { "msg_contents": "\nsubscripe\n\n\n", "msg_date": "Fri, 27 Nov 1998 13:29:56 -0200 (EDT)", "msg_from": "Gilmar Ribeiro da Rosa <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Hi,\n\n here are some details I would like to discuss before beeing\n too deep in the implementation that it's hard to change them\n later.\n\n Point 1 is an extension of the pg_database relation that is\n required to see the actual redolog state of a database. New\n fields are:\n\n lastbackup datetime\n redomode int4\n redoseq1 int4\n\n lastbackup is the time, the last successful full backup was\n taken. More precise, the time when pg_dump switched the\n backend into online backup mode.\n\n redomode is defined as 0=none, 1=async-logging, 2=sync-\n logging, 4=restore, 5=recover, 6=error.\n\n redoseq1 is the sequence number of the redolog file began\n when pg_dump switched to online backup mode (this command\n implies a logfile switch).\n\n Point 2 is the extension of the querylanguage. All the\n statements are restricted to superusers or the owner of the\n database. The SQL statements to control the whole process\n are:\n\n ALTER DATABASE REDOMODE {NONE | ASYNCHRONOUS | SYNCHRONOUS};\n\n Turns logging for the database on or off. Database must\n be in normal operation mode for it (not restore or\n recover mode).\n\n ALTER DATABASE BEGIN BACKUP;\n\n Issued by pg_dump before doing anything else.\n\n The command stops ALL other activity in the database, so\n pg_dump has time to pull out at least the information\n about sequences (actually it does this while getting\n tables, might require some changes there so the database\n get's back accessible soon).\n\n ALTER DATABASE ONLINE BACKUP;\n\n Issued by pg_dump when it finished the things that\n require total exclusive database access.\n\n At this time, a logfile switch is done (only if the\n actual database is really logged) and the sequence number\n of the new logfile plus the current datetime remembered.\n The behaviour of pg_dump's backend changes. It will see a\n snapshot of this time (implemented in tqual code) in any\n subsequent command and it is totally unable to do\n anything that would update the database.\n\n Until the final END BACKUP is given, no VACUUM or DROP\n TABLE etc. commands can be run. If they are issued, the\n command will be delayed until pg_dump finished.\n\n ALTER DATABASE END BACKUP;\n\n This turns back the special behaviour of pg_dump's\n backend. Additionally the remembered time and redolog\n sequence are stored in pg_database. pg_dump can read\n them out for the final statement in the dump output (see\n below).\n\n ALTER DATABASE BEGIN RESTORE;\n\n This command checks that the actual database is just\n created and not one single command has been executed\n before. It is the first command in pg_dump's output if\n the database beeing dumped is a logged one.\n\n It switches the database into restore mode. In this mode,\n the first command on a new database connection must be\n the special command\n\n RECOVER DATABASE AS USER 'uname'\n\n or an\n\n ALTER DATABASE END RESTORE ...;\n\n When doing the ACL stuff, pg_dump must output a reconnect\n (\\c) to the database without the additional username and\n then issue the special command.\n\n ALTER DATABASE END RESTORE [RECOVERY FROM redolog_seq];\n\n This ends the restore mode. The additional RECOVERY FROM\n is put into by pg_dump for logged databases only. It\n reads out this information after END BACKUP. If not\n given, the database is switched into normal operation\n mode without logging. But if given, the sequence number\n is stored in pg_database and the database is put into\n recover mode. 
In that mode, only RECOVER commands can be\n issued.\n\n RECOVER DATABASE {ALL | UNTIL 'datetime' | RESET};\n\n The database must be in recover mode. If RESET is given,\n the mode is switched to ASYNC logging, The lastbackup\n field is set to 'epoch' and redoseq1 set to 0. It resets\n the database to the state at the backup snapshot time.\n\n For the others, the backend starts the recovery program\n which reads the redolog files, establishes database\n connections as required and reruns all the commands in\n them. If a required logfile isn't found, it tells the\n backend and waits for the reply. The backend tells the\n user what happened on error (redolog file with seq n\n required but not found ...). So the user can put back the\n required redolog files and let recover resume (actually\n the user pgsql or root must put them back :-).\n\n If the recovery is interrupted (controlling backend\n terminates), the database is set into error mode and only\n a RECOVER DATABASE RESET will help.\n\n If the recovery finally succeeds, the same as for RESET\n happens. The database is online in async logmode.\n\n Since the \"destroydb\" is also remembered in the redolog,\n recovery will stop at least if it hit's that for the\n database actually recoverd. This is to prevent faulty\n recovery which could occure if someone destroy's one\n database, creates a new one with the same name but\n different contents that is logged, destroy's it again and\n then want's to restore and recover the first.\n\n RECOVER DATABASE CONTINUE;\n\n After beeing told to restore some more redolog files,\n this command let's the recovery resume.\n\n RECOVER DATABASE AS USER 'uname';\n\n A special command used in restore and recover mode only.\n This is restricted to superusers with usecatupd right\n (not db owner) and modifies the current username in the\n backend. It's ugly, but the problem is that ordinary\n users should not be able to use the database while it is\n in restore or recover mode. So the connection cannot be\n established like with '\\c - username'.\n\n For the restore and recover this means, that a real\n superuser with unrestricted access is needed to restore a\n database that was dumped with ACL info. But otherwise\n one with createdb but not superuser rights could put a\n CREATE USER into a file, create a new database and\n \"restore\" that as user pgsql. I'm sure we don't want\n this.\n\n ###\n\n Whow - hopefully I didn't forgot anything.\n\n All that might look very complicated, but the only commands\n someone would really use manually will be\n\n ALTER DATABASE REDOMODE ...;\n\n and\n\n RECOVER DATABASE ...;\n\n Anything else is used by pg_dump and the recovery program.\n\n What I'm trying to implement with it is a behaviour that\n makes it possible to backup, restore and recover a database\n on a production system without running closed shop. Doing so\n for one database will not affect the others in the instance,\n since no modification of the hba conf or anything else will\n be required. Only the database actually beeing restored is\n closed for normal usage.\n\n Hmmm - just have the next idea right now. One more field in\n pg_database could tell that the db is shut down or restricted\n Someone could disable a single database or restrict usage to\n superusers/dbowner for some time, make database readonly etc.\n\n Anyway - does anybody see problems with the above? Do we need\n more functionality? 
Oh yeah - another utility that follows\n log and replicates databases onto other systems on the fly.\n But let me get this all running first please :-).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 27 Nov 1998 22:17:46 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "redolog - for discussion" }, { "msg_contents": "Jan Wieck wrote:\n> \n> ALTER DATABASE BEGIN BACKUP;\n> \n> Issued by pg_dump before doing anything else.\n> \n> The command stops ALL other activity in the database, so\n> pg_dump has time to pull out at least the information\n> about sequences (actually it does this while getting\n> tables, might require some changes there so the database\n> get's back accessible soon).\n> \n> ALTER DATABASE ONLINE BACKUP;\n> \n> Issued by pg_dump when it finished the things that\n> require total exclusive database access.\n> \n> At this time, a logfile switch is done (only if the\n> actual database is really logged) and the sequence number\n> of the new logfile plus the current datetime remembered.\n> The behaviour of pg_dump's backend changes. It will see a\n> snapshot of this time (implemented in tqual code) in any\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nNote, that I'm implementing multi-version concurrency control \n(MVCC) for 6.5: pg_dump will have to run all queries\nin one transaction in SERIALIZED mode to get snapshot of\ntransaction' begin time...\n\n> subsequent command and it is totally unable to do\n> anything that would update the database.\n> \n> Until the final END BACKUP is given, no VACUUM or DROP\n> TABLE etc. commands can be run. If they are issued, the\n> command will be delayed until pg_dump finished.\n\nVacuum will not be delete records in which any active\nbackend is interested - don't worry.\n\n...\n\n> \n> All that might look very complicated, but the only commands\n ^^^^^^^^^^^^^^^^\nYes -:)\nWe could copy/move pg_dump' stuff into backend...\nThis way pg_dump will just execute one command\n\nALTER DATABASE ONLINE BACKUP; -- as I understand\n\n- backend will do all what it need and pg_dump just\nwrite backend' output to a file.\n\nI think that it would be nice to have code in backend to \ngenerate CREATE statements from catalog and extend EXPLAIN\nto handle something like EXPLAIN TABLE xxx etc.\nWe could call EXPLAIN for all \\dXXXX in psql and\nwhen dumping schema in pg_dump.\n\nComments?\n\nVadim\n", "msg_date": "Wed, 02 Dec 1998 11:18:42 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] redolog - for discussion" }, { "msg_contents": "Vadim wrote:\n\n>\n> Jan Wieck wrote:\n> >\n> > At this time, a logfile switch is done (only if the\n> > actual database is really logged) and the sequence number\n> > of the new logfile plus the current datetime remembered.\n> > The behaviour of pg_dump's backend changes. It will see a\n> > snapshot of this time (implemented in tqual code) in any\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> Note, that I'm implementing multi-version concurrency control\n> (MVCC) for 6.5: pg_dump will have to run all queries\n> in one transaction in SERIALIZED mode to get snapshot of\n> transaction' begin time...\n\n Sounds good and would make things easier. 
I'll keep my hands\n off from the tqual code and wait for that.\n\n But what about sequence values while in SERIALIZED\n transaction mode. Sequences get overwritten in place! And for\n a dump/restore/recover it is important, that the sequences\n get restored ALL at once in the state they where.\n\n>\n> > subsequent command and it is totally unable to do\n> > anything that would update the database.\n> >\n> > Until the final END BACKUP is given, no VACUUM or DROP\n> > TABLE etc. commands can be run. If they are issued, the\n> > command will be delayed until pg_dump finished.\n>\n> Vacuum will not be delete records in which any active\n> backend is interested - don't worry.\n\n That's the vacuum part, but I still need to delay DROP\n TABLE/VIEW/SEQUENCE until the backup is complete.\n\n>\n> ...\n>\n> >\n> > All that might look very complicated, but the only commands\n> ^^^^^^^^^^^^^^^^\n> Yes -:)\n> We could copy/move pg_dump' stuff into backend...\n> This way pg_dump will just execute one command\n>\n> ALTER DATABASE ONLINE BACKUP; -- as I understand\n>\n> - backend will do all what it need and pg_dump just\n> write backend' output to a file.\n>\n> I think that it would be nice to have code in backend to\n> generate CREATE statements from catalog and extend EXPLAIN\n> to handle something like EXPLAIN TABLE xxx etc.\n> We could call EXPLAIN for all \\dXXXX in psql and\n> when dumping schema in pg_dump.\n>\n> Comments?\n\n Indeed :-)\n\n If we have serialized transaction that covers sequences, only\n BEGIN and END BACKUP must remain. BEGIN to force the logfile\n switch and END to flag that dump is complete and backend can\n update pg_database.\n\n So you want to put major parts of pg_dump's functionality\n into the backend. Hmmm - would be cool. And it would give us\n a chance to include tests for most of the dump related code\n in regression.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 2 Dec 1998 18:11:50 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] redolog - for discussion" }, { "msg_contents": "Jan Wieck wrote:\n> \n> > >\n> > > At this time, a logfile switch is done (only if the\n> > > actual database is really logged) and the sequence number\n> > > of the new logfile plus the current datetime remembered.\n> > > The behaviour of pg_dump's backend changes. It will see a\n> > > snapshot of this time (implemented in tqual code) in any\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > Note, that I'm implementing multi-version concurrency control\n> > (MVCC) for 6.5: pg_dump will have to run all queries\n> > in one transaction in SERIALIZED mode to get snapshot of\n> > transaction' begin time...\n> \n> Sounds good and would make things easier. I'll keep my hands\n> off from the tqual code and wait for that.\n> \n> But what about sequence values while in SERIALIZED\n> transaction mode. Sequences get overwritten in place! And for\n> a dump/restore/recover it is important, that the sequences\n> get restored ALL at once in the state they where.\n\nIt's time to re-implement sequences! When they were implemented\n~ 1.5 year ago there was no GRANT/REVOKE on VIEWs and so\nI had to create table for each sequence.\nThere should be one system table - pg_sequence. 
One record\nfor each sequence will be inserted into this table and\none VIEW will be created: \n\nCREATE VIEW _seqname_ AS\nSELECT * FROM pg_sequence WHERE sequence_name = '_seqname_';\n\nGRANT/REVOKE on sequnece' VIEW will control rights to read sequence\nusing SELECT and rights to change sequence using nextval/setval.\n\nHaving _one_ sequences table there will be easy to lock\nall sequences at once and read all values.\n\n> >\n> > Vacuum will not be delete records in which any active\n> > backend is interested - don't worry.\n> \n> That's the vacuum part, but I still need to delay DROP\n> TABLE/VIEW/SEQUENCE until the backup is complete.\n\nYes. And ALTER too.\n\nVadim\n", "msg_date": "Sat, 05 Dec 1998 19:19:50 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] redolog - for discussion" }, { "msg_contents": "> It's time to re-implement sequences! When they were implemented\n> ~ 1.5 year ago there was no GRANT/REVOKE on VIEWs and so\n> I had to create table for each sequence.\n> There should be one system table - pg_sequence. One record\n> for each sequence will be inserted into this table and\n> one VIEW will be created: \n\nI thought you wanted a single table to prevent concurrent access/update\ncontension?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 23:28:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] redolog - for discussion" }, { "msg_contents": "On Sat, 12 Dec 1998, Bruce Momjian wrote:\n\n> > It's time to re-implement sequences! When they were implemented\n> > ~ 1.5 year ago there was no GRANT/REVOKE on VIEWs and so\n> > I had to create table for each sequence.\n> > There should be one system table - pg_sequence. One record\n> > for each sequence will be inserted into this table and\n> > one VIEW will be created: \n> \n> I thought you wanted a single table to prevent concurrent access/update\n> contension?\n\n\tlet's revise what vadim stated *grin*\n\n\t\"we should re-implement sequences for v6.5, since row-level\nlocking will exist at that time\"? *rofl*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 13 Dec 1998 17:04:03 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] redolog - for discussion" }, { "msg_contents": "> On Sat, 12 Dec 1998, Bruce Momjian wrote:\n> \n> > > It's time to re-implement sequences! When they were implemented\n> > > ~ 1.5 year ago there was no GRANT/REVOKE on VIEWs and so\n> > > I had to create table for each sequence.\n> > > There should be one system table - pg_sequence. One record\n> > > for each sequence will be inserted into this table and\n> > > one VIEW will be created: \n> > \n> > I thought you wanted a single table to prevent concurrent access/update\n> > contension?\n> \n> \tlet's revise what vadim stated *grin*\n> \n> \t\"we should re-implement sequences for v6.5, since row-level\n> locking will exist at that time\"? *rofl*\n\nOh, now I understand. :-) Vadim, don't be so modest. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 13 Dec 1998 18:57:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] redolog - for discussion" }, { "msg_contents": "Jan Wieck wrote:\n> \n> RECOVER DATABASE {ALL | UNTIL 'datetime' | RESET};\n> \n...\n>\n> For the others, the backend starts the recovery program\n> which reads the redolog files, establishes database\n> connections as required and reruns all the commands in\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n> them. If a required logfile isn't found, it tells the\n ^^^^^\n\nI foresee problems with using _commands_ logging for\nrecovery/replication -:((\n\nLet's consider two concurrent updates in READ COMMITTED mode:\n\nupdate test set x = 2 where y = 1;\n\n\tand\n\nupdate test set x = 3 where y = 1;\n\nThe result of both committed transaction will be x = 2\nif the 1st transaction updated row _after_ 2nd transaction\nand x = 3 if the 2nd transaction gets row after 1st one.\nOrder of updates is not defined by order in which commands\nbegun and so order in which commands should be rerun\nwill be unknown...\n\nComments?\n\nVadim\n", "msg_date": "Wed, 16 Dec 1998 20:35:25 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] redolog - for discussion" }, { "msg_contents": "Vadim wrote:\n\n>\n> Jan Wieck wrote:\n> >\n> > RECOVER DATABASE {ALL | UNTIL 'datetime' | RESET};\n> >\n> ...\n> >\n> > For the others, the backend starts the recovery program\n> > which reads the redolog files, establishes database\n> > connections as required and reruns all the commands in\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > them. If a required logfile isn't found, it tells the\n> ^^^^^\n>\n> I foresee problems with using _commands_ logging for\n> recovery/replication -:((\n>\n> Let's consider two concurrent updates in READ COMMITTED mode:\n>\n> update test set x = 2 where y = 1;\n>\n> and\n>\n> update test set x = 3 where y = 1;\n>\n> The result of both committed transaction will be x = 2\n> if the 1st transaction updated row _after_ 2nd transaction\n> and x = 3 if the 2nd transaction gets row after 1st one.\n> Order of updates is not defined by order in which commands\n> begun and so order in which commands should be rerun\n> will be unknown...\n\n Yepp, the order in which commands begun is absolutely not of\n interest. Locking could already delay the execution of one\n command until another one started later has finished and\n released the lock. It's a classic race condition.\n\n Thus, my plan was to log the queries just before the call to\n CommitTransactionCommand() in tcop. This has the advantage,\n that queries which bail out with errors don't get into the\n log at all and must not get rerun. And I can set a static\n flag to false before starting the command, which is set to\n true in the buffer manager when a buffer is written (marked\n dirty), so filtering out queries that do no updates at all is\n easy.\n\n Unfortunately query level logging get's hit by the current\n implementation of sequence numbers. If a query that get's\n aborted somewhere in the middle (maybe by a trigger) called\n nextval() for rows processed earlier, the sequence number\n isn't advanced at recovery time, because the query is\n suppressed at all. And sequences aren't locked, so for\n concurrently running queries getting numbers from the same\n sequence, the results aren't reproduceable. 
If some\n application selects a value resulting from a sequence and\n uses that later in another query, how could the redolog know\n that this has changed? It's a Const in the query logged, and\n all that corrupts the whole thing.\n\n All that is painful and I don't see another solution yet than\n to hook into nextval(), log out the numbers generated in\n normal operation and getting back the same numbers in redo\n mode.\n\n The whole thing gets more and more complicated :-(\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 16 Dec 1998 21:07:00 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] redolog - for discussion" }, { "msg_contents": "Jan Wieck wrote:\n> \n> >\n> > I foresee problems with using _commands_ logging for\n> > recovery/replication -:((\n> >\n...\n> \n> Yepp, the order in which commands begun is absolutely not of\n> interest. Locking could already delay the execution of one\n> command until another one started later has finished and\n> released the lock. It's a classic race condition.\n> \n> Thus, my plan was to log the queries just before the call to\n> CommitTransactionCommand() in tcop. This has the advantage,\n\nOh, I see - you right!\n\n...\n> \n> Unfortunately query level logging get's hit by the current\n> implementation of sequence numbers. If a query that get's\n...\n> \n> All that is painful and I don't see another solution yet than\n> to hook into nextval(), log out the numbers generated in\n\nNot so bad, having buffering these numbers in memory...\n\n> normal operation and getting back the same numbers in redo\n> mode.\n> \n> The whole thing gets more and more complicated :-(\n\nAs usual -:))\n\nVadim\n", "msg_date": "Thu, 17 Dec 1998 15:03:18 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] redolog - for discussion" }, { "msg_contents": "\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Thursday, December 17, 1998 5:03 PM\n> To: Jan Wieck\n> Cc: [email protected]\n> Subject: Re: [HACKERS] redolog - for discussion\n> \n> \n> Jan Wieck wrote:\n> > \n> > >\n> > > I foresee problems with using _commands_ logging for\n> > > recovery/replication -:((\n> > >\n> ...\n> > \n> > Yepp, the order in which commands begun is absolutely not of\n> > interest. Locking could already delay the execution of one\n> > command until another one started later has finished and\n> > released the lock. It's a classic race condition.\n> > \n> > Thus, my plan was to log the queries just before the call to\n> > CommitTransactionCommand() in tcop. 
This has the advantage,\n> \n> Oh, I see - you right!\n>\n\nIf image level logging is used,probably it's OK.\nBut if query(command) level logging is used ???\n\nIf the isolation level of all transactions is SERIARIZABLE,it's probably \nOK because they are serializable order by the time when they are \ncommitted.\nBut if there are transactions whose isolation level is READ COMMITTED,\nthey are not serializable.\nSo commands must be issued according to the original order when they \nwere issued ?\n\nIf the same mechanism of locking is used at recovery time,the order \nof locks caused by commands(rerun) will be same ?????\nI'm not confident. \n \nThanks.\n\nHiroshi inoue\[email protected]\n", "msg_date": "Fri, 18 Dec 1998 09:56:49 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] redolog - for discussion" } ]
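Pulling Jan's proposed grammar together, one full backup-and-recover cycle would read roughly as follows; none of this syntax exists yet, it is only the design under discussion, and the sequence number is a placeholder:

    ALTER DATABASE REDOMODE ASYNCHRONOUS;   -- turn logging on
    ALTER DATABASE BEGIN BACKUP;            -- issued by pg_dump: exclusive phase
    ALTER DATABASE ONLINE BACKUP;           -- logfile switch, snapshot phase
    ALTER DATABASE END BACKUP;              -- store time and redolog sequence

    -- later, against a freshly created database:
    ALTER DATABASE BEGIN RESTORE;
    -- ... replay the dump ...
    ALTER DATABASE END RESTORE RECOVERY FROM 42;
    RECOVER DATABASE UNTIL 'datetime';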
[ { "msg_contents": "Just done in CURRENT tree.\nRegression tests are OK.\n\nVadim\n", "msg_date": "Sat, 28 Nov 1998 05:51:26 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "HeapTuple changed" } ]
[ { "msg_contents": "I changed some minor stuff in Thomas Good's man page for ecpg and would like\nto see this one replace the man page for ecpg we currently have in CVS. It's\nattached. Or should this one go to patches too?\n\nThanks\n\nMichael\n-- \nDr. Michael Meskes, Manager of the Western Branch Office, Datenrevision GmbH\nwork: Cuxhavener Str. 36, D-21149 Hamburg, [email protected]\nhome: Th.-Heuss-Str. 61, D-41812 Erkelenz, [email protected]\nGo SF49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!", "msg_date": "Sat, 28 Nov 1998 15:18:43 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "ecpg man page" }, { "msg_contents": "> I changed some minor stuff in Thomas Good's man page for ecpg and would like\n> to see this one replace the man page for ecpg we currently have in CVS. It's\n> attached. Or should this one go to patches too?\n> \n> Thanks\n\nApplied to both trees.\n\nSeems this manual page has not been converted to sgml yet. Thomas, is\nthat true? If not, please see the attached file.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n.TH ECPG UNIX 11/28/98 PostgreSQL \\fIPostgreSQL\\fP\n.SH NAME\necpg - embedded SQL preprocessor for C / PostgreSQL\n.SH SYNOPSIS\n.\\\" \\fBecpg\\fR [-v ] [-t] [-I include-path ] [-o outfile ] file1 [ file2 ] [ ... ]\n\\fBecpg\\fR [-v ] [-t] [-I include-path ] [-o outfile ] file1 [ file2 ] [ ... ]\n.SH DESCRIPTION\n.B \\fIecpg\\fP\nis an embedded SQL preprocessor for C / PostgreSQL. It\nenables development of C programs with embedded SQL code.\n.PP\n.B \\fIecpg\\fP\nis ultimately intended to be as compliant as possible with the \nANSI SQL-2 standard and existing commercial ESQL/C packages. \n.SH OPTIONS\n.B \\fIecpg\\fP\ninterprets the following flags when it is invoked \non the command line:\n.PP\n.PD 0\n.TP 10\n.BI \\-v \nPrint version information. \n.PD\n.TP\n.B \\-t\nTurn off auto-transactin mode.\n.PD\n.TP\n.PD\n.TP\n.B \\-I include-path\nSpecify additional include path. Defaults are \\.,\n/usr/local/include, the PostgreSQL include path which is defined at compile\ntime (default: /usr/local/pgsql/lib), /usr/include\n.PD\n.TP\n.B \\-o\nSpecifies that ecpg should write all its output to outfile.\nIf no such option is given the output is written to foo.c\n(if the input file was named foo.pgc.)\nIf the input file was named foo.bar the output file will be\nnamed foo.bar.c. \n.PD\n.TP\n.B file1, file2...\nThe files to be processed.\n.\\\" \n.SH INSTALLATION\nThe\n.B \\fIecpg\\fP\npreprocessor is built during the PostgreSQL installation. Binaries and\nlibraries are installed into the PGBASE (i.e., /usr/local/pgsql/... ) \nsubdirectories.\n.SH PREPROCESSING FOR COMPILATION\n.B \\fIecpg\\fP\n.\\\" (-d ) (-o file) file.pgc ( 2> ecpf.log)\n(-o file) file.pgc \n.LP\n.\\\" The optional \\-d flag turns on debugging and 2> ecpg.log\n.\\\" redirects the debug output. 
The .pgc extension is an \n.\\\" arbitrary means of denoting ecpg source.\nThe .pgc extension is an arbitrary means of denoting ecpg source.\n.SH COMPILING AND LINKING\nAssuming the \\fIPostgreSQL\\fP binaries are in /usr/local/pgsql:\n.LP\ngcc -g -i /usr/local/pgsql/include (-o file) file.c \n-L /usr/local/pgsql/lib -lecpg -lpq\n.SH ECPG GRAMMAR\n.LP\n.SH LIBRARIES\n.LP\nThe preprocessor will prepend two directives to the source:\n.LP\n\\fI#include <ecpgtype.h>\\fP and \\fI#include <ecpglib.h>\\fP\n.SH VARIABLE DECLARATION \nVariables declared within ecpg source code must be prepended with:\n.LP\nEXEC SQL BEGIN DECLARE SECTION; \n.LP \nSimilarly, variable declaration sections must terminate with:\n.LP\nEXEC SQL END DECLARE SECTION;\n.LP \nNOTE: prior to version 2.1.0, each variable had to be declared \non a separate line. As of version 2.1.0 multiple variables may\nbe declared on a single line:\n.LP\nchar foo(16), bar(16);\n.LP \n.SH ERROR HANDLING\nThe SQL communication area is defined with:\n.LP\nEXEC SQL INCLUDE sqlca;\n.LP\nNOTE: the lowercase `sqlca'. While SQL convention may be \nfollowed, i.e., using uppercase to separate embedded SQL \nfrom C statements, sqlca (which includes the sqlca.h \nheader file) MUST be lowercase. This is because the EXEC SQL\nprefix indicates that this INCLUDE will be parsed by ecpg.\necpg observes case sensitivity (SQLCA.h will not be found.)\nEXEC SQL INCLUDE can be used to include other header files\nas long as case sensitivity is observed.\n.LP\nThe sqlprint command is used with the EXEC SQL WHENEVER\nstatement to turn on error handling throughout the \nprogram:\n.LP\nEXEC SQL WHENEVER sqlerror sqlprint;\n.LP\nEXEC SQL WHENEVER not found sqlprint;\n.LP\nPLEASE NOTE: this is *not* an exhaustive example of usage for\nthe EXEC SQL WHENEVER statement. Further examples of usage may\nbe found in SQL manuals (e.g., `The LAN TIMES Guide to SQL' by\nGroff and Weinberg.)\n.LP\n.SH CONNECTING TO THE DATABASE SERVER\nPrior to version 2.1.0 the database name was single quoted:\n.RS\nEXEC SQL CONNECT 'test1';\n.RE\n.LP\nAs of version 2.1.0, the syntax has been simplified:\n.LP\n.RS\nEXEC SQL CONNECT test1;\n.RE\n(The database name is no longer quoted.)\n.LP\nSpecifying a server and port name in the connect statement is also possible\nas of version 6.4. of PostgreSQL. 
The syntax is:\n.LP\n.RS\ndbname[@server][:port]\n.RE\n.LP\nor\n.LP\n.RS\n<tcp|unix>:postgresql://server[:port][/dbname][?options]\n.RE\n.SH QUERIES\n.LP\n.SS Create Table:\n.LP\nEXEC SQL CREATE TABLE foo (number int4, ascii char(16)); \n.RS\nEXEC SQL CREATE UNIQUE index num1 on foo(number); \n.RE\nEXEC SQL COMMIT;\n.LP \n.SS Insert:\n.LP\nEXEC SQL INSERT INTO foo (number, ascii)\n.RS\nVALUES (9999, 'doodad');\n.RE\nEXEC SQL COMMIT;\n.LP\n.SS Delete:\n.LP\nEXEC SQL DELETE FROM foo\n.RS\nWHERE number = 9999; \n.RE\nEXEC SQL COMMIT;\n.LP\n.SS Singleton Select:\n.LP\nEXEC SQL SELECT foo INTO :FooBar FROM table1\n.RS\nWHERE ascii = 'doodad'; \n.RE\n.LP\n.SS Select using Cursors:\n.LP\nEXEC SQL DECLARE foo_bar CURSOR FOR \n.RS\nSELECT number, ascii FROM foo \n.RS\nORDER BY ascii;\n.RE\n.RE\nEXEC SQL FETCH foo_bar INTO :FooBar, DooDad;\n.LP\n...\nEXEC SQL CLOSE foo_bar;\n.RS\nEXEC SQL COMMIT;\n.RE\n.LP\n.SS Updates\n.LP\nEXEC SQL UPDATE foo\n.RS\nSET ascii = 'foobar'\n.RE\n.RS\nWHERE number = 9999;\n.RE\nEXEC SQL COMMIT;\n.LP\n.SH BUGS\n.LP\nThe is no EXEC SQL PREPARE statement.\n.LP\nThe complete structure definition MUST be listed\ninside the declare section.\n.LP\nSee the TODO file in the source for some more missing features.\n.LP\n.SH \"RETURN VALUE\"\n.LP\necpg returns 0 to the shell on successful completion, -1\nfor errors.\n.LP\n.SH \"SEE ALSO\"\n.PD 0\n.TP\n\\fIcc\\fP(1), \\fIpgintro\\fP(l), \\fIcommit\\fP(l), \\fIdelete\\fP(l)\n.TP\n\\fIfetch\\fP(l), \\fIselect\\fP(l), \\fIsql\\fP(l) , \\fIupdate\\fP(l)\n.PD\n.SH FILES\n.PD 0\n.TP\n.B /usr/src/pgsql/postgresql-${ver}/src/interfaces...\n ./ecpg/include.......source for \\fIecpg\\fP header files.\n ./ecpg/lib...........source for \\fIecpg\\fP libraries.\n ./ecpg/preproc.......source for \\fIecpg\\fP header files.\n ./ecpg/test..........source for \\fIecpg\\fP libraries.\n (test contains examples of syntax for ecpg SQL-C.)\n.PD\n.TP\n.B /usr/local/pgsql/bin \n\\fIPostgreSQL\\fP binaries including \\fIecpg\\fP.\n.PD\n.TP\n.B /usr/local/pgsql/include \n\\fIPostgreSQL\\fP headers including \\fIecpglib.h\\fP \\fIecpgtype.h\\fP \nand \\fIsqlca.h\\fP.\n.PD\n.TP\n.B /usr/local/pgsql/lib \n\\fIPostgreSQL\\fP libraries including \\fIlibecpg.a\\fP and \n\\fIlibecpg.so\\fP.\n.SH AUTHORS\nLinus Tolke \\fI<[email protected]>\\fP\n- original author of ECPG (up to version 0.2).\n.br\n.PP\nMichael Meskes \\fI<[email protected]>\\fP\n- actual author and maintainer of ECPG.\n.br\n.PP\nThomas Good \\fI<[email protected]>\\fP\n- author of this revision of the ecpg man page.\n.br\n.zZ", "msg_date": "Sat, 12 Dec 1998 22:55:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ecpg man page" }, { "msg_contents": "> > I changed some minor stuff (for) ecpg and would like\n> > to ... replace the man page for ecpg we currently have in CVS.\n> Seems this manual page has not been converted to sgml yet. Thomas, is\n> that true? If not, please see the attached file.\n\nMichael, the ecpg author, contributed ecpg.sgml several months ago. I\nwould think that he was just reconciling the information in the man page\nwith the existing sgml-based information, but don't know that for sure.\nMichael?\n\n - Tom\n", "msg_date": "Sun, 13 Dec 1998 05:11:04 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ecpg man page" }, { "msg_contents": "On Sun, 13 Dec 1998, Thomas G. 
Lockhart wrote:\n\n> > > I changed some minor stuff (for) ecpg and would like\n> > > to ... replace the man page for ecpg we currently have in CVS.\n> > Seems this manual page has not been converted to sgml yet. Thomas, is\n> > that true? If not, please see the attached file.\n> \n> Michael, the ecpg author, contributed ecpg.sgml several months ago. I\n> would think that he was just reconciling the information in the man page\n> with the existing sgml-based information, but don't know that for sure.\n> Michael?\n\nThomas,\n\nI rewrote the man page for Michael after he was of great assistance to\nme getting ecpg going. This magnum opus was omitted from 6.4 and I was\nof course devastated. I mentioned it to Michael M and he removed my\nerrata and then sent it on to the HACKERS list...so I expect the earlier\nversion differs from what Michael recently cleaned up.\n\n Cheers,\n Tom\n\n ----------- Sisters of Charity Medical Center ----------\n Department of Psychiatry\n ---- \n Thomas Good, System Administrator <[email protected]>\n North Richmond CMHC/Residential Services Phone: 718-354-5528\n 75 Vanderbilt Ave, Quarters 8 Fax: 718-354-5056\n Staten Island, NY 10304 www.panix.com/~ugd\n ---- \n Powered by PostgreSQL 6.3.2 / Perl 5.004 / DBI-0.91::DBD-PG-0.69 \n\n", "msg_date": "Sun, 13 Dec 1998 07:34:01 -0500 (EST)", "msg_from": "Thomas Good <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ecpg man page" }, { "msg_contents": "> ...so I expect the earlier\n> version differs from what Michael recently cleaned up.\n\nHmm. If you have time and inclination, it would be great if you could\nlook at ecpg.sgml and update it too. I'd hate to see the two get out of\nsync in information content. Eventually the man pages will be generated\nfrom the sgml files so man-only info may get lost.\n\n - Tom\n", "msg_date": "Sun, 13 Dec 1998 13:22:54 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ecpg man page" }, { "msg_contents": "On Sun, Dec 13, 1998 at 05:11:04AM +0000, Thomas G. Lockhart wrote:\n> Michael, the ecpg author, contributed ecpg.sgml several months ago. I\n\nI thought you did transform it to sgml. \n\n> would think that he was just reconciling the information in the man page\n> with the existing sgml-based information, but don't know that for sure.\n> Michael?\n\nMy problem is that I do not speak sgml. Therefore I only updated the man\npage from Tom Good's version but not the sgml part. Is there a tool I could\nuse to transform it? I try to remember that the next time and will only\nupdate the sgml file which should be much easier than trasnforming the\ncurrent man page.\n\nMichael\n\nP.S.: If this doesn't go through on the mailing list please forward it there\nagain. I'm currently in the process of changing some email addresses and\nhave no idea which message will apeear there first.\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Mon, 14 Dec 1998 11:17:15 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ecpg man page" } ]
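The cursor example from the man page can be sanity-checked directly in psql before wrapping it in EXEC SQL; a minimal transcript (the FETCH spelling below is the 6.x form and may differ in other versions):

    BEGIN;
    CREATE TABLE foo (number int4, ascii char(16));
    INSERT INTO foo (number, ascii) VALUES (9999, 'doodad');
    DECLARE foo_bar CURSOR FOR SELECT number, ascii FROM foo ORDER BY ascii;
    FETCH FORWARD 1 IN foo_bar;
    CLOSE foo_bar;
    COMMIT;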
[ { "msg_contents": "\n On FreeBSD 3.0 Nov. 1998 - ELF\n \n The snapshots for 981128, 981127, 981114 and 981106 all\n test the same on the regression tests.\n\n The backend dumps core on the regression tests 3 times when running\n create_function_2, triggers, and alter_table.\n In addition to these 3, it also fails the int8, float8, geometry,\nsanity_check, \n and plpgsql regression tests with minor errors in some tests. \n\n On FreeBSD 3.0 - snapshot - 980598 - aout\n\n Core is not dumped by the backend during the regression tests.\n The same tests as above fail, however.\n\n The database in actual use performs OK.\n I have not had any core dumps in my use of the database.\n\n Thank you,\n\n jim\n [email protected] \n \n", "msg_date": "Sat, 28 Nov 1998 16:38:06 EST", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Backend crashes on Regression tests" } ]
[ { "msg_contents": "Hello,\n\nI am attaching here my humble patch for pg_dump utility modified by me\nfor readability purposes.\n\nThe main (and single) thing that is doing is changing the layout of\npg_dump'ed table structure by writing a single field per line so viewing\nand changing the structure will be easier.\n\nIf you consider it useful, let it be included !\n\nWith my best regards,\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA", "msg_date": "Sat, 28 Nov 1998 18:42:46 +0200", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "My humble patch for pg_dump :-)" } ]
[ { "msg_contents": "On Sat, 28 Nov 1998, John Fieber wrote:\n\n> In working with the two, I've also found a couple complicated\n> join queries where I just couldn't get the optimizer in\n> PostgreSQL (6.3.2 and 6.4) to do the right thing, resulting in\n> several minutes of processing per query, while mySQL did the same\n> query in the blink of an eye.\n\n\tYou mention v6.4 above, so could you provide us with a way of\n\"reproducing\" the bug?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 28 Nov 1998 15:54:46 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mysql 321 - Mysql 322 - msql" }, { "msg_contents": "On Sat, 28 Nov 1998, The Hermit Hacker wrote:\n\n> On Sat, 28 Nov 1998, John Fieber wrote:\n> \n> > In working with the two, I've also found a couple complicated\n> > join queries where I just couldn't get the optimizer in\n> > PostgreSQL (6.3.2 and 6.4) to do the right thing, resulting in\n> > several minutes of processing per query, while mySQL did the same\n> > query in the blink of an eye.\n> \n> \tYou mention v6.4 above, so could you provide us with a way of\n> \"reproducing\" the bug?\n\nAttached is the database scheme from pg_dump (there are a bunch\nof extraneous tables in the context of this query). I make no\nclaims at all about the quality of the database design or the\nquery, but both Sybase and mySQL execute it very quickly while\nPostgreSQL refuses to use the index on the codes table.\n\n$sheet in the query is the \"user input\" to and is an integer.\n\n(For the curious, this is part of\nhttp://fallout.campusview.indiana.edu/mapfinder. I can supply\ndata if needed.)\n\n-john\n\n \tSELECT\n\t sheet.sheet_id,\n\t sheet.name,\n \t sheet.number,\n \t sheet.note,\n \t cat.call,\n \t cat.series,\n \t cat.main_entry,\n \t sheet.scale,\n \t ca.name as mtype,\n \t cb.name as prod,\n \t cc.name as proj,\n \t cd.name as pm,\n \t ce.name as format,\n \t sheet.coords\n \tFROM\n\t sheet,\n\t cat,\n\t codes ca,\n\t codes cb,\n\t codes cc,\n\t codes cd,\n\t codes ce\n \tWHERE\n\t sheet.sheet_id = $sheet \n \t AND sheet.cat_id = cat.cat_id\n\t AND sheet.mtype = ca.code_id\n\t AND sheet.prod = cb.code_id\n\t AND sheet.proj = cc.code_id\n\t AND sheet.pm = cd.code_id\n\t AND sheet.format = ce.code_id", "msg_date": "Sat, 28 Nov 1998 15:10:54 -0500 (EST)", "msg_from": "John Fieber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mysql 321 - Mysql 322 - msql" } ]
[ { "msg_contents": "\n\nOn Sat, 28 Nov 1998, The Hermit Hacker wrote:\n\n> On Sat, 28 Nov 1998, John Fieber wrote:\n> \n> > In working with the two, I've also found a couple complicated\n> > join queries where I just couldn't get the optimizer in\n> > PostgreSQL (6.3.2 and 6.4) to do the right thing, resulting in\n> > several minutes of processing per query, while mySQL did the same\n> > query in the blink of an eye.\n> \n> \tYou mention v6.4 above, so could you provide us with a way of\n> \"reproducing\" the bug?\n\nAttached is the database scheme from pg_dump (there are a bunch\nof extraneous tables in the context of this query). I make no\nclaims at all about the quality of the database design or the\nquery, but both Sybase and mySQL execute it very quickly while\nPostgreSQL refuses to use the index on the codes table.\n\n$sheet in the query is the \"user input\" to and is an integer.\n\n(For the curious, this is part of\nhttp://fallout.campusview.indiana.edu/mapfinder. I can supply\ndata if needed.)\n\n-john\n\n \tSELECT\n\t sheet.sheet_id,\n\t sheet.name,\n \t sheet.number,\n \t sheet.note,\n \t cat.call,\n \t cat.series,\n \t cat.main_entry,\n \t sheet.scale,\n \t ca.name as mtype,\n \t cb.name as prod,\n \t cc.name as proj,\n \t cd.name as pm,\n \t ce.name as format,\n \t sheet.coords\n \tFROM\n\t sheet,\n\t cat,\n\t codes ca,\n\t codes cb,\n\t codes cc,\n\t codes cd,\n\t codes ce\n \tWHERE\n\t sheet.sheet_id = $sheet \n \t AND sheet.cat_id = cat.cat_id\n\t AND sheet.mtype = ca.code_id\n\t AND sheet.prod = cb.code_id\n\t AND sheet.proj = cc.code_id\n\t AND sheet.pm = cd.code_id\n\t AND sheet.format = ce.code_id\n\n\n--0-1134614595-912283854=:795\nContent-Type: TEXT/PLAIN; charset=US-ASCII; name=\"mf.schema\"\nContent-Transfer-Encoding: BASE64\nContent-ID: <[email protected]>\nContent-Description: mapfinder schema\nContent-Disposition: attachment; 
filename=\"mf.schema\"\n\nQ1JFQVRFIFRBQkxFICJvcGVuZmllbGRzIiAoInNoZWV0X2lkIiAiaW50NCIs\nICJjb2RlIiAiaW50MiIsICJ2YWx1ZSIgImludDQiKTsNCkNSRUFURSBUQUJM\nRSAiY29kZXMiICgiY29kZV9pZCIgImludDQiLCAibmFtZSIgInRleHQiKTsN\nCkNSRUFURSBUQUJMRSAiY2F0IiAoImNhdF9pZCIgImludDQiLCAiY2FsbCIg\nInRleHQiLCAibWFpbl9lbnRyeSIgInRleHQiLCAic2VyaWVzIiAidGV4dCIp\nOw0KQ1JFQVRFIFRBQkxFICJzaGVldCIgKCJmaWxfbm8iICJpbnQ0IiwgInNo\nZWV0X2lkIiAiaW50NCIsICJuYW1lIiAidGV4dCIsICJudW1iZXIiICJ0ZXh0\nIiwgIm5vdGUiICJ0ZXh0IiwgImhvbGRpbmdzIiAiaW50MiIsICJjYXRfaWQi\nICJpbnQ0IiwgIm10eXBlIiAiaW50MiIsICJwcm9kIiAiaW50MiIsICJwcm9q\nIiAiaW50MiIsICJwbSIgImludDIiLCAiZm9ybWF0IiAiaW50MiIsICJzY2Fs\nZSIgImludDQiLCAiY29vcmRzIiAiYm94Iik7DQpDUkVBVEUgVEFCTEUgImdu\naXNfc3RhdGUiICgiaWQiICJpbnQ0IiwgImFiYnJldiIgdmFyY2hhcig0KSwg\nIm5hbWUiICJ0ZXh0Iik7DQpDUkVBVEUgVEFCTEUgImduaXNfZnR5cGUiICgi\naWQiICJpbnQ0IiwgImFiYnJldiIgY2hhcig4KSwgIm5hbWUiICJ0ZXh0Iik7\nDQpDUkVBVEUgVEFCTEUgImduaXNfY291bnR5IiAoImlkIiAiaW50NCIsICJz\ndGF0ZSIgImludDQiLCAibmFtZSIgInRleHQiKTsNCkNSRUFURSBUQUJMRSAi\nZ25pcyIgKCJmbmFtZSIgInRleHQiLCAiZm5hbWVfbGMiICJ0ZXh0IiwgImZ0\neXBlIiAiaW50NCIsICJzdGF0ZSIgImludDQiLCAiY291bnR5IiAiaW50NCIs\nICJlbGV2YXRpb24iICJpbnQ0IiwgInBvcHVsYXRpb24iICJpbnQ0IiwgImxv\nY2F0aW9uIiAicG9pbnQiKTsNCkNSRUFURSBGVU5DVElPTiAiY29kZXRleHQi\nICgiaW50MiIgKSBSRVRVUk5TICJ0ZXh0IiBBUyAnU0VMRUNUIGNvZGVzLm5h\nbWUgd2hlcmUgY29kZXMuY29kZV9pZCA9ICQxOycgTEFOR1VBR0UgJ1NRTCc7\nDQpDUkVBVEUgIElOREVYICJpX29wZW5maWVsZHMiIG9uICJvcGVuZmllbGRz\nIiB1c2luZyBidHJlZSAoICJzaGVldF9pZCIgImludDRfb3BzIiApOw0KQ1JF\nQVRFICBJTkRFWCAiaV9jb2RlcyIgb24gImNvZGVzIiB1c2luZyBidHJlZSAo\nICJjb2RlX2lkIiAiaW50NF9vcHMiICk7DQpDUkVBVEUgIElOREVYICJpX2Nh\ndCIgb24gImNhdCIgdXNpbmcgaGFzaCAoICJjYXRfaWQiICJpbnQ0X29wcyIg\nKTsNCkNSRUFURSAgSU5ERVggImlfc2hlZXQiIG9uICJzaGVldCIgdXNpbmcg\nYnRyZWUgKCAic2hlZXRfaWQiICJpbnQ0X29wcyIgKTsNCkNSRUFURSAgSU5E\nRVggImlfc2hlZXRuYW1lIiBvbiAic2hlZXQiIHVzaW5nIGJ0cmVlICggIm5h\nbWUiICJ0ZXh0X29wcyIgKTsNCkNSRUFURSAgSU5ERVggImlfc2hlZXRjb29y\nZHMiIG9uICJzaGVldCIgdXNpbmcgcnRyZWUgKCAiY29vcmRzIiAiYm94X29w\ncyIgKTsNCkNSRUFURSAgSU5ERVggImduaXNfc3RhdGVfaSIgb24gImduaXNf\nc3RhdGUiIHVzaW5nIGJ0cmVlICggImlkIiAiaW50NF9vcHMiICk7DQpDUkVB\nVEUgIElOREVYICJnbmlzX2NvdW50eV9pIiBvbiAiZ25pc19jb3VudHkiIHVz\naW5nIGJ0cmVlICggImlkIiAiaW50NF9vcHMiICk7DQpDUkVBVEUgIElOREVY\nICJnbmlzX2kiIG9uICJnbmlzIiB1c2luZyBidHJlZSAoICJmbmFtZV9sYyIg\nInRleHRfb3BzIiApOw0K\n--0-1134614595-912283854=:795--\n\n\n", "msg_date": "Sat, 28 Nov 1998 17:14:32 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mysql 321 - Mysql 322 - msql" } ]
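A hedged first diagnostic for a report like this is to confirm what plan the optimizer actually picks once statistics are fresh; the sketch below uses only the standard VACUUM ANALYZE and EXPLAIN commands, with the literal 42 standing in for $sheet. One guess worth testing (an assumption, not a confirmed diagnosis): sheet.mtype and its siblings are int2 while codes.code_id and the i_codes index are int4, and an int2-to-int4 join qualifier is a classic reason for a btree index to be passed over, so casting both sides to a common type may change the plan.

```sql
VACUUM ANALYZE codes;          -- refresh the planner's statistics first

EXPLAIN SELECT ca.name         -- does the plan show an index scan on i_codes?
  FROM sheet, codes ca
 WHERE sheet.sheet_id = 42     -- stand-in for $sheet
   AND sheet.mtype = ca.code_id;
```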
[ { "msg_contents": "Several people have mentioned the need for a platform-specific FAQ for\nHPUX. I've finally stepped up and drafted one ... it's doubtless not\nvery complete yet, so if you have had problems using Postgres on HPUX\nwould you take a look and see what can be added or improved?\n\nI've placed the initial draft in the CVS sources (pgsql/doc/FAQ_HPUX)\nas well as on the website (http://postgreSQL.org/docs/faq-hpux.shtml,\nor you can get there by following the link from the main FAQ).\n\nBTW, would someone check my work with adding this new file to both\nthe main CVS branch and REL6_4? I committed the new file in the\nmain branch then did \n\tcvs tag -b REL6_4 FAQ_HPUX\nwhich seemed to work... but I'm not sure I know what I'm doing with\nCVS branches...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Nov 1998 18:29:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "New platform-specific FAQ for HPUX" }, { "msg_contents": "> Several people have mentioned the need for a platform-specific FAQ for\n> HPUX. I've finally stepped up and drafted one ... it's doubtless not\n> very complete yet, so if you have had problems using Postgres on HPUX\n> would you take a look and see what can be added or improved?\n> \n> I've placed the initial draft in the CVS sources (pgsql/doc/FAQ_HPUX)\n> as well as on the website (http://postgreSQL.org/docs/faq-hpux.shtml,\n> or you can get there by following the link from the main FAQ).\n\nFYI, I am using CVS client version 1.9:\n\n\tConcurrent Versions System (CVS) 1.9 (client/server)\n\nand having no problems, so I think 1.9 works. Not sure why Marc\nupgrading the server to 1.10 fixed problems.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 1 Dec 1998 20:09:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New platform-specific FAQ for HPUX" }, { "msg_contents": "On Tue, 1 Dec 1998, Bruce Momjian wrote:\n\n> > Several people have mentioned the need for a platform-specific FAQ for\n> > HPUX. I've finally stepped up and drafted one ... it's doubtless not\n> > very complete yet, so if you have had problems using Postgres on HPUX\n> > would you take a look and see what can be added or improved?\n> > \n> > I've placed the initial draft in the CVS sources (pgsql/doc/FAQ_HPUX)\n> > as well as on the website (http://postgreSQL.org/docs/faq-hpux.shtml,\n> > or you can get there by following the link from the main FAQ).\n> \n> FYI, I am using CVS client version 1.9:\n> \n> \tConcurrent Versions System (CVS) 1.9 (client/server)\n> \n> and having no problems, so I think 1.9 works. Not sure why Marc\n> upgrading the server to 1.10 fixed problems.\n\n\tCause we were running 1.9? And 1.10 was next logical step? :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 2 Dec 1998 00:52:57 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New platform-specific FAQ for HPUX" } ]
[ { "msg_contents": "I have just checked into the main-branch CVS tree some revisions\nto allow autoconf to decide whether the system has POSIX signals\nor not, rather than relying on the platform os.h file to tell us.\nThis seems to be the only reasonable way to support all the\ndifferent versions of HPUX correctly.\n\nI would like to check this into the 6.4 branch for 6.4.1, but\nI'm afraid Marc will break my thumbs if I just do that without\nany outside testing ;-). Please let me know whether this works\nOK on your platform.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Nov 1998 00:37:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "configure revisions to autodetect USE_POSIX_SIGNALS" }, { "msg_contents": "On Sun, 29 Nov 1998, Tom Lane wrote:\n\n> I have just checked into the main-branch CVS tree some revisions\n> to allow autoconf to decide whether the system has POSIX signals\n> or not, rather than relying on the platform os.h file to tell us.\n> This seems to be the only reasonable way to support all the\n> different versions of HPUX correctly.\n> \n> I would like to check this into the 6.4 branch for 6.4.1, but\n> I'm afraid Marc will break my thumbs if I just do that without\n> any outside testing ;-). Please let me know whether this works\n> OK on your platform.\n\n\tConfigure related changes, for the most part, do not bother me\nalong the Release branch...only something that will actually affect the\nbackend server does...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 29 Nov 1998 01:59:39 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] configure revisions to autodetect USE_POSIX_SIGNALS" } ]
[ { "msg_contents": "Hi,\n\nThe subject says all.\n\n\tRegards,\n\n\t\tOle\t\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Sun, 29 Nov 1998 22:56:34 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "How to see rules,functions and triggers in psql ?" }, { "msg_contents": "# psql foo\npsql> \\?\n\nThat lists all of the admin functions available from psql. \\df lists all\nfunctions, I'm not sure of the rest. Try it out and see.\n\n--\nTodd Graham Lewis MindSpring Enterprises [email protected]\n The Windows 2000 name was obviously created over a glass of root beer\n in the company cafeteria by a couple of executives looking for a way\n out of the Windows NT delays. -- John C. Dvorak\n\n", "msg_date": "Sun, 29 Nov 1998 16:06:59 -0500 (EST)", "msg_from": "Todd Graham Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to see rules,functions and triggers in psql ?" }, { "msg_contents": "Hi all\n\nOn Sun, 29 Nov 1998, Todd Graham Lewis wrote:\n\n> # psql foo\n> psql> \\?\n> \n> That lists all of the admin functions available from psql. \\df lists all\n> functions, I'm not sure of the rest. Try it out and see.\n\nI suspect what the person was asking, was how to see user defined\nfunction.\n'\\df' only shows built in functions, and they just scroll off the screen\nfor ever.\n\nMaybe we could use a '\\duf' to see user defined functions? I know this\nwould have been nice a couple of times for me.\n\nHave a great day\nTerry Mackintosh <[email protected]> http://www.terrym.com\nsysadmin/owner Please! No MIME encoded or HTML mail, unless needed.\n\nProudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.4\n-------------------------------------------------------------------\nSuccess Is A Choice ... book by Rick Patino, get it, read it!\n\n", "msg_date": "Sun, 29 Nov 1998 21:39:59 -0500 (EST)", "msg_from": "Terry Mackintosh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to see rules,functions and triggers in psql ?" }, { "msg_contents": "On Sun, 29 Nov 1998, Todd Graham Lewis wrote:\n\n> Date: Sun, 29 Nov 1998 16:06:59 -0500 (EST)\n> From: Todd Graham Lewis <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [HACKERS] How to see rules,functions and triggers in psql ?\n> \n> # psql foo\n> psql> \\?\n> \n> That lists all of the admin functions available from psql. \\df lists all\n> functions, I'm not sure of the rest. Try it out and see.\n\nI was asking about user defined functions,triggers and rules.\nI've read man psql. \n\n\tOleg\n\n> \n> --\n> Todd Graham Lewis MindSpring Enterprises [email protected]\n> The Windows 2000 name was obviously created over a glass of root beer\n> in the company cafeteria by a couple of executives looking for a way\n> out of the Windows NT delays. -- John C. 
Dvorak\n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 30 Nov 1998 07:09:10 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] How to see rules,functions and triggers in psql ?" } ]
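Until something like the proposed \duf exists, the underlying catalogs can be queried by hand. A rough sketch (the catalog and column names are as recalled for the 6.4-era system tables, and the 16384 OID cutoff separating initdb-built rows from user-defined ones is an assumption, not a documented constant):

```sql
-- user-defined functions: rows created after initdb
SELECT proname FROM pg_proc WHERE oid > 16383;

-- triggers, joined to the table each one fires on
SELECT t.tgname, c.relname
  FROM pg_trigger t, pg_class c
 WHERE t.tgrelid = c.oid;

-- rewrite rules by name
SELECT rulename FROM pg_rewrite;
```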
[ { "msg_contents": "Dear Sir/Madam:\n\nA few days ago I stumped into a site mentioned words along the post??? The database can handle 200 Giga Byte of data and the CD was $6.00. Today I did my search again I could never find it. Are you the proper site that I should be in or you could point out otherwise. Does your postgresql handle 200 GB size data ? I also remembered that they mentioned the word \"postmaster\".\n\nWith sincere esteem\n\nPang-Hsin Wang\n\[email protected]\n\n(613) 830-8138\n\n\n\n\n\n\nDear Sir/Madam:\n \nA few days ago I stumped into a site mentioned words along the \npost??? The database can handle 200 Giga Byte of data and the CD was $6.00. \nToday I did my search again I could never find it. Are you the proper site that \nI should be in or you could point out otherwise. Does your postgresql handle 200 \nGB size data ? I also remembered that they mentioned the word \n\"postmaster\".\n \nWith sincere esteem\n \nPang-Hsin Wang\n \[email protected]\n \n(613) 830-8138", "msg_date": "Sun, 29 Nov 1998 22:47:44 -0500", "msg_from": "\"Pang-Hsin Wang\" <[email protected]>", "msg_from_op": true, "msg_subject": "ARP. ARP (Address Resolution Protocol i.e. Who are you)" } ]
[ { "msg_contents": "I'm going to remove subj...\n\nbuf_internals.h:\n\n /*\n * I padded this structure to a power of 2 (PADDED_SBUFDESC_SIZE)\n * because BufferDescriptorGetBuffer is called a billion times and it\n * does an C pointer subtraction (i.e., \"x - y\" -> array index of x\n * relative to y, which is calculated using division by struct size).\n ^^^^^^^^^^^^^^^^^^^^^^^^\n * Integer \".div\" hits you for 35 cycles, as opposed to a 1-cycle\n * \"sra\" ... this hack cut 10% off of the time to create the Wisconsin\n * database! It eats up more shared memory, of course, but we're\n * (allegedly) going to make some of these types bigger soon anyway...\n * -pma 1/2/93\n */\n\nThis is not true now:\n\n#define BufferDescriptorGetBuffer(bdesc) ((bdesc)->buf_id + 1)\n\nComments ?...\n\nVadim\n", "msg_date": "Mon, 30 Nov 1998 12:07:37 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "sbufdesc' padding..." }, { "msg_contents": "> * ... this hack cut 10% off of the time to create the Wisconsin\n> * database! It eats up more shared memory, of course, but we're\n> * (allegedly) going to make some of these types bigger soon \n> * anyway... -pma 1/2/93\n> This is not true now:\n> #define BufferDescriptorGetBuffer(bdesc) ((bdesc)->buf_id + 1)\n> Comments ?...\n\nDoes that mean that we have re-introduced the slower allocation\ntechnique sometime since 1993? 10% speed improvement for some operations\nseems interesting...\n\n - Tom\n", "msg_date": "Mon, 30 Nov 1998 16:28:08 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sbufdesc' padding..." }, { "msg_contents": "\"Thomas G. Lockhart\" wrote:\n> \n> > * ... this hack cut 10% off of the time to create the Wisconsin\n> > * database! It eats up more shared memory, of course, but we're\n> > * (allegedly) going to make some of these types bigger soon\n> > * anyway... -pma 1/2/93\n> > This is not true now:\n> > #define BufferDescriptorGetBuffer(bdesc) ((bdesc)->buf_id + 1)\n> > Comments ?...\n> \n> Does that mean that we have re-introduced the slower allocation\n> technique sometime since 1993? 10% speed improvement for some operations\n> seems interesting...\n\nWhy slower allocation? BufferDescriptorGetBuffer used sizeof sbufdesc\nto get buffer number in 1993. Jolly/Andrew put buffer number\n(buf_id) into sbufdesc itself and so made sbufdesc padding\nuseless.\n\nVadim\n", "msg_date": "Tue, 01 Dec 1998 09:50:44 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] sbufdesc' padding..." }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> Why slower allocation? BufferDescriptorGetBuffer used sizeof sbufdesc\n> to get buffer number in 1993. Jolly/Andrew put buffer number\n> (buf_id) into sbufdesc itself and so made sbufdesc padding\n> useless.\n\nAh. I had only read the comment without understanding the current\nimplementation...\n\n - Tom\n", "msg_date": "Tue, 01 Dec 1998 06:00:43 +0000", "msg_from": "\"Thomas G. Lockhart\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sbufdesc' padding..." 
}, { "msg_contents": "> I'm going to remove subj...\n> \n> buf_internals.h:\n> \n> /*\n> * I padded this structure to a power of 2 (PADDED_SBUFDESC_SIZE)\n> * because BufferDescriptorGetBuffer is called a billion times and it\n> * does an C pointer subtraction (i.e., \"x - y\" -> array index of x\n> * relative to y, which is calculated using division by struct size).\n> ^^^^^^^^^^^^^^^^^^^^^^^^\n> * Integer \".div\" hits you for 35 cycles, as opposed to a 1-cycle\n> * \"sra\" ... this hack cut 10% off of the time to create the Wisconsin\n> * database! It eats up more shared memory, of course, but we're\n> * (allegedly) going to make some of these types bigger soon anyway...\n> * -pma 1/2/93\n> */\n> \n> This is not true now:\n> \n> #define BufferDescriptorGetBuffer(bdesc) ((bdesc)->buf_id + 1)\n> \n> Comments ?...\n> \n> Vadim\n> \n> \n\nYou can remove the comment about size-of-2. It currently pads to a\nfixed size of 128, because the old code didn't work. We now do:\n\n /*\n * please, don't take the sizeof() this member and use it for\n * something important\n */\n \n char sb_relname[NAMEDATALEN + /* name of reln */\n PADDED_SBUFDESC_SIZE - sizeof(struct sbufdesc_unpadded)];\n\nwhich is much stranger, but works 100% of the time.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 1 Dec 1998 20:26:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sbufdesc' padding..." }, { "msg_contents": "> > * ... this hack cut 10% off of the time to create the Wisconsin\n> > * database! It eats up more shared memory, of course, but we're\n> > * (allegedly) going to make some of these types bigger soon \n> > * anyway... -pma 1/2/93\n> > This is not true now:\n> > #define BufferDescriptorGetBuffer(bdesc) ((bdesc)->buf_id + 1)\n> > Comments ?...\n> \n> Does that mean that we have re-introduced the slower allocation\n> technique sometime since 1993? 10% speed improvement for some operations\n> seems interesting...\n> \n\nNo, just the way of computing the padding has changed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 1 Dec 1998 20:30:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sbufdesc' padding..." }, { "msg_contents": "> \"Thomas G. Lockhart\" wrote:\n> > \n> > > * ... this hack cut 10% off of the time to create the Wisconsin\n> > > * database! It eats up more shared memory, of course, but we're\n> > > * (allegedly) going to make some of these types bigger soon\n> > > * anyway... -pma 1/2/93\n> > > This is not true now:\n> > > #define BufferDescriptorGetBuffer(bdesc) ((bdesc)->buf_id + 1)\n> > > Comments ?...\n> > \n> > Does that mean that we have re-introduced the slower allocation\n> > technique sometime since 1993? 10% speed improvement for some operations\n> > seems interesting...\n> \n> Why slower allocation? BufferDescriptorGetBuffer used sizeof sbufdesc\n> to get buffer number in 1993. 
Jolly/Andrew put buffer number\n> (buf_id) into sbufdesc itself and so made sbufdesc padding\n> useless.\n\nOh, then you can remove the padding.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 1 Dec 1998 20:39:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] sbufdesc' padding..." } ]
[ { "msg_contents": "\nHello,\nI'm a new user of PostgreSQL 6.3.2. I plan to build Distributed Databases\nwith PostgreSQL and JDBC, but i don't now where to start? Could anyone\nhelp me?!\n\n\nAkmal Hasan\n****************\[email protected]\[email protected]\n\n \n\n\n\n\n", "msg_date": "Mon, 30 Nov 1998 16:27:08 +0700 (JVT)", "msg_from": "Akmal Hasan <[email protected]>", "msg_from_op": true, "msg_subject": "How to build Distributed Databases with JDBC and PostgreSQL?" }, { "msg_contents": "On Mon, 30 Nov 1998, Akmal Hasan wrote:\n\n> Hello,\n> I'm a new user of PostgreSQL 6.3.2. I plan to build Distributed Databases\n> with PostgreSQL and JDBC, but i don't now where to start? Could anyone\n> help me?!\n\nFirst, take a look at the examples (src/interfaces/jdbc/example) then \ncheck my doc's http://www.retep.org.uk/postgres/ then the JDBC\ndocumentation on the javasoft site.\n\nThat would be a start.\n\n-- \nPeter Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as being the\nofficial words of Maidstone Borough Council\n\n\n", "msg_date": "Mon, 30 Nov 1998 12:55:12 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to build Distributed Databases with JDBC and\n\tPostgreSQL?" } ]
[ { "msg_contents": "\t>>> Another question is whether a SERIAL field should automatically\nbe\n\t>>> UNIQUE (ie, create a unique index on it to prevent mistakes in\nmanual\n\t>>> insertion of values for the field).\n\t>\n\t>> Once again - I would like to see SERIAL compatible with\n\t>> SERIAL/IDENTY in other RDBMSes.\n\t>\n\t>Yes, and? What do the other ones do?\n\nIn Informix you need to create the unique index explicitly. I like this\nbecause it keeps\nthings flexible. The unique constraint could be on a compount key.\n\nAndreas\n\n", "msg_date": "Mon, 30 Nov 1998 11:08:28 +0100", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs " }, { "msg_contents": "Zeugswetter Andreas IZ5 wrote:\n> \n> >>> Another question is whether a SERIAL field should automatically\n> be\n> >>> UNIQUE (ie, create a unique index on it to prevent mistakes in\n> manual\n> >>> insertion of values for the field).\n> >\n> >> Once again - I would like to see SERIAL compatible with\n> >> SERIAL/IDENTY in other RDBMSes.\n> >\n> >Yes, and? What do the other ones do?\n> \n> In Informix you need to create the unique index explicitly. I like this\n> because it keeps\n> things flexible. The unique constraint could be on a compount key.\n\nAgreed.\n\nVadim\n", "msg_date": "Mon, 30 Nov 1998 17:31:10 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] DROPping tables with SERIALs" } ]
[ { "msg_contents": "Hi,\n\n while on the redolog, I've came across a little detail I'm in\n doubt about. Currently it seems, that the 'C' response to the\n frontend is sent before the transaction get's really\n committed in the backend. So there is a little chance that\n the backend dies between this response and the\n CommitTransaction() call.\n\n Isn't that the wrong order? As a programmer I would assume,\n that if I have positive response to COMMIT, I can forget my\n local data because it made it safely into the database.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#======================================== [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 30 Nov 1998 17:38:00 +0100 (MET)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "COMMIT" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Hi,\n> \n> while on the redolog, I've came across a little detail I'm in\n> doubt about. Currently it seems, that the 'C' response to the\n> frontend is sent before the transaction get's really\n> committed in the backend. So there is a little chance that\n> the backend dies between this response and the\n> CommitTransaction() call.\n> \n> Isn't that the wrong order? As a programmer I would assume,\n> that if I have positive response to COMMIT, I can forget my\n> local data because it made it safely into the database.\n\nYes, this should be fixed...\n\nVadim\n", "msg_date": "Tue, 01 Dec 1998 09:51:58 +0700", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COMMIT" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Jan Wieck wrote:\n>> while on the redolog, I've came across a little detail I'm in\n>> doubt about. Currently it seems, that the 'C' response to the\n>> frontend is sent before the transaction get's really\n>> committed in the backend. So there is a little chance that\n>> the backend dies between this response and the\n>> CommitTransaction() call.\n\n> Yes, this should be fixed...\n\nI don't think it's practical to fix this without changing the semantics\nof queries. Maybe we are willing to do that, but we should think twice\nabout the implications.\n\nThe reason it acts this way is that the command response is generated\nat the end of a command execution subroutine (for COMMIT, it happens\nat the bottom of ProcessUtility), whereas StartTransactionCommand and\nCommitTransactionCommand are called from the outer loop in postgres.c.\n\nWe can't very reasonably move command-response sending to the outer\nloop, since if the query string contains several commands we need to\nsend several responses.\n\nWe could take the start/commit calls out of postgres.c and put them\ninto the command execution subroutines. For example, ProcessUtility\nwould then look like\n\tStartTransactionCommand();\n\tbig switch statement\n\tCommitTransactionCommand();\n\tEndCommand(commandTag, dest); // this sends the command response\n\nHowever there are two disadvantages to that:\n\n1. If you forget to put the start/commit calls into *all* the possible\nexecution paths, you have a problem. Having them only one place, in\nthe outer loop, is much more reliable.\n\n2. This changes the semantics of a query string that contains multiple\ncommands. Right now, such commands are executed within a single\ntransaction. 
If we change the code as above, each one will get its own\ntransaction, so a failure in a later one will not roll back earlier ones.\n\n\nIn the 6.4 FE/BE protocol there is another answer. The terminating\n\"Z\" (ReadyForQuery) message does not get sent until after the commit\nhas occurred successfully. So, if you believe the command has completed\nwhen you get the \"Z\" message, and not just when you get the \"C\" message,\nthen this concern does not arise.\n\nAnd, in fact, that's how libpq currently works, at least if you are using\nPQexec() and not the lower-level routines --- it won't come back until\nit gets \"Z\".\n\n\nSo my inclination is to leave well enough alone. There might be some\ndocumentation effort called for here ... but I don't see a problem that\njustifies changing the behavior of queries in a way that could break\nsome applications.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Dec 1998 11:14:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COMMIT " }, { "msg_contents": "> Jan Wieck wrote:\n> > \n> > Hi,\n> > \n> > while on the redolog, I've come across a little detail I'm in\n> > doubt about. Currently it seems that the 'C' response to the\n> > frontend is sent before the transaction gets really\n> > committed in the backend. So there is a little chance that\n> > the backend dies between this response and the\n> > CommitTransaction() call.\n> > \n> > Isn't that the wrong order? As a programmer I would assume\n> > that if I have a positive response to COMMIT, I can forget my\n> > local data because it made it safely into the database.\n> \n> Yes, this should be fixed...\n\nAdded to TODO:\n\n\t* 'C' response returned to frontend before actual data committed\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Dec 1998 22:58:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COMMIT" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Added to TODO:\n> \t* 'C' response returned to frontend before actual data committed\n\nSee my followup to that thread --- there is already a solution (wait for\n'Z' instead of 'C') and changing the backend's behavior would break\napplications that rely on the current semantics of multiple commands in\na single query string. So I think we should leave well enough alone.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Dec 1998 12:23:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COMMIT " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Added to TODO:\n> > \t* 'C' response returned to frontend before actual data committed\n> \n> See my followup to that thread --- there is already a solution (wait for\n> 'Z' instead of 'C') and changing the backend's behavior would break\n> applications that rely on the current semantics of multiple commands in\n> a single query string. So I think we should leave well enough alone.\n> \n> \t\t\tregards, tom lane\n> \n\nRemoved from TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 13 Dec 1998 14:32:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COMMIT" } ]
[ { "msg_contents": "\nLooks like Sun has been paying attention to the good features of Linux and\nimplementing them. I guess Sun decided that that game can be played both\ndirections. :)\n\nhttp://www.sunworld.com/swol-11-1998/swol-11-insidesolaris.html?1101a\nhttp://www.sunworld.com/swol-11-1998/swol-11-solaris7.html\nhttp://www.sunworld.com/swol-11-1998/swol-11-perf.html\n\n- Can address 17.5 terabytes of memory, 9TB files\n- Arithmetic computations get to use 64-bit registers and operations\n- UFS logging (mount -o logging /dev/dsk/c0t0d0s0 /mnt)\n- mount \"noatime\" support\n- TCP performance enhancement with SACK, RFC 2018 \n- New crash dump features \n- BIND 8.1.2\n- Sendmail 8.9.1b \n- improved poll(2) system call (used in many applications)\n- increase in number of filedescriptors\n- Full Unicode 2.1 support\n- Better threading subsystem\n- Kerberos V\n- Lots of new CDE apps/improvements\n- Netscape 4.05\n- directory name lookup cache optimized and now dynamically allocated on\n demand (static before)\n- significantly improved paging algorithim, not \"on\" by default\n\tadd the line: set priority_paging = 1 to /etc/system, and reboot\n\t- Note, backported to 2.6 kernel patch \"-09\"\n\thttp://www.sun.com/sun-on-net/performance/priority_paging.html\n\tIMPORTANT NOTE: Ensure that data files do not have the executable\n\tbit set. This can fool the VM into thinking that these are really\n\texecutables, and will not engage priority paging on these files. \n - some new commands like\n\tplimit(1): Set or get resource limits of a process by taking a\n\tprocess ID (PID) as an argument (limit and ulimit only work with the\n\tcurrent process) \n\tpgrep(1): Find a process by name and print the PID\n\tpkill(1): Find a process by name and send it a signal specified on\n\tthe command line \n\ttraceroute\n\nDid I mention that \"facls\" (man setfacl) added with Solaris 2.5.1 are damn\nusefull? \n\n", "msg_date": "Mon, 30 Nov 1998 13:34:08 -0700 (MST)", "msg_from": "Dax Kelson <[email protected]>", "msg_from_op": true, "msg_subject": "New Solaris 7 features" } ]
[ { "msg_contents": "I installed the postgres version from the (freebsd) ports\ncollection. When I try to connect to the backend server, this\nhappens:\n\nbash-2.02# su pgsql -c \"/usr/local/pgsql/bin/postmaster -i -D\n/usr/local/pgsql/data -d 2\"\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\nbinding ShmemCreate(key=52e2c1, size=831176)\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading\n5\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling reading\n5\n/usr/local/pgsql/bin/postmaster: ServerLoop: handling writing\n5\n/usr/local/pgsql/bin/postmaster: BackendStartup: pid 13290 user root db\ntemplate1 socket 5\n/usr/local/pgsql/bin/postmaster child[13290]: starting with\n(/usr/local/pgsql/bin/postgres, -p, -d2, -P5, -v131072, template1, )\nFindExec: found \"/usr/local/pgsql/bin/postgres\" using argv[0]\ndebug info:\n User = root\n RemoteHost = localhost\n RemotePort = 0\n DatabaseName = template1\n Verbose = 2\n Noversion = f\n timings = f\n dates = Normal\n bufsize = 64\n sortmem = 512\n query echo = f\nInitPostgres\n/usr/local/pgsql/bin/postmaster: reaping dead processes...\n/usr/local/pgsql/bin/postmaster: CleanupProc: pid 13290 exited with\nstatus 140\n/usr/local/pgsql/bin/postmaster: CleanupProc: reinitializing shared\nmemory and semaphores\nshmem_exit(0) [#0]\nBad system call - core dumped\n\n Has anyone seen this?\n\n-- \n \"A man's humor should exceed his grasp,\n that way he can make fun of things he doesn't understand.\"\n \nKurt Seel, Systems Engineer\nUnified Technologies Corp.\nPhone : 610 964 8200\nEmail : [email protected]\n", "msg_date": "Mon, 30 Nov 1998 16:37:08 -0500", "msg_from": "Kurt Seel <[email protected]>", "msg_from_op": true, "msg_subject": "6.4 core dumping on freebsd 2.2.7" } ]
[ { "msg_contents": "Found today two ugly bugs in pg_dump :\n\nEnvironment : Linux i386 RedHat 5.2, 2.0.36 kernel, Pentium 233 MMX\nPg version : PostgreSQL 6.4\n\n1. It's easy reproductible :\n\n(login as user teo, createdb rights granted)\n$ createdb test\n$ psql test\npsql=> create table people (id int4, name text);\npsql=> grant select on people to teo;\npsql=> \\q\n$ pg_dump -z test\n\\connect - teo\nCREATE TABLE \"people\" (\n \"id\" \"int4\",\n \"name\" \"text\");\nREVOKE ALL on \"people\" from PUBLIC;\nGRANT SELECT on \"people\" to \"people\";\n...\n\nThe error is on the last line : grant select on people to PEOPLE ? Not\nto teo ?\n\nMy pg_hba.conf is :\nlocal all trust\nhost all 127.0.0.1 255.255.255.255 trust\nhost all 133.210.115.4 255.255.255.255 password\nhost all 133.210.115.9 255.255.255.255 password \n\n---------------------------------------------------\n\n2. Got a medium size database (640 Kb dumped) that I have recently moved\nto Pg 6.4. After grant-ing and revoke-ing rights to various people,\ndumping with :\n\npg_dump -z showroom >showroom.dmp\n\nis dumping also core :-)\n\nLooking into the showroom.dmp I noticed that it is dumping core when it\nshould start dumping data from tables.\nTable structures (also table rights) , functions and sequences are\ndumped corectly, when it should begin with table data, it dumpes core.\npg_dump without -z is working fine, database showroom is not broken,\neverything is ok.\n\n\nAll the best and happy bug hunting !\n\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Tue, 01 Dec 1998 10:00:33 +0200", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "Two pg_dump ugly bugs :-(" }, { "msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> Found today two ugly bugs in pg_dump :\n> REVOKE ALL on \"people\" from PUBLIC;\n> GRANT SELECT on \"people\" to \"people\";\n> The error is on the last line : grant select on people to PEOPLE ? Not\n> to teo ?\n\nI fixed that one a week or so ago.\n\n> pg_dump -z showroom >showroom.dmp\n> is dumping also core :-)\n\nKarl Auer reported the same thing, but I don't see it here. Can you\nprovide a gdb backtrace, or perhaps a small sample database that\nprovokes the problem?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Dec 1998 19:19:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Two pg_dump ugly bugs :-( " } ]
[ { "msg_contents": "Just a quick question. What is the current state of PostgreSQL on Windows\nNT (yeuch).\n\nI'm asking, because I'm pushing PostgreSQL here, and their eye's lit up as\nsoon as they heared that someone has got it running (even partially\nhelps).\n\nAnyhow, I'm not too worried, as I'll have it running under Unix (Linux or\nSolaris), but I need to convince them.\n\n-- \nPeter Mount, IT Section\[email protected]\nAnything I write here are my own views, and cannot be taken as being the\nofficial words of Maidstone Borough Council\n\n\n", "msg_date": "Tue, 1 Dec 1998 16:40:22 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": true, "msg_subject": "NT port" }, { "msg_contents": "On Tue, 1 Dec 1998, Peter T Mount wrote:\n\n> Just a quick question. What is the current state of PostgreSQL on Windows\n> NT (yeuch).\n> \n> I'm asking, because I'm pushing PostgreSQL here, and their eye's lit up as\n> soon as they heared that someone has got it running (even partially\n> helps).\n\n\tTo the best of my knowledge, the last report oabout it with NT was\nthat it compiled out of the box using the Cygwin development package...\n\n\tIf someone wants to build an NT binary package up of v6.4 to put\non the WWW/FTP server, I will be quite happy to do so...same goes for any\nother platform, actually...just make sure you include a README.<platform>\nfile so that ppl have some sort of base instructions...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 1 Dec 1998 17:01:11 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NT port" }, { "msg_contents": "On Tue, 1 Dec 1998, The Hermit Hacker wrote:\n\n> On Tue, 1 Dec 1998, Peter T Mount wrote:\n> \n> > Just a quick question. What is the current state of PostgreSQL on Windows\n> > NT (yeuch).\n> > \n> > I'm asking, because I'm pushing PostgreSQL here, and their eye's lit up as\n> > soon as they heared that someone has got it running (even partially\n> > helps).\n> \n> \tTo the best of my knowledge, the last report oabout it with NT was\n> that it compiled out of the box using the Cygwin development package...\n>\n> \tIf someone wants to build an NT binary package up of v6.4 to put\n> on the WWW/FTP server, I will be quite happy to do so...same goes for any\n> other platform, actually...just make sure you include a README.<platform>\n> file so that ppl have some sort of base instructions...\n\nhmmm, I have cygwin at work, I'll have a crack at a binary today.\n\n-- \n Peter T Mount [email protected]\n Main Homepage: http://www.retep.org.uk\nPostgreSQL JDBC Faq: http://www.retep.org.uk/postgres\n Java PDF Generator: http://www.retep.org.uk/pdf\n\n", "msg_date": "Wed, 2 Dec 1998 06:50:49 +0000 (GMT)", "msg_from": "Peter T Mount <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] NT port" } ]
[ { "msg_contents": "Hi,\n\nI am Loan Pham, a student from the University of Oklahoma.\n\nI am doing independent study on indexing techniques for object\noriented databases, and trying to implement an indexing technique in\nPostgreSQL. I already installed PostgreSQL v6.4 on a Sun machine\n(SunOS 5.5, Sparc.) Would you please to tell me where i can get the\nimplementation guide, and any other suggestion?\n\nThank you very much!\n\nLoan Pham\n\n\n", "msg_date": "Tue, 01 Dec 1998 13:15:35 -0600", "msg_from": "\"Loan B. Pham\" <[email protected]>", "msg_from_op": true, "msg_subject": "Indexing Implementation" } ]