[ { "msg_contents": "[email protected] (Jan Wieck) writes:\n> we still need the enhancement of the scanner/parser combo to\n> enable FOREIGN KEY specification as column constraint (the\n> due to shift/reduce disabled NOT DEFERRABLE part).\n> IMHO this must be done before going into BETA. As discussed,\n> a little token lookup/queueing between lex and yacc can do\n> the trick. I'd like to add a slightly generic method for it,\n> so the lookahead function can be reused if we sometimes get\n> trapped again with a similar problem.\n> Do we have a consensus to implement it that way now?\n\nAFAIR that was the only concrete solution offered. I think Thomas\nwanted to look into whether he could tweak the grammar to avoid the\nproblem without lookahead, but he hasn't produced any results ---\nand I misdoubt that a fix done that way will be any cleaner than\ninserting a lexer lookahead interface.\n\nIn short, it's fine by me but I dunno if Thomas has signed on yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Jan 2000 10:21:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] scanner/parser for FOREIGN KEY " }, { "msg_contents": "> AFAIR that was the only concrete solution offered. I think Thomas\n> wanted to look into whether he could tweak the grammar to avoid the\n> problem without lookahead, but he hasn't produced any results ---\n> and I misdoubt that a fix done that way will be any cleaner than\n> inserting a lexer lookahead interface.\n> In short, it's fine by me but I dunno if Thomas has signed on yet.\n\nI glanced at it, but have not had a chance to dive in. There had been\nso many changes to the parser code while I was off playing with outer\njoin syntax that I decided to start over (lots of what I had done\nneeded to be cleaned up anyway).\n\nI hope to get back to development within a few days, but in the\nmeantime my parser is re-broken and I haven't yet fixed Jan's parts. I\nhate to be holding up Jan, but otoh I hate to see us having to use a\nnew techique for parsing if the usual ones can be made to work...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 03 Jan 2000 15:51:12 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] scanner/parser for FOREIGN KEY" }, { "msg_contents": "> > AFAIR that was the only concrete solution offered. I think Thomas\n> > wanted to look into whether he could tweak the grammar to avoid the\n> > problem without lookahead, but he hasn't produced any results ---\n> > and I misdoubt that a fix done that way will be any cleaner than\n> > inserting a lexer lookahead interface.\n> > In short, it's fine by me but I dunno if Thomas has signed on yet.\n> \n> I glanced at it, but have not had a chance to dive in. There had been\n> so many changes to the parser code while I was off playing with outer\n> join syntax that I decided to start over (lots of what I had done\n> needed to be cleaned up anyway).\n> \n> I hope to get back to development within a few days, but in the\n> meantime my parser is re-broken and I haven't yet fixed Jan's parts. I\n> hate to be holding up Jan, but otoh I hate to see us having to use a\n> new techique for parsing if the usual ones can be made to work...\n\nAgreed. 
Maybe Thomas and I can get on the phone and hammer out a fix.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Jan 2000 11:04:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] scanner/parser for FOREIGN KEY" } ]
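The token lookahead/queueing fix discussed in this thread amounts to a thin filter between the flex-generated scanner and the bison grammar: the parser calls the filter instead of yylex(), and the filter may read one token ahead, either folding the pair into a single combined token or queueing the extra token for the next call. The sketch below only illustrates that idea under assumed names (base_yylex, the NOT/DEFERRABLE/NOT_DEFERRABLE token codes, the one-slot queue); it is not the code that was eventually committed to PostgreSQL.

/* One-token lookahead filter between lexer and grammar (illustrative only).
 * base_yylex() stands in for the flex scanner; the parser would be pointed
 * at filtered_yylex() instead.  In a real parser the semantic value (yylval)
 * would have to be saved and restored alongside each queued token. */
#include <stdio.h>
#include <stdbool.h>

/* Token codes; in a real parser these come from the bison-generated header. */
enum { END = 0, NOT, DEFERRABLE, NOT_DEFERRABLE, OTHER };

/* Stub scanner: pretend the input is "NOT DEFERRABLE OTHER NOT OTHER". */
static int
base_yylex(void)
{
    static const int stream[] = {NOT, DEFERRABLE, OTHER, NOT, OTHER, END};
    static int pos = 0;
    return stream[pos++];
}

/* One-slot token queue: enough for a single token of lookahead, but the
 * same idea extends to a small ring buffer if more is ever needed. */
static int  queued_token;
static bool have_queued = false;

static int
filtered_yylex(void)
{
    int cur;

    if (have_queued)                /* hand back a pushed-back token first */
    {
        have_queued = false;
        return queued_token;
    }

    cur = base_yylex();
    if (cur == NOT)
    {
        /* Peek one token ahead: fold "NOT DEFERRABLE" into a single token,
         * so the column-constraint grammar no longer needs the lookahead
         * that caused the shift/reduce conflict. */
        int next = base_yylex();

        if (next == DEFERRABLE)
            return NOT_DEFERRABLE;
        queued_token = next;        /* not our pair: queue it for next call */
        have_queued = true;
    }
    return cur;
}

int
main(void)
{
    int tok;

    while ((tok = filtered_yylex()) != END)
        printf("token %d\n", tok);
    return 0;
}

Because the queueing is generic, the same filter can later fold any other troublesome two-token sequence, which is the reusability Jan asks for above.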
[ { "msg_contents": "\nHi,\n\nI look at your (Philippe's) replicator, but I don't good understand\nyour replication concept.\n\n\n node1: SQL --IPC--> node-broker\n |\n TCP/IP\n |\n master-node --IPC--> replikator\n | | |\n libpq\n | | |\n node2 node..n \n\n(Is it right picture?)\n\nIf I good understand, all nodes make connection to master node and data\nreplicate \"replicator\" on this master node. But it (master node) is very\ncritical space in this concept - If master node not work replication for \n*all* nodes is lost. Hmm.. but I want use replication for high available\napplications...\n\nIMHO is problem with node registration / authentification on master node.\nWhy concept is not more upright? As:\n\n\tSQL --IPC--> node-replicator\n\t\t\t| | | \n\t\t via libpq send data to all nodes with\n current client/backend auth.\n\n\t(not exist any master node, all nodes have connection to all nodes)\t\n\n\nUse replicator as external proces and copy data from SQL to this replicator\nvia IPC is (your) very good idea. \n\n\t\t\t\t\t\t\tKarel\n\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n", "msg_date": "Mon, 3 Jan 2000 20:23:35 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "replicator" }, { "msg_contents": "Hi Karel!\n\nKarel Wrote:\n> node1: SQL --IPC--> node-broker\n> |\n> TCP/IP\n> |\n> master-node --IPC--> replikator\n> | | |\n> libpq\n> | | |\n> node2 node..n\n>\n>(Is it right picture?)\n\nYes, you got the concept right. I admit it's a bit complicated. Your comments\nmade me go back to the drawing board and I found several flaws with the design.\nThe first one is that this design does not allow us to use Transaction Blocks.\nAn example might go a long way:\n\nNode 1, Client1 (1,1) Issues a begin statement ---> Node 2 Client 1 (2,1) (the\nreplicator process) sends this command.\n(1, 1) Sends a INSERT statement. ---> (2,1) Sends the INSERT to the backend.\nNode 2 Client 2 (2,2) Checks (SELECT) the data and the INSERT of 1,1 is not\nthere. That's normal, it was not commited.\n\nNode 1, Client 2 (1,2) Issues a BEGIN statement. ---> (2,1) Receives a warning,\nabout the state not being in progress.\n(1,2) Does some stuff...\n(1, 2) issues a Rollback Statement ---> (2,1) Sends the rollback. Node 2 rolls\nback all the transactions made since 1,1 sent the BEGIN.\n(1, 1) Sends the final Commit , It fails on the remote nodes because it was\nrolled back.\n\nSo the problem is that we have more than two connections on a single link. It\ncould be fixed by sending the statements in a block only when we do a COMMIT.\nBut then we might have some performance problems with big blocks of inserts.\nAlso I am worried about UPDATES that could be done between separate COMMITs\nthus putting the database out of sync. :-(\n\n> IMHO is problem with node registration / authentification on master node.\n> Why concept is not more upright? As:\n>\n> SQL --IPC--> node-replicator\n> | | |\n> via libpq send data to all nodes with\n> current client/backend auth.\n\nYes, the concept can be more simple but The above would create some performance\nproblems. If you had many nodes, it would take a long time to send the last\nstatement. 
You would have to wait until the statement was completely processed\nby all the nodes. A better solution IMHO would be to have a bit more padding\nbetween the node-replicator and the backend.\n\nSo it could become:\n\nSQL --IPC--> node-replicator\n | | |\n via TCP send statements to each node\n replicator (on local node)\n |\n via libpq send data to\n current (local) backend.\n\n> (no master node exists; all nodes have connections to all nodes)\n\nExactly: if the replicator dies, only that node dies; everything else keeps\nworking.\n\nLooking forward to hearing from you,\n\nPhilippe Marchesseault\n\nPS: Please excuse me for the DIFF, it's the first time I'm contributing to an\nOSS project.\n\n", "msg_date": "Mon, 03 Jan 2000 20:22:55 -0500", "msg_from": "Philippe Marchesseault <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] replicator" }, { "msg_contents": "\nOn Mon, 3 Jan 2000, Philippe Marchesseault wrote:\n\n> So it could become:\n> \n> SQL --IPC--> node-replicator\n> | | |\n> via TCP send statements to each node\n> replicator (on local node)\n> |\n> via libpq send data to\n> current (local) backend.\n> \n> > (no master node exists; all nodes have connections to all nodes)\n> \n> Exactly: if the replicator dies, only that node dies; everything else keeps\n> working.\n\n\n Hi,\n\n I explored the replication concepts of Oracle and Sybase a little (in their\nmanuals). (Does anyone know some interesting links or publications about\nthis?)\n\n Firstly, I am sure it is untimely to write replication for PgSQL now if we\ndon't have an exact concept for it. It needs more suggestions from more\ndevelopers. We first need answers to the following questions:\n\n\t1/ Which replication concept to choose for PG?\n\t2/ How to manage transactions across nodes? (and we need to define a \n replication protocol for this)\n\t3/ How to integrate replication into the current PG transaction code?\n\nMy idea (dream :-) is replication that allows full read-write on all\nnodes and uses the current transaction method in PG - with no\ndifference between several backends on one host and several backends on\nseveral hosts - it makes \"global transaction consistency\".\n\nNow transactions are managed via IPC (one host); my dream is to manage\ntransactions the same way, but between several hosts via TCP. (And make\noptimizations for this - transfer committed data/commands only.)\n\n\nAny suggestions?\n\n\n-------------------\nNote:\n \n(transaction oriented replication)\n\n Sybase - I. model (only one node is read-write) \n\n\t primary SQL data (READ-WRITE)\n |\n\t replication agent (transaction log monitoring)\n\t\t|\n\t primary distribution server (one or more repl. servers)\n\t | / | \\\n | nodes (READ-ONLY)\n |\n secondary dist. server\n / | \\\n nodes (READ-ONLY)\n\n\n If the primary SQL node is read-write and the other nodes are *read-only* \n => the system works well even if a connection is down (data are saved to\n the replication log, and when the connection is available the log is written \n\t to the node). \n\n\n Sybase - II. model (all nodes read-write)\n\n \t SQL data 1 --->--+ NODE I.\n | |\n ^ |\n\t | replication agent 1 (transaction log monitoring)\n V |\n\t\t| V\n | |\n replication server 1\n |\n\t\t^\n V\n |\n replication server 2 NODE II.\n | |\n ^ +-<-->--- SQL data 2\n | | \n replication agent 2 -<--\n\n\n\nSorry, I am not sure if I redrew the previous picture totally right..\n\n\t\t\t\t\t\t\t\tKarel \n\n\n\t\n \n\n", "msg_date": "Tue, 4 Jan 2000 17:02:06 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] replicator" } ]
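Philippe's objection - that every statement has to be completely processed by all nodes before the client can continue - is easy to see in a toy libpq fan-out. The sketch below forwards each statement to every node in turn and commits only if all of them accepted it; the connection strings, table name, and two-node setup are invented for the example, and only standard libpq calls (PQconnectdb, PQexec, PQclear, PQfinish) are used. The caller blocks on each node sequentially, which is exactly the latency problem described above, and recovery from a partial failure still depends on the ROLLBACK reaching every node.

/* Toy statement fan-out over libpq: forward one SQL statement to every
 * node, COMMIT everywhere only if every node succeeded.
 * Build with: cc fanout.c -lpq */
#include <stdio.h>
#include <libpq-fe.h>

#define NNODES 2

static const char *node_conninfo[NNODES] = {
    "host=node1 dbname=test",          /* invented connection strings */
    "host=node2 dbname=test",
};

/* Send one statement to all nodes; return 1 only if every node accepted it. */
static int
replicate_statement(PGconn *conns[], const char *sql)
{
    int ok = 1;

    for (int i = 0; i < NNODES; i++)
    {
        PGresult *res = PQexec(conns[i], sql);   /* blocks per node */

        if (PQresultStatus(res) != PGRES_COMMAND_OK &&
            PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "node %d: %s", i, PQerrorMessage(conns[i]));
            ok = 0;
        }
        PQclear(res);
    }
    return ok;
}

int
main(void)
{
    PGconn *conns[NNODES];

    for (int i = 0; i < NNODES; i++)
    {
        conns[i] = PQconnectdb(node_conninfo[i]);
        if (PQstatus(conns[i]) != CONNECTION_OK)
        {
            fprintf(stderr, "connect failed: %s", PQerrorMessage(conns[i]));
            return 1;
        }
    }

    /* Naive flow: BEGIN everywhere, apply the statement, then COMMIT only
     * if every node accepted it, otherwise ROLLBACK everywhere. */
    if (replicate_statement(conns, "BEGIN") &&
        replicate_statement(conns, "INSERT INTO t VALUES (1)"))
        replicate_statement(conns, "COMMIT");
    else
        replicate_statement(conns, "ROLLBACK");

    for (int i = 0; i < NNODES; i++)
        PQfinish(conns[i]);
    return 0;
}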
[ { "msg_contents": "FYI:\n\nSCOTTS VALLEY, Calif., Jan. 3 /PRNewswire/ -- Inprise Corporation\n (Nasdaq: INPR) today announced that it is jumping to the forefront of\n the Linux database market by open-sourcing the beta version of\n InterBase 6, the new version of its SQL database. InterBase will be\n released in open-source form for multiple platforms, including Linux,\n Windows NT, and Solaris. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 3 Jan 2000 16:01:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "Bruce Momjian wrote:\n> InterBase 6, the new version of its SQL database. InterBase will be\n> released in open-source form for multiple platforms, including Linux,\n\nI wonder just how 'open' it will be, license-wise.....\n\nNice thing about PostgreSQL -- it doesn't get any more open than the BSD\nlicense.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 03 Jan 2000 16:14:22 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> FYI:\n> \n> SCOTTS VALLEY, Calif., Jan. 3 /PRNewswire/ -- Inprise Corporation\n> (Nasdaq: INPR) today announced that it is jumping to the forefront of\n> the Linux database market by open-sourcing the beta version of\n> InterBase 6, the new version of its SQL database. InterBase will be\n> released in open-source form for multiple platforms, including Linux,\n> Windows NT, and Solaris.\n>\n\nSeems we are starting to get some serious competition ;)\nAFAIK, they cover more or less the same features (except domains which \nwe don't have)\n\nBTW, it also says:\n\nThe source code for InterBase 6 is scheduled to be published during \nthe first part of the year 2000.\n\nChould this \"part\" be 1/2, 1/3, 1/4, 1/12 (or 1/1) ?\n\n------------\nHannu\n", "msg_date": "Tue, 04 Jan 2000 00:56:46 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "I wish this announcement had been made a few months ago!! We have several\ndevelopers porting our server software to PostgreSQL. Although we like\nPostgreSQL, we have run into a number of memory leaks and bugs - something\nwe never encountered with Interbase.\n\nNow Interbase is going open source, we will discontinue the PostgreSQL\ndevelopment effort. Interbase is such a well written DBMS, it doesn't make\nsense to continue.\n\nYou guys have done a great job - but, frankly, IB is better.\n\nSteve\n\n\nBruce Momjian wrote:\n\n> FYI:\n>\n> SCOTTS VALLEY, Calif., Jan. 3 /PRNewswire/ -- Inprise Corporation\n> (Nasdaq: INPR) today announced that it is jumping to the forefront of\n> the Linux database market by open-sourcing the beta version of\n> InterBase 6, the new version of its SQL database. InterBase will be\n> released in open-source form for multiple platforms, including Linux,\n> Windows NT, and Solaris.\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n> ************\n", "msg_date": "Mon, 03 Jan 2000 17:25:21 -0800", "msg_from": "Stephen Birch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "On Mon, 3 Jan 2000, Stephen Birch wrote:\n\n> I wish this announcement had been made a few months ago!! We have\n> several developers porting our server software to PostgreSQL. \n> Although we like PostgreSQL, we have run into a number of memory leaks\n> and bugs - something we never encountered with Interbase.\n\nWhat version of PostgreSQL? Did the problem reports you sent in not\nimprove the situation?\n\n> Now Interbase is going open source, we will discontinue the PostgreSQL\n> development effort. Interbase is such a well written DBMS, it doesn't\n> make sense to continue.\n\nTwo points...when will Interbase go open source? Right now they've\nannounced the intention to do so, and even given a very broad time\nframe...but, when is it going to happen. two...what says Interbase will\ncontinue to be \"as good\" when it becomes open source and they are no longer\nmaking any money on it?\n\n> You guys have done a great job - but, frankly, IB is better.\n\nIn what ways? \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 3 Jan 2000 22:39:41 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Mon, 3 Jan 2000, Stephen Birch wrote:\n\n> You guys have done a great job - but, frankly, IB is better.\n> \n> Steve\n\nwtf ? what sort of lame remark/incentive is that ? gimme a break being\namongst the multitudes who use pg dayin/dayout i hate tirekickers that say\nrubbish like 'such and such is better' or 'if you do such and such' having\nbeen down this path many times over the years myself, the last thing the\nengineers working on pg need is to hear those sort of negative comments.\n\njust my $0.02c knowing how hard everyone works on pg and it is superb!\n\n/Torqumada\n\nNorman Widders - Paladin Corporation Pty Ltd. ACN: 081-191-611\nThe lyf so short, the craft so long to lerne - Chaucer\nNIC: NW83-AU OpenBSD, FreeBSD, Solaris, SCO, Debian\nSoftware Engineering: c/c++/perl/sql/eiffel/pascal/haskell\nPh: +612 9835-4782 Fax: +612 9864-0487 Mobile: 0416-207-857\nPowered by Symetric Multiple Processors running on FreeBSD 3.4/SMP\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.0 (FreeBSD)\nComment: Made with pgp4pine\n\niEYEARECAAYFAjhxr4oACgkQfpbFlIYNi7dHxwCcCYDevE7ev1VE5XS0cAz5L266\nVtwAoIfdLqeqEw2JEVZXW4tyPnp3rsLn\n=BzpQ\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Tue, 4 Jan 2000 19:29:57 +1100 (EST)", "msg_from": "Norman Widders <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "Stephen Birch wrote:\n> \n> I wish this announcement had been made a few months ago!! We have several\n> developers porting our server software to PostgreSQL. Although we like\n> PostgreSQL, we have run into a number of memory leaks and bugs - something\n> we never encountered with Interbase.\n> \n> Now Interbase is going open source, we will discontinue the PostgreSQL\n> development effort.
Interbase is such a well written DBMS, it doesn't make\n> sense to continue.\n\nThe announcement said that IB version 6 _beta_ is going to be open source,\nwithout specifying what kind of license it will have. \n\nIt could very well be something like SCL (i.e. you can have the source, but \nwhat you can do with it is quite limited). If you just need a\nbeer-kind-of-free \ndatabase, you may be better off using Sybase or IB v.4\n\nI suspect that the move to open-source it is at least partly an effort to \nfix \"a number of memory leaks and bugs\" ;)\n\n\n------------\nHannu\n", "msg_date": "Tue, 04 Jan 2000 10:41:49 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "\nStephen Birch <[email protected]> wrote: \n\n> Now Interbase is going open source, we will discontinue the PostgreSQL\n> development effort. Interbase is such a well written DBMS, it doesn't make\n> sense to continue.\n\nYou might want to wait to see what they mean by \"open\nsource\". They might mean GPL, BSD, MPL or they could be\nrolling their own vanity license that'll take months to\ndebug. They also might go the bogus \"open source\" route,\nala Sun's \"Community Source License\". \n\nOpen source projects are a tricky business... if they don't\ndo it right, they won't attract the critical mass of\ndevelopers they need to keep the project going (yes, old\ncode never dies, but it does bitrot away...). \n\n\n\n", "msg_date": "Tue, 04 Jan 2000 01:11:09 -0800", "msg_from": "Joe Brenner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source " }, { "msg_contents": "Stephen Birch wrote:\n> \n> I wish this announcement had been made a few months ago!! We have several\n> developers porting our server software to PostgreSQL. Although we like\n> PostgreSQL, we have run into a number of memory leaks and bugs - something\n> we never encountered with Interbase.\n> \n> Now Interbase is going open source, we will discontinue the PostgreSQL\n> development effort. Interbase is such a well written DBMS, it doesn't make\n> sense to continue.\n> \n> You guys have done a great job - but, frankly, IB is better.\n\nJust been porting our app to Oracle. It took me 3 days to install, and an\nextreme amount of frustration i.e. jre1.1.6 is hardwired in so you have to\nhave it to install, oci apps core when dynamically linked, column size\n(linesize) set to greater than 125 causes a core when describing an \nobject in sqlplus,...\n\nWill look at IB when it comes around, but right now give me Postgres anyday!\n--------\nRegards\nTheo\n", "msg_date": "Tue, 04 Jan 2000 11:11:13 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> On Mon, 3 Jan 2000, Stephen Birch wrote:\n>\n> > I wish this announcement had been made a few months ago!! We have\n> > several developers porting our server software to PostgreSQL.\n> > Although we like PostgreSQL, we have run into a number of memory leaks\n> > and bugs - something we never encountered with Interbase.\n>\n> What version of PostgreSQL? Did the problem reports you sent in not\n> improve the situation?\n\n I haven't seen that many.
And what kind of a project leader must it be,\n that a simple announcement causes the work of several programmers over\n months (sounds at least like a man-year) to be thrown away? IMHO the\n kind of PL, companies like M$ are targeting with their huge amount of\n announcements.\n\n\n> > Now Interbase is going open source, we will discontinue the PostgreSQL\n> > development effort. Interbase is such a well written DBMS, it doesn't\n> > make sense to continue.\n>\n> Two points...when will Interbase go open source? Right now they've\n> announced the intention to do so, and even given a very broad time\n> frame...but, when is it going to happen. two...what says Interbase will\n> continue to be \"as good\" when it becomes open source and they are no longer\n> making any money on it?\n\n Since it's the toplevel story on www.borland.com, I think it'll really\n happen soon. And I also think they intend to continue making money on\n it, just not by selling DB-licenses any more. They have a rich set of\n development tools etc. they can sell anyway. And in many projects I've\n seen that it's never a bad choice not to mix up too many\n hardware/software vendors (they'll all point to each other as soon as\n problems arise). So it's a big PRO for their applications and tools, if\n you'll get the DB they use for free. And it's your decision to spend\n money when going into production to buy commercial support (what I\n expect they'll offer).\n\n Another point is this. As long as I've known Postgres, a couple of features\n had been added just because some user needed them. And they are supported\n and kept alive. Do they have some proposal on that? How will they deal\n with some feature-patch sent in?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Tue, 04 Jan 2000 13:25:49 +0100", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "\nI've since gotten an email from Stephen in response to his comments, and I\nthink that when he wrote his original, he needed his morning cup of coffee\n(or equivalent), since it came over a lot heavier than what he email'd\nme...\n\nA quick summary:\n\n\tThey didn't report the memory leaks...they fixed them and uploaded\npatches, which have been accepted and commit'd\n\n\tA few problems couldn't be reproduced, and, therefore, left \nunreported. I wish ppl would report anyway, as someone else might be\ncoming across this, finding it also non-reproducible and might have some\ndata to add :(\n\n\tThe only one that is left outstanding right now has to do with:\n\n\"However, the biggest problem was reported recently, see \"HEAP_MOVED_IN during\nvacuum\" posted on Saturday, no replies\" ... \n\n\tAnyone have any comments on that last one?\n\nOn Tue, 4 Jan 2000, Jan Wieck wrote:\n\n> The Hermit Hacker wrote:\n> \n> > On Mon, 3 Jan 2000, Stephen Birch wrote:\n> >\n> > > I wish this announcement had been made a few months ago!! We have\n> > > several developers porting our server software to PostgreSQL.\n> > > Although we like PostgreSQL, we have run into a number of memory leaks\n> > > and bugs - something we never encountered with Interbase.\n> >\n> > What version of PostgreSQL?
Did the problem reports you sent in not\n> > improve the situation?\n> \n> I haven't seen that many. And what kind of a project leader must it be,\n> that a simple announcement causes the work of several programmers over\n> months (sounds at least like a man-year) to be thrown away? IMHO the\n> kind of PL, companies like M$ are targeting with their huge amount of\n> announcements.\n> \n> \n> > > Now Interbase is going open source, we will discontinue the PostgreSQL\n> > > development effort. Interbase is such a well written DBMS, it doesn't\n> > > make sense to continue.\n> >\n> > Two points...when will Interbase go open source? Right now they've\n> > announced the intention to do so, and even given a very broad time\n> > frame...but, when is it going to happen. two...what says Interbase will\n> > continue to be \"as good\" when it becomes open source and they are no longer\n> > making any money on it?\n> \n> Since it's the toplevel story on www.borland.com, I think it'll really\n> happen soon. And I also think they intend to continue making money on\n> it, just not by selling DB-licenses any more. They have a rich set of\n> development tools etc. they can sell anyway. And in many projects I've\n> seen that it's never a bad choice not to mix up too many\n> hardware/software vendors (they'll all point to each other as soon as\n> problems arise). So it's a big PRO for their applications and tools, if\n> you'll get the DB they use for free. And it's your decision to spend\n> money when going into production to buy commercial support (what I\n> expect they'll offer).\n> \n> Another point is this. As long as I've known Postgres, a couple of features\n> had been added just because some user needed them. And they are supported\n> and kept alive. Do they have some proposal on that? How will they deal\n> with some feature-patch sent in?\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #========================================= [email protected] (Jan Wieck) #\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 4 Jan 2000 09:18:49 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "On Tue, 4 Jan 2000, Don Baccus wrote:\n\n> At 10:39 PM 1/3/00 -0400, The Hermit Hacker wrote:\n> >On Mon, 3 Jan 2000, Stephen Birch wrote:\n> \n> (HH):\n> >Two points...when will Interbase go open source? Right now they've\n> >announced the intention to do so, and even given a very broad time\n> >frame...but, when is it going to happen. two...what says Interbase will\n> >continue to be \"as good\" when it becomes open source and they are no longer\n> >making any money on it?\n> \n> They say they'll continue to sell it via their traditional channels\n> and sell support, too. So it's not really clear what open-source\n> means in this context. Open-source doesn't have to mean the disappearance\n> of license fees...\n> \n> >> You guys have done a great job - but, frankly, IB is better.\n> >\n> >In what ways? \n> \n> Outer joins, for one. \n\ncurrently being worked on by Thomas, scheduled for, I believe, v7.1 this\nsummer ...\n\nNext?
:)\n\n\n", "msg_date": "Tue, 4 Jan 2000 10:25:09 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "Don Baccus wrote:\n\n> At 10:39 PM 1/3/00 -0400, The Hermit Hacker wrote:\n> >On Mon, 3 Jan 2000, Stephen Birch wrote:\n>\n> (HH):\n> >Two points...when will Interbase go open source? Right now they've\n> >announced the intention to do so, and even given a very brood time\n> >frame...but, when is it going to happen. two...what says Interbase will\n> >continue to be \"as good\" when becomes open source and they are no longer\n> >making any money on it?\n>\n> They say they'll continue to sell it via their traditional channels\n> and sell support, too.\n\nThey say:\n\"The source code for InterBase 6 is scheduled to be published during the first\npart of the year 2000.\nThe company also announced it plans to continue to sell and support InterBase\n5.6 through normal distribution channels...\"\n\nIf I understand seems they refer to previous version i.e. InterBase ver. 5.6\nbut version 6 will be open source, maybe...\n\n\n> So it's not really clear what open-source\n> means in this context. Open-source doesn't have to mean the disappearance\n> of license fees...\n>\n> >> You guys have done a great job - but, frankly, IB is better.\n> >\n> >In what ways?\n>\n> Outer joins, for one.\n>\n> - Don Baccus, Portland OR <[email protected]>\n> Nature photos, on-line guides, Pacific Northwest\n> Rare Bird Alert Service and other goodies at\n> http://donb.photo.net.\n>\n> ************\n\n", "msg_date": "Tue, 04 Jan 2000 16:29:09 +0100", "msg_from": "Jose Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Opensource" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> \"However, the biggest problem was reported recently, see \"HEAP_MOVED_IN\n> during vacuum\" posted on Saturday, no replies\" ... \n\n> \tAnyone have any comments on that last one?\n\nI replied to it --- not with any useful ideas I'm afraid, just asking\nfor more info. But if Stephen is claiming he was ignored, then he's\nnot reading his email...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Jan 2000 10:58:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source " }, { "msg_contents": "At 10:39 PM 1/3/00 -0400, The Hermit Hacker wrote:\n>On Mon, 3 Jan 2000, Stephen Birch wrote:\n\n(HH):\n>Two points...when will Interbase go open source? Right now they've\n>announced the intention to do so, and even given a very brood time\n>frame...but, when is it going to happen. two...what says Interbase will\n>continue to be \"as good\" when becomes open source and they are no longer\n>making any money on it?\n\nThey say they'll continue to sell it via their traditional channels\nand sell support, too. So it's not really clear what open-source\nmeans in this context. Open-source doesn't have to mean the disappearance\nof license fees...\n\n>> You guys have done a great job - but, frankly, IB is better.\n>\n>In what ways? \n\nOuter joins, for one. 
\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 04 Jan 2000 09:19:03 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open\n source" }, { "msg_contents": "Don Baccus wrote:\n\n\n>\n>\n> So they'll sell 5.6 but maybe 6 will be free? Strange. Rumor on Slashdot\n> is that they've lost their key developers (a month or so ago).\n\n If THAT's the case, man, then they try to get back experienced\n programmers for development and support for free via the internet. Then\n it would take some time until they're able to offer professional\n support.\n\n>\n>\n> And (in response to Jan), yeah, I know outer joins are scheduled\n> to be completed in 7.1. Personally, the interbase news leaves me\n> yawning. Postgres, since 6.5, is meeting my needs just fine.\n\n Was Marc IIRC. Anyway, most of our proposed features appear in time or\n with a 25-50% overrun. What's absolutely strong for free+open software.\n And moreover, almost every serious bug, that is fixable without\n destroying anything else, gets fixed in a couple of days or weeks. The\n reason for the latter is, that we have a fistful of programmers who\n work for years now on the code. Some of us since the release from\n Berkeley. That are key developers, who know intuitively into what\n region of the code to dive if some strange misbehaviour is reported.\n\n So if Inprise really lost them, they have a severe problem.\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Tue, 04 Jan 2000 20:07:11 +0100", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Opensource" }, { "msg_contents": "> > They say they'll continue to sell it via their traditional channels\n> > and sell support, too. So it's not really clear what open-source\n> > means in this context. Open-source doesn't have to mean the disappearance\n> > of license fees...\n> > \n> > >> You guys have done a great job - but, frankly, IB is better.\n> > >\n> > >In what ways? \n> > \n> > Outer joins, for one. \n> \n> currently being worked on by Thomas, scheduled for, I believe, v7.1 this\n> summer ...\n\nI have spoken to him about getting some minimal OUTER join functionality\nin 7.0. Let's see what happens.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 14:42:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "> Was Marc IIRC. Anyway, most of our proposed features appear in time or\n> with a 25-50% overrun. What's absolutely strong for free+open software.\n> And moreover, almost every serious bug, that is fixable without\n> destroying anything else, gets fixed in a couple of days or weeks. The\n> reason for the latter is, that we have a fistful of programmers who\n> work for years now on the code.
Some of us since the release from\n> Berkeley. That are key developers, who know intuitively into what\n> region of the code to dive if some strange misbehaviour is reported.\n> \n> So if Inprise really lost them, they have a severe problem.\n\nAnother _big_ issue is how clean the code is. MySQL, for example,\nprobably loses tons of people because their code is so poorly designed,\nand just plain ugly to me.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 14:50:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Opensource" }, { "msg_contents": "At 04:29 PM 1/4/00 +0100, Jose Soares wrote:\n>Don Baccus wrote:\n\n>> They say they'll continue to sell it via their traditional channels\n>> and sell support, too.\n>\n>They say:\n>\"The source code for InterBase 6 is scheduled to be published during the\nfirst\n>part of the year 2000.\n>The company also announced it plans to continue to sell and support InterBase\n>5.6 through normal distribution channels...\"\n>\n>If I understand correctly, it seems they refer to the previous version, i.e. InterBase ver. 5.6,\n>but version 6 will be open source, maybe...\n\nSo they'll sell 5.6 but maybe 6 will be free? Strange. Rumor on Slashdot\nis that they've lost their key developers (a month or so ago).\n\nAnd (in response to Jan), yeah, I know outer joins are scheduled\nto be completed in 7.1. Personally, the interbase news leaves me\nyawning. Postgres, since 6.5, is meeting my needs just fine.\n\nBTW, it appears that they have \"multi-generational\" concurrency\ncontrol, which sounds very much like MVCC. Indeed, the white\npaper describing it makes it sound as though the basic strategy\nfor storing tuples with transaction ids (the \"generations\") is\nkinda similar to PostgreSQL. They support dirty reads and some\nways to specify which \"generation\" to read from. Might be some\nideas there worth looking at for future PostgreSQL work...\n\nKeep in mind that I spent no more than 15 minutes trucking around\ntheir site and docs so I picked up no more than a very, very \nsurface impression of stuff. (in other words, my quick impressions\nmay be very inaccurate).\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 04 Jan 2000 12:21:31 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Opensource" }, { "msg_contents": "> At 02:50 PM 1/4/00 -0500, Bruce Momjian wrote:\n> \n> >Another _big_ issue is how clean the code is. MySQL, for example,\n> >probably loses tons of people because their code is so poorly designed,\n> >and just plain ugly to me.\n> \n> I'll have to say that the Postgres code's quite easy to follow, at\n> least at the \"grasp-the-big-picture\" level at which I've been reading\n> it on a casual, off-and-on basis. Understanding it well enough to\n> contribute - well, that's another issue 'cause by its nature it is\n> a fairly complex beast! \n\nTotally true. Education is very important, and clean coding helps with\nthat.
\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 16:49:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Opensource" }, { "msg_contents": "Jan Wieck wrote:\n\n> I haven't seen that many. And what kind of a project leader must it be,\n> that a simple announcement causes the work of several programmers over\n> months (sounds at least like a man-year) to be thrown away? IMHO the\n> kind of PL, companies like M$ are targeting with their huge amount of\n> announcements.\n>\n\nOuch - that hurt.\n\nLet me address the kind of project manager we are talking about here by looking\nback at the decision to move from MS to Linux:\n\nIn fact, the move to Linux meant throwing away about 10 man years worth of\nwork on WIN32. However, the cost savings to my employer were substantial as\ncustomer support issues disappeared overnight once the port was done. However,\nthe 10 MY were not discarded over a single announcement, we watched Linux and\nexperimented with it for about 3 years before starting the port.\n\nTo see why we would abort development on a PostgreSQL port of our servers\nbecause of the IB announcement, you must understand why we financed the port to\nPG in the first place. When GNU changed the C run time library to 2.1 (again),\nit broke our IB 4.0 based software and prevented us from moving forward to the\ncurrent SuSE 6.2 (at the time) release. We knew the IB problem could be fixed\nin an hour or two by recompiling the IB code - but we did not have the source\nand Borland considered 4.0 dead.\n\nIn itself, this problem did not justify authorizing the PG development - but we\nfelt it was indicative of future problems with IB. Hence we started to\nresearch PG to see if it was a suitable replacement. Investing in PG made me\ndamn nervous because I failed to locate example sites trusting it with mission\ncritical work. In fact, I was convinced by reading the discussion groups and\nnoting the extremely high caliber of people working on PG and also the\nincredible integrity these guys have. Even though they don't make a dime from\nPostgreSQL, they really, really care about the software and its users.\n\nWe now have PG based servers under test in the lab and are still solving PG\nissues before releasing alpha code. Of course, the IB announcement forces us\nto rethink the issue.\n\nAs for me being influenced by marketing literature, especially from MS - you\nare way off the mark.\n\nBy the way, I was the idiot that specified NT to our customer base in the first\nplace - I consider that to be the single worst decision of my successful 20\nyear computing career.\n\nOne final point, I live in a 100% commercial world. In the capacity of my\nwork, I am not concerned about the free software ethos, nor do I care if\nsoftware is free or not (as in beer) - I just need to deploy solutions that\nwork.\n\nSteve\n\n\n{{{{{{{{ {{{{{ 1 hour of real time passed here }}}}}}}}}\n\nSince writing the above, I was called to attend a telecon with my manager and\nthe ITS managers from our two biggest customers to discuss exactly this issue.\nThe decision has been made to deploy the PostgreSQL based server. We all\nagreed that whatever happens to Interbase, the personal commitment by the\nPostgreSQL folks is not likely to dry up.
Hence the code will continue to\nimprove over time. I believe that they clearly understand how important\nreliability is to a database server.\n\nThere is a good chance that the Borland decision will have a beneficial ripple\neffect on PG as other engineers turn their attention to Open Source\nalternatives.\n\nWish us luck, we will load the new software on our customers' servers for a FOT\n(field operational test) next week.\n\nSteve\n\n\n\n", "msg_date": "Tue, 04 Jan 2000 14:53:06 -0800", "msg_from": "Stephen Birch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "On Tue, 4 Jan 2000, Stephen Birch wrote:\n\n> By the way, I was the idiot that specified NT to our customer base in\n> the first place - I consider that to be the single worst decision of\n> my successful 20 year computing career.\n\nOuch, that must have hurt :(\n\n> Since writing the above, I was called to attend a telecon with my\n> manager and the ITS managers from our two biggest customers to discuss\n> exactly this issue. The decision has been made to deploy the\n> PostgreSQL based server. We all agreed that whatever happens to\n> Interbase, the personal commitment by the PostgreSQL folks is not\n> likely to dry up. Hence the code will continue to improve over time. \n> I believe that they clearly understand how important reliability is to\n> a database server.\n> \n> There is a good chance that the Borland decision will have a\n> beneficial ripple effect on PG as other engineers turn their attention\n> to Open Source alternatives.\n> \n> Wish us luck, we will load the new software on our customers' servers\n> for a FOT (field operational test) next week.\n\nGood luck, and keep us informed as to how things are going ... :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 4 Jan 2000 19:10:22 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "> Since writing the above, I was called to attend a telecon with my manager and\n> the ITS managers from our two biggest customers to discuss exactly this issue.\n> The decision has been made to deploy the PostgreSQL based server. We all\n> agreed that whatever happens to Interbase, the personal commitment by the\n> PostgreSQL folks is not likely to dry up. Hence the code will continue to\n> improve over time. I believe that they clearly understand how important\n> reliability is to a database server.\n\nAll's well that ends well...\n\nI have been very impressed over the three years of work on PostgreSQL\nthat everything is done in such a civilized manner. I mention that in\nmy book and in the development history.\n\nNot sure how we have achieved this, but we certainly have.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 18:42:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "> > Since writing the above, I was called to attend a telecon with my\n> > manager and the ITS managers from our two biggest customers to discuss\n> > exactly this issue. The decision has been made to deploy the\n> > PostgreSQL based server. We all agreed that whatever happens to\n> > Interbase, the personal commitment by the PostgreSQL folks is not\n> > likely to dry up. Hence the code will continue to improve over time. \n> > I believe that they clearly understand how important reliability is to\n> > a database server.\n> > \n> > There is a good chance that the Borland decision will have a\n> > beneficial ripple effect on PG as other engineers turn their attention\n> > to Open Source alternatives.\n> > \n> > Wish us luck, we will load the new software on our customers' servers\n> > for a FOT (field operational test) next week.\n> \n> Good luck, and keep us informed as to how things are going ... :)\n\nLet me add we are planning a 7.0 release in the next few months that\nimproves reliability and adds new features. You can actually try the\nsnapshot if you want to see how we are doing. 6.5.* is based on code\nthat solidified in June of 1999, which is ages ago in PostgreSQL time.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 18:44:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "Stephen Birch wrote:\n\n> Jan Wieck wrote:\n>\n> > I haven't seen that many. And what kind of a project leader must it be,\n> > that a simple announcement causes the work of several programmers over\n> > months (sounds at least like a man-year) to be thrown away? IMHO the\n> > kind of PL, companies like M$ are targeting with their huge amount of\n> > announcements.\n> >\n>\n> Ouch - that hurt.\n\n Pardon for being that harsh, but similar to you (as Marc said you\n haven't had your first cup of coffee), I missed my required amount of\n beer :-)\n\n> We now have PG based servers under test in the lab and are still solving PG\n> issues before releasing alpha code. Of course, the IB announcement forces us\n> to rethink the issue.\n\n That sounds totally different to your first message. This is definitely\n a PUSH BREAK for possible limitation of loss.\n\n> Since writing the above, I was called to attend a telecon with my manager and\n> the ITS managers from our two biggest customers to discuss exactly this issue.\n> The decision has been made to deploy the PostgreSQL based server. We all\n> agreed that whatever happens to Interbase, the personal commitment by the\n> PostgreSQL folks is not likely to dry up. Hence the code will continue to\n> improve over time. I believe that they clearly understand how important\n> reliability is to a database server.\n\n Great news.
Be sure, I'll be one of the last rats leaving the ship.\n\n> There is a good chance that the Borland decision will have a beneficial ripple\n> effect on PG as other engineers turn their attention to Open Source\n> alternatives.\n\n There already is a noticeable turn in attention. Creative recently\n decided to put their SB-Live! drivers for Linux under GPL (after they\n had severe problems with IRQ and DMA handling at least in the SMP\n environment). One month later, the driver totally fits my needs. Well,\n they're a hardware vendor, primarily selling their cards to make money.\n\n OTOH, I'm an SAP R/3 base consultant for years now. And all these DB\n runtime license discussions are annoying. SAP needs about 2-3 months to\n port R/3 to a new database. But they need another year or so to ship it\n due to their internal quality assurance policy. And it's a not to be\n underestimated effort to support it in the future. The same\n applies to the OS corner, but they decided to port R/3 to Linux anyway,\n because coupling the benefits of the UNIX world (WRT administration\n issues) with the low cost level of PC hardware, is definitely worth the\n above effort.\n\n So I wouldn't be surprised if SAP, one of the biggest software vendors\n worldwide, would decide to support an open source database too at some\n point in the future. And something like that might be the intention of\n Inprise. As we both know, a customer usually keeps his database world\n consistent to be able to share knowledge inside the company. So if they\n can save tens of thousands of dollars in DB-license fees per year (a\n usually small fee in the SAP market) when moving to an open source\n database, they will decide to do so. And at that point, they'll need to\n port their intranet-, internet- and other solutions as well.\n\n If it's not true (as someone rumored) that Inprise lost the key\n developers, that'd be the point.\n\n> Wish us luck, we will load the new software on our customers' servers for a FOT\n> (field operational test) next week.\n\n Report any problems ASAP, and we'll help to make it a success-story.\n\n> Steve\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Wed, 05 Jan 2000 00:47:03 +0100", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "At 02:50 PM 1/4/00 -0500, Bruce Momjian wrote:\n\n>Another _big_ issue is how clean the code is. MySQL, for example,\n>probably loses tons of people because their code is so poorly designed,\n>and just plain ugly to me.\n\nI'll have to say that the Postgres code's quite easy to follow, at\nleast at the \"grasp-the-big-picture\" level at which I've been reading\nit on a casual, off-and-on basis. Understanding it well enough to\ncontribute - well, that's another issue 'cause by its nature it is\na fairly complex beast!
\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 04 Jan 2000 15:49:31 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Opensource" }, { "msg_contents": "I have now created a test case that demonstrates the HEAP_MOVED_IN during\nvacuum problem.
Since the tar ball is 182k - I put it on an ftp site\ninstead of mailing it.\n\nYou can grab it from the following location:\n\n http://www.ironmountainsystems.com/heap_moved_in/\n\nThe tar ball contains two files - a shell script (show_bug) and a pg_dump\ndump. The shell script does the following using the dump file:\n\n1. Create database ntis\n2. Create table msg and populate it.\n3. Use trim() twice.\n4. Vacuum.\n\nThe three interesting commands reside at the end of ntis.dmp:\n\nupdate msg set description = trim(description);\nupdate msg set owner = trim(owner);\nvacuum;\n\nWhen the script \"show_bug\" is run, we get the following output:\n\nCREATE DATABASE\nYou are now connected to database ntis.\nCREATE\nUPDATE 12069\nUPDATE 12069\nERROR: HEAP_MOVED_IN was not expected\n\nOne interesting point: if either one of the trim operations is omitted,\nvacuum does not give the HEAP_MOVED_IN error. I also notice that if you\nchange ntis.dmp so a vacuum is done between the two, the problem goes away.\n\nAny ideas?\n\n\nTom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > \"However, the biggest problem was reported recently, see \"HEAP_MOVED_IN\n> > during vacuum\" posted on Saturday, no replies\" ...\n>\n> > Anyone have any comments on that last one?\n>\n> I replied to it --- not with any useful ideas I'm afraid, just asking\n> for more info. But if Stephen is claiming he was ignored, then he's\n> not reading his email...\n>\n> regards, tom lane\n>\n> ************\n\n", "msg_date": "Sun, 09 Jan 2000 10:45:20 -0800", "msg_from": "Stephen Birch <[email protected]>", "msg_from_op": false, "msg_subject": "Re:HEAP_MOVED_IN during vacuum - test case" }, { "msg_contents": "Stephen Birch <[email protected]> writes:\n> I have now created a test case that demonstrates the HEAP_MOVED_IN during\n> vacuum problem.\n\nUsing this script, I see no failure under either REL6_5_PATCHES or\ncurrent branch on HPUX 10.20 --- but I do see it in current sources\non a Linux box! Platform-dependent problem, evidently. Will start\ndigging.\n\nStephen, many thanks for creating a small, reproducible example.\nI know that's often the hardest part of finding a bug...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jan 2000 16:33:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re:HEAP_MOVED_IN during vacuum - test case " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Stephen Birch\n>\n> I have now created a test case that demonstrates the HEAP_MOVED_IN during\n> vacuum problem. Since the tar ball is 182k - I put it on an ftp site\n> instead of mailing it.\n>\n> You can grab it from the following location:\n>\n> http://www.ironmountainsystems.com/heap_moved_in/\n>\n\nThe following patch seems to fix your case.\nHowever I'm not sure it's a right solution.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\nIndex: commands/vacuum.c\n===================================================================\nRCS file: /home/cvs/pgcurrent/backend/commands/vacuum.c,v\nretrieving revision 1.18\ndiff -c -r1.18 vacuum.c\n*** commands/vacuum.c\t2000/01/05 03:05:35\t1.18\n--- commands/vacuum.c\t2000/01/10 02:39:35\n***************\n*** 1049,1054 ****\n--- 1049,1055 ----\n \t\t\t *idcur;\n \tint\t\t\tlast_fraged_block,\n \t\t\t\tlast_vacuum_block,\n+ \t\t\t\tlast_movedin_block,\n \t\t\t\ti = 0;\n \tSize\t\ttuple_len;\n \tint\t\t\tnum_moved,\n***************\n*** 1084,1089 ****\n--- 1085,1091 ----\n \tvacuumed_pages = vacuum_pages->vpl_num_pages -\nvacuum_pages->vpl_empty_end_pages;\n \tlast_vacuum_page = vacuum_pages->vpl_pagedesc[vacuumed_pages - 1];\n \tlast_vacuum_block = last_vacuum_page->vpd_blkno;\n+ \tlast_movedin_block = 0;\n \tAssert(last_vacuum_block >= last_fraged_block);\n \tcur_buffer = InvalidBuffer;\n \tnum_moved = 0;\n***************\n*** 1097,1102 ****\n--- 1099,1107 ----\n \t\t/* if it's reapped page and it was used by me - quit */\n \t\tif (blkno == last_fraged_block && last_fraged_page->vpd_offsets_used >\n0)\n \t\t\tbreak;\n+ \t\t/* couldn't shrink any more if this block has MOVED_INd tuples - quit */\n+ \t\tif (blkno == last_movedin_block)\n+ \t\t\tbreak;\n\n \t\tbuf = ReadBuffer(onerel, blkno);\n \t\tpage = BufferGetPage(buf);\n***************\n*** 1477,1482 ****\n--- 1482,1489 ----\n \t\t\t\t\tnewtup.t_datamcxt = NULL;\n \t\t\t\t\tnewtup.t_data = (HeapTupleHeader) PageGetItem(ToPage, newitemid);\n \t\t\t\t\tItemPointerSet(&(newtup.t_self), vtmove[ti].vpd->vpd_blkno, newoff);\n+ \t\t\t\t\tif (vtmove[i].vpd->vpd_blkno > last_movedin_block)\n+ \t\t\t\t\t\tlast_movedin_block = vtmove[i].vpd->vpd_blkno;\n\n \t\t\t\t\t/*\n \t\t\t\t\t * Set t_ctid pointing to itself for last tuple in\n***************\n*** 1610,1615 ****\n--- 1617,1624 ----\n \t\t\tnewtup.t_data = (HeapTupleHeader) PageGetItem(ToPage, newitemid);\n \t\t\tItemPointerSet(&(newtup.t_data->t_ctid), cur_page->vpd_blkno, newoff);\n \t\t\tnewtup.t_self = newtup.t_data->t_ctid;\n+ \t\t\tif (cur_page->vpd_blkno > last_movedin_block)\n+ \t\t\t\tlast_movedin_block = cur_page->vpd_blkno;\n\n \t\t\t/*\n \t\t\t * Mark old tuple as moved_off by vacuum and store vacuum XID\n\n\n", "msg_date": "Mon, 10 Jan 2000 12:03:23 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re:HEAP_MOVED_IN during vacuum - test case" }, { "msg_contents": "Stephen Birch <[email protected]> writes:\n> I have now created a test case that demonstrates the HEAP_MOVED_IN during\n> vacuum problem.\n\nOK, I've sussed it. Dunno if you want the details, but briefly: the\ncode was using the last element of a list of target pages (pages that\nhad room to insert more tuples) as a sentinel point to know when to\nstop trying to move tuples out of source pages. But there was also\nan optimization in there to remove target pages from the target list\nas soon as they got full (so as not to keep checking them). Sure\nenough, with the right data pattern it was possible to remove the\nlast modified page from the target-page list before the source loop\ngot to it, and then everything falls over.
I'm surprised we haven't\nheard more complaints about this, actually --- it doesn't look like\nthe failure should be all that unlikely.\n\nI have committed what I think is a proper fix into current sources,\nbut I don't really think it should be trusted until it's been through\na beta test cycle. Instead, attached is a very low-risk patch that\njust dikes out the code that tries to remove target pages early.\nThis will result in some marginal slowdown when vacuuming huge\nrelations, but I think it should be safe to plug into production\n6.5.* servers.\n\nThanks again for the narrowly focused test case --- I suspect you\nput quite a bit of time into developing it...\n\n\t\t\tregards, tom lane\n\n*** src/backend/commands/vacuum.c.orig\tTue Jan 4 12:27:26 2000\n--- src/backend/commands/vacuum.c\tSun Jan 9 23:16:10 2000\n***************\n*** 1253,1258 ****\n--- 1253,1259 ----\n \t\t\t\t{\n \t\t\t\t\tif (!vc_enough_space(to_vpd, tlen))\n \t\t\t\t\t{\n+ #if 0\t\t\t\t\t\t\t/* this code is broken */\n \t\t\t\t\t\tif (to_vpd != last_fraged_page &&\n \t\t\t\t\t\t !vc_enough_space(to_vpd, vacrelstats->min_tlen))\n \t\t\t\t\t\t{\n***************\n*** 1263,1268 ****\n--- 1264,1270 ----\n \t\t\t\t\t\t\tnum_fraged_pages--;\n \t\t\t\t\t\t\tAssert(last_fraged_page == fraged_pages->vpl_pagedesc[num_fraged_pages - 1]);\n \t\t\t\t\t\t}\n+ #endif\n \t\t\t\t\t\tfor (i = 0; i < num_fraged_pages; i++)\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\tif (vc_enough_space(fraged_pages->vpl_pagedesc[i], tlen))\n***************\n*** 1517,1522 ****\n--- 1519,1525 ----\n \t\t\t\t\tWriteBuffer(cur_buffer);\n \t\t\t\t\tcur_buffer = InvalidBuffer;\n \n+ #if 0\t\t\t\t\t\t\t/* this code is broken */\n \t\t\t\t\t/*\n \t\t\t\t\t * If no one tuple can't be added to this page -\n \t\t\t\t\t * remove page from fraged_pages. - vadim 11/27/96\n***************\n*** 1534,1539 ****\n--- 1537,1543 ----\n \t\t\t\t\t\tnum_fraged_pages--;\n \t\t\t\t\t\tAssert(last_fraged_page == fraged_pages->vpl_pagedesc[num_fraged_pages - 1]);\n \t\t\t\t\t}\n+ #endif\n \t\t\t\t}\n \t\t\t\tfor (i = 0; i < num_fraged_pages; i++)\n \t\t\t\t{\n", "msg_date": "Sun, 09 Jan 2000 23:31:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re:HEAP_MOVED_IN during vacuum - test case " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> The following patch seems to fix your case.\n> However I'm not sure it's the right solution.\n\nThat looks like nearly the same logic that I arrived at, although\nwhat I committed included some additional code cleanups. As I said\nin my prior message, I don't fully trust it yet --- but I am glad\nyou came to the same conclusion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jan 2000 23:48:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re:HEAP_MOVED_IN during vacuum - test case " }, { "msg_contents": "I still can't believe how fast you guys are, but would like to thank\nyou.\n\nIf I understood the vacuum logic, I would check your fixes, but it is\nstill black magic to me!! I did try the new code against the full\ndatabase and found no problems. As far as I can tell, you found it.\n\nThanks again.\n\nSteve\n\n\nTom Lane wrote:\n\n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > The following patch seems to fix your case.\n> > However I'm not sure it's the right solution.\n>\n> That looks like nearly the same logic that I arrived at, although\n> what I committed included some additional code cleanups. 
As I said\n> in my prior message, I don't fully trust it yet --- but I am glad\n> you came to the same conclusion.\n>\n> regards, tom lane\n\n", "msg_date": "Sun, 09 Jan 2000 23:24:41 -0800", "msg_from": "Stephen Birch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re:HEAP_MOVED_IN during vacuum - test case" } ]
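The failure mode Tom describes is a classic stale-sentinel bug. Here is a minimal, self-contained C sketch of the pattern, with hypothetical names; it is an illustration only, not the actual vacuum.c code:

#include <stdio.h>

/* Hypothetical stand-ins for vacuum's target-page bookkeeping. */
typedef struct
{
	int		blkno;
	int		free_space;
} TargetPage;

int
main(void)
{
	TargetPage	list[] = {{5, 80}, {7, 60}, {9, 40}};
	int			num = 3;
	TargetPage *last = &list[num - 1];	/* cached sentinel: block 9 */

	/*
	 * The "optimization": the last target page fills up, so it is
	 * dropped from the list -- but 'last' still points at it.
	 */
	list[2].free_space = 0;
	num--;

	/*
	 * The source loop scans blocks downward and is supposed to stop at
	 * the last target page.  The stale sentinel (block 9) no longer
	 * matches the real list bound (block 7).
	 */
	for (int blkno = 12; blkno >= 0; blkno--)
	{
		if (blkno == last->blkno)
			printf("stale sentinel fires at block %d\n", blkno);
		if (blkno == list[num - 1].blkno)
		{
			printf("correct bound reached at block %d\n", blkno);
			break;
		}
	}
	return 0;
}

Run, the sketch stops at block 7 while the stale sentinel fires at block 9; in the broken vacuum code the stop test consulted the stale pointer, so with the right data pattern the loop ran past its intended bound.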
[ { "msg_contents": "Great. Rumour has it there is some HOT transaction code in there, among\nother things.\n\nMikeA\n\n\n-----Original Message-----\nFrom: Bruce Momjian\nTo: PostgreSQL-development\nSent: 00/01/03 11:01\nSubject: [HACKERS] Inprise/Borland releasing Interbase as Open source\n\nFYI:\n\nSCOTTS VALLEY, Calif., Jan. 3 /PRNewswire/ -- Inprise Corporation\n (Nasdaq: INPR) today announced that it is jumping to the forefront of\n the Linux database market by open-sourcing the beta version of\n InterBase 6, the new version of its SQL database. InterBase will be\n released in open-source form for multiple platforms, including Linux,\n Windows NT, and Solaris. \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n************\n", "msg_date": "Tue, 4 Jan 2000 00:26:26 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "> Great. Rumour has it there is some HOT transaction code in there, among\n> other things.\n> \n> FYI:\n> \n> SCOTTS VALLEY, Calif., Jan. 3 /PRNewswire/ -- Inprise Corporation\n> (Nasdaq: INPR) today announced that it is jumping to the forefront of\n> the Linux database market by open-sourcing the beta version of\n> InterBase 6, the new version of its SQL database. InterBase will be\n> released in open-source form for multiple platforms, including Linux,\n> Windows NT, and Solaris. \n\nWe may find that that HOT transaction code is the same as our\ntransaction code, which I guess would mean ours is HOT too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 14:53:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "On Tue, 4 Jan 2000, Don Baccus wrote:\n\n> At 02:53 PM 1/4/00 -0500, Bruce Momjian wrote:\n> >> Great. Rumour has it there is some HOT transaction code in there, among\n> >> other things.\n> \n> >We may find that that HOT transaction code is the same as our\n> >transaction code, which I guess would mean ours is HOT too.\n> \n> (redundant, but what the heck)\n> \n> Their \"multi-generational\" concurrency control sure sounds just like\n> MVCC. \n\nI wonder when they implemented theirs? Basically...is their's based on\nold technology/concepts, while ours is based on newer ones?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 4 Jan 2000 17:07:41 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "> At 02:53 PM 1/4/00 -0500, Bruce Momjian wrote:\n> >> Great. 
Rumour has it there is some HOT transaction code in there, among\n> >> other things.\n> \n> >We may find that that HOT transaction code is the same as our\n> >transaction code, which I guess would mean ours is HOT too.\n> \n> (redundant, but what the heck)\n> \n> Their \"multi-generational\" concurrency control sure sounds just like\n> MVCC. \n\nReminds me of the guy who said our MVCC was a leader in database\ntechnology. He did not realize it was only one person, Vadim, who did\nthe whole thing.\n\nAmazing when just one of our developers makes the commercial db's look\nlike they are standing still.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 16:50:37 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> I wonder when they implemented theirs? Basically...is theirs based on\n> old technology/concepts, while ours is based on newer ones?\n\n Maybe it's based on the same technology. If they've used a similar (HOT)\n transactional concept for tuples, based legally on the PG technique\n released under the BSD license years ago, they might have come to the\n same conclusion. That'd mean - well - OLD concepts like ours.\n\n Truth remains truth.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Tue, 04 Jan 2000 23:45:13 +0100", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "> Apparently they were formed out of DEC in 1985, with the notion of\n> doing a \"multi-generational\" model from the beginning.\n> \n> I don't know enough Postgres history to answer. Obviously, MVCC\n> is new but the way that tuples are stored, which made MVCC fairly\n> simple to implement (for Vadim, at least!), has been part of \n> Postgres from the beginning. \n> \n> And, again, my information on Interbase comes from a VERY quick\n> read of docs and a white paper found on their site, take my \n> quick-hit analysis with a grain of salt, please!\n\nYes, MVCC was natural for us.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 18:48:28 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "At 02:53 PM 1/4/00 -0500, Bruce Momjian wrote:\n>> Great. Rumour has it there is some HOT transaction code in there, among\n>> other things.\n\n>We may find that that HOT transaction code is the same as our\n>transaction code, which I guess would mean ours is HOT too.\n\n(redundant, but what the heck)\n\nTheir \"multi-generational\" concurrency control sure sounds just like\nMVCC. 
\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 04 Jan 2000 15:52:22 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open\n source" }, { "msg_contents": "At 05:07 PM 1/4/00 -0400, The Hermit Hacker wrote:\nI wrote:\n>> Their \"multi-generational\" concurrency control sure sounds just like\n>> MVCC. \n\n>I wonder when they implemented theirs? Basically...is theirs based on\n>old technology/concepts, while ours is based on newer ones?\n\nApparently they were formed out of DEC in 1985, with the notion of\ndoing a \"multi-generational\" model from the beginning.\n\nI don't know enough Postgres history to answer. Obviously, MVCC\nis new but the way that tuples are stored, which made MVCC fairly\nsimple to implement (for Vadim, at least!), has been part of \nPostgres from the beginning. \n\nAnd, again, my information on Interbase comes from a VERY quick\nread of docs and a white paper found on their site, take my \nquick-hit analysis with a grain of salt, please!\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 04 Jan 2000 17:29:44 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open\n source" } ]
[ { "msg_contents": "I've started going through the regression tests and updating for the\nnew psql output. The first few tests have been committed, and I'll try\nworking through the others in the next few days.\n\nI've also updated the test queries to use the extended SQL92 type\ncoersion syntax rather than the older, non-standard Postgres \"::\"\nnotation. I'll still keep some \"::\" queries somewhere so that the\nsyntax continues to be tested...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 04 Jan 2000 16:33:32 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Regression tests" }, { "msg_contents": "Just as I was thinking about putting in a \"compatibility mode\". This whole\npsql output thing is a really tough issue and I think it won't be\ncompletely resolved until Jan. 31st ...\n\nOn 2000-01-04, Thomas Lockhart mentioned:\n\n> I've started going through the regression tests and updating for the\n> new psql output. The first few tests have been committed, and I'll try\n> working through the others in the next few days.\n> \n> I've also updated the test queries to use the extended SQL92 type\n> coersion syntax rather than the older, non-standard Postgres \"::\"\n> notation. I'll still keep some \"::\" queries somewhere so that the\n> syntax continues to be tested...\n> \n> - Thomas\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Tue, 11 Jan 2000 14:26:43 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Regression tests" } ]
[ { "msg_contents": "songoku:/opt/build/postgresql-6.5.3/src/backend/utils/adt% gmake\ncc -I../../../include -I../../../backend -I/opt/TWWfsw/tcl81/include\n-I/opt/TWWfsw/tk81/include -I/opt/TWWfsw/readline/include -I../..\n-c -o date.o date.c\n\"date.c\", line 153: warning: statement not reached\n\"date.c\", line 372: undefined symbol: __const\n\"date.c\", line 372: syntax error before or at: double\ncc: acomp failed for date.c\ngmake: *** [date.o] Error 2\n\nThe Sun C compiler doesn't like the definition of NAN in\nsrc/include/port/solaris_i386.h:\n#define NAN (*(__const double *) __nan)\n\nIs there a danger of removing this line? Should it be changed to\nsomething else?\n\n-- \nalbert chin ([email protected])\n\n", "msg_date": "Tue, 4 Jan 2000 11:40:44 -0600 (CST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Error compiling 6.5.3 on Solaris 2.7/x86 with Sun C compiler 5.0" } ]
[ { "msg_contents": ">> > Great. Rumour has it there is some HOT transaction code in there,\n>> among\n>> > other things.\n>> > \n>> \n>> We may find that that HOT transaction code is the same as our\n>> transaction code, which I guess would mean ours is HOT too.\nYes, that's the point. It gives us a measure of where our code stands in\nrelation to (previously) commercial code. If it's as good or better than\nIBs (assuming that IBs is good in the area of concern), great. If not, we\ncan look for ideas. Either way, we win!\n\nMikeA\n", "msg_date": "Tue, 4 Jan 2000 23:28:33 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Inprise/Borland releasing Interbase as Open source" } ]
[ { "msg_contents": "Makes you wonder what people (companies) are spending their money on. My\ntake on it is that commercial software licensing is, by and large, a hoax.\n\nMikeA\n\n\n-----Original Message-----\nFrom: Bruce Momjian\nTo: Don Baccus\nCc: Ansley, Michael; 'PostgreSQL-development '\nSent: 00/01/04 11:50\nSubject: Re: [HACKERS] Inprise/Borland releasing Interbase as Open source\n\n> At 02:53 PM 1/4/00 -0500, Bruce Momjian wrote:\n> >> Great. Rumour has it there is some HOT transaction code in there,\namong\n> >> other things.\n> \n> >We may find that that HOT transaction code is the same as our\n> >transaction code, which I guess would mean ours is HOT too.\n> \n> (redundant, but what the heck)\n> \n> Their \"multi-generational\" concurrency control sure sounds just like\n> MVCC. \n\nReminds me of the guy who said our MVCC was a leader in database\ntechnology. He did not realize is was only one person, Vadim, who did\nthe whole thing.\n\nAmazing when just one of our developers makes the commercial db's look\nlike they are standing still.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Wed, 5 Jan 2000 00:16:02 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "> Makes you wonder what people (companies) are spending their money on. \n> My take on it is that commercial software licensing is, by and large, a\n> hoax.\n> \n> MikeA\n\nMy guess is that in most organizations only a handful of people\nunderstand the code. The rest do support/sales/marketing.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 18:24:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "On Tue, 4 Jan 2000, Bruce Momjian wrote:\n\n> > Makes you wonder what people (companies) are spending their money on. \n> > My take on it is that commercial software licensing is, by and large, a\n> > hoax.\n> > \n> > MikeA\n> \n> My guess is that in most organizations only a handful of people\n> understand the code. The rest do support/sales/marketing.\n> \n\nThats the way it is every place I have worked. It's known as the 2/3\nrule. 1/3 of your coders actually know what the hell is going on, the\nother 2/3's of them are in the dark/do not care/collect a check.\n\nscott \n\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ************\n> \n\n", "msg_date": "Tue, 04 Jan 2000 18:41:14 -0500 (EST)", "msg_from": "Scott Beasley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "> On Tue, 4 Jan 2000, Bruce Momjian wrote:\n> \n> > > Makes you wonder what people (companies) are spending their money on. 
\n> > > My take on it is that commercial software licensing is, by and large, a\n> > > hoax.\n> > > \n> > > MikeA\n> > \n> > My guess is that in most organizations only a handful of people\n> > understand the code. The rest do support/sales/marketing.\n> > \n> \n> Thats the way it is every place I have worked. It's known as the 2/3\n> rule. 1/3 of your coders actually know what the hell is going on, the\n> other 2/3's of them are in the dark/do not care/collect a check.\n\nOh, I didn't know it had a name. I find it usually less than 1/3.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 19:18:35 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "> On Tue, 4 Jan 2000, Bruce Momjian wrote:\n> \n> > > Makes you wonder what people (companies) are spending their money on. \n> > > My take on it is that commercial software licensing is, by and large, a\n> > > hoax.\n> > > \n> > > MikeA\n> > \n> > My guess is that in most organizations only a handful of people\n> > understand the code. The rest do support/sales/marketing.\n> > \n> \n> Thats the way it is every place I have worked. It's known as the 2/3\n> rule. 1/3 of your coders actually know what the hell is going on, the\n> other 2/3's of them are in the dark/do not care/collect a check.\n\nYes, the 2/3's just push the bits around, making things worse.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 4 Jan 2000 19:19:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" }, { "msg_contents": "\n\nOn Tue, 4 Jan 2000, Bruce Momjian wrote:\n\n> > On Tue, 4 Jan 2000, Bruce Momjian wrote:\n> > \n> > > > Makes you wonder what people (companies) are spending their money on. \n> > > > My take on it is that commercial software licensing is, by and large, a\n> > > > hoax.\n> > > > \n> > > > MikeA\n> > > \n> > > My guess is that in most organizations only a handful of people\n> > > understand the code. The rest do support/sales/marketing.\n> > > \n> > \n> > Thats the way it is every place I have worked. It's known as the 2/3\n> > rule. 1/3 of your coders actually know what the hell is going on, the\n> > other 2/3's of them are in the dark/do not care/collect a check.\n> \n> Oh, I didn't know it had a name. I find it usually less than 1/3.\n\nIt's from the book, \"The rise and fall of the American programer\", if I\nremember right (It's been several years since I read it.) I would think\nit's lower too, I guess it's an average, but I still find it true for the \nmost part. I see open source being diffent tho. The people working on OS\nprojects want to code.\n\nscott\n\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n", "msg_date": "Tue, 04 Jan 2000 19:19:54 -0500 (EST)", "msg_from": "Scott Beasley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inprise/Borland releasing Interbase as Open source" } ]
[ { "msg_contents": "Damned,\n\n\n while hacking down a little test suite for FOREIGN KEY\n (just to have some script based checking while doing\n the file buffering of the event queue), I discovered\n something looking wrong.\n\n Having the following table schema:\n\n\n CREATE TABLE t1 (\n a int4 PRIMARY KEY,\n b int4\n );\n\n CREATE TABLE t2 (\n c int4,\n d int4,\n\n CONSTRAINT t2_d_t1_a FOREIGN KEY (d)\n REFERENCES t1 MATCH FULL\n ON UPDATE CASCADE\n DEFERRABLE INITIALLY IMMEDIATE\n );\n\n I can do the following:\n\n\n BEGIN;\n SET CONSTRAINTS ALL DEFERRED;\n UPDATE t1 SET a = 99 WHERE a = 1;\n UPDATE t1 SET a = 1 WHERE a = 2;\n UPDATE t1 SET a = 2 WHERE a = 99;\n COMMIT;\n\n to swap t1.a 1<->2.\n\n The result (due to my internal condensing of trigger\n events) is, that all references to the OLD.a=1 will end\n up by referencing to NEW.a=1. In fact, they should\n point to 2. What I'm unable to figure out from the SQL3\n specs is, what is the correct behaviour in this case?\n\n The simple solution would be, to bomb out at the third\n UPDATE with a \"triggered data change violation\"\n exception. Rows, resulting from the first UPDATE\n (identified by XMIN) are subject to change again, and\n there are outstanding trigger events. Or must the\n references follow exactly the above swap? Would be more\n tricky, but IMHO possible anyway.\n\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Wed, 05 Jan 2000 01:20:02 +0100", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": true, "msg_subject": "Thomas! FOREIGN KEY problem!" }, { "msg_contents": "I wrote:\n\n> Damned,\n>\n> while hacking down a little test suite for FOREIGN KEY\n> (just to have some script based checking while doing\n> the file buffering of the event queue), I discovered\n> something looking wrong.\n\n After rereading the part of the SQL3 spec in question, I saw that\n the checks I did for \"triggered data change violation\" where\n wrong.\n\n The just committed changes to the trigger manager and related\n areas cause ANY second change of a value, possibly referenced by\n a foreign key, to bomb out with the above exception. So the\n example below doesn't work any more.\n\n That means, that a row cannot get deleted, if it has been\n inserted or possibly referenced attributes updated inside the\n same transaction. Also, possibly referenced attributes cannot be\n changed twice inside one and the same transaction. The previous\n \"event condensing\" is gone.\n\n The benefit is, that since the trigger manager now checks for\n RI_FKey... triggers, if the referenced attributes change while\n adding the event to the queue, he will suppress the real trigger\n call at all if the key's are equal. 
This saves fetching back OLD\n and NEW at the time the checks have to be executed.\n\n> Having the following table schema:\n>\n> CREATE TABLE t1 (\n> a int4 PRIMARY KEY,\n> b int4\n> );\n>\n> CREATE TABLE t2 (\n> c int4,\n> d int4,\n>\n> CONSTRAINT t2_d_t1_a FOREIGN KEY (d)\n> REFERENCES t1 MATCH FULL\n> ON UPDATE CASCADE\n> DEFERRABLE INITIALLY IMMEDIATE\n> );\n>\n> I can do the following:\n>\n> BEGIN;\n> SET CONSTRAINTS ALL DEFERRED;\n> UPDATE t1 SET a = 99 WHERE a = 1;\n> UPDATE t1 SET a = 1 WHERE a = 2;\n> UPDATE t1 SET a = 2 WHERE a = 99;\n> COMMIT;\n>\n> to swap t1.a 1<->2.\n\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Thu, 06 Jan 2000 21:52:44 +0100", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Thomas! FOREIGN KEY problem!" } ]
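A sketch of the duplicate-change test Jan describes, using xmin to recognize rows already touched by the current transaction. The type and function names below are hypothetical simplifications, not the trigger manager's actual data structures:

typedef unsigned int TransactionId;

typedef struct
{
	TransactionId t_xmin;		/* transaction that inserted this version */
} TupleHeaderSketch;

/*
 * A row version whose xmin equals the current transaction's XID was
 * already inserted or updated inside this transaction; changing it
 * again while RI trigger events are still queued raises the
 * "triggered data change violation" exception.
 */
static int
triggered_data_change_violation(TupleHeaderSketch *oldtup,
								TransactionId current_xid,
								int pending_ri_events)
{
	return oldtup->t_xmin == current_xid && pending_ri_events > 0;
}

Under this rule the 1<->2 swap above fails at its third UPDATE, since the row it touches was produced by the first UPDATE and therefore carries the current XID.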
[ { "msg_contents": "By now, most of the pgsql mailing lists surely have messages\ndated later than 31-Dec-1999 ... but you wouldn't know it by\nlooking at http://www.postgresql.org/lists/mailing-list.html.\nI'm guessing the code that adds links to those pages has a\nlittle Y2K bug.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jan 2000 01:56:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Y2K glitch in pgsql mail list archives" } ]
[ { "msg_contents": "First of all, ECPG doesn't seem to recognise the FETCH command at all,\nreturning with Syntax Error (see below).\n\nSecondly, are there any postgres specific Embedded SQL docs around\n'anywhere'?\nI've already have a few difficulties with subtle differences between\ndifferent precompiler syntax, and it's becoming frustrating.\n\nThanks for any help!\n\nTim.\n\n\neg:\n\nEXEC SQL DECLARE rowcur CURSOR FOR\n SELECT prod_id, name, format\n FROM products\n WHERE name like '%ABC%';\n\nEXEC SQL OPEN rowcur;\n\nfor (i=0; i<5; i++)\n{\n EXEC SQL FETCH rowcur INTO :prod_id, :title, :format;\n // Do something.\n}\n\n\n\n\n", "msg_date": "Wed, 05 Jan 2000 18:24:51 +1100", "msg_from": "Tim Kane <[email protected]>", "msg_from_op": true, "msg_subject": "ECPG and FETCH" }, { "msg_contents": "On Wed, Jan 05, 2000 at 06:24:51PM +1100, Tim Kane wrote:\n> EXEC SQL FETCH rowcur INTO :prod_id, :title, :format;\n> \n\nTry EXEC SQL FETCH IN rowcur ...\n\nIvo.\n", "msg_date": "Wed, 5 Jan 2000 16:04:20 +0100", "msg_from": "Ivo Simicevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ECPG and FETCH" }, { "msg_contents": "> Secondly, are there any postgres specific Embedded SQL docs around\n> 'anywhere'?\n> I've already have a few difficulties with subtle differences between\n> different precompiler syntax, and it's becoming frustrating.\n\nThat's one of the holes in our documentation. If you get inspired to\nwrite, we would welcome a contribution of any sort (including just\nyour notes on syntax as you learn to use the preprocessor). Our doc\nsources are in sgml, but if you want to just write flat files I'll\nconvert it for you...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 05 Jan 2000 16:07:02 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ECPG and FETCH" }, { "msg_contents": "(back on-list; hope that is OK...)\n\n> I just wanted to let you know that some time ago, after contacting\n> Michael Meskes I have started working on ECPG documentation.\n> ... you can see some text file sketches on my web at\n> http://www.ultra.hr/gpl/ecpg\n\nLooks great! Hope you find time to get back to it, and I'll be happy\nto help with sgml markup.\n\nbtw, if you could finish a first draft (or have enough sections to be\nusable) within a month or so there is a good chance we can get it\nincluded in the v7.0 release. Much later than that and we'll probably\nmiss the window...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 05 Jan 2000 18:10:14 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECPG documentation" }, { "msg_contents": "On Wed, Jan 05, 2000 at 06:24:51PM +1100, Tim Kane wrote:\n> First of all, ECPG doesn't seem to recognise the FETCH command at all,\n> returning with Syntax Error (see below).\n\nFETCH IN should work as well as FETCH FROM. I think this is what standard\nsays. I once tried to add FETCH without IN but got some shift/reduce\nconflicts. I haven't looked into it for quite some time, so maybe I can add\na compatibility rule. But don't bet on it.\n\n> Secondly, are there any postgres specific Embedded SQL docs around\n> 'anywhere'?\n\nUnfortunately not much. But there are some (5 to be precise) demo files in\nthe source tree.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 
61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Wed, 5 Jan 2000 19:30:59 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] ECPG and FETCH" }, { "msg_contents": "> > > I just wanted to let you know that some time ago, after contacting\n> > > Michael Meskes I have started working on ECPG documentation.\n> > > ... you can see some text file sketches on my web at\n> > > http://www.ultra.hr/gpl/ecpg\n> > Looks great! Hope you find time to get back to it, and I'll be happy\n> > to help with sgml markup.\n> > btw, if you could finish a first draft (or have enough sections to be\n> > usable) within a month or so there is a good chance we can get it\n> > included in the v7.0 release. Much later than that and we'll probably\n> > miss the window...\n\nHello Ivo. Have you had a chance to make progress on your docs? I'm\nsure people would be *very* interested in them for the upcoming\nrelease, and if you need some help on finishing the writing or editing\nI'm sure there will be some volunteers.\n\nRegards.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 10 Feb 2000 22:15:24 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: ECPG documentation" }, { "msg_contents": "On Thu, Feb 10, 2000 at 10:15:24PM +0000, Thomas Lockhart wrote:\n> Hello Ivo. Have you had a chance to make progress on your docs? I'm\n> sure people would be *very* interested in them for the upcoming\n> release, and if you need some help on finishing the writing or editing\n> I'm sure there will be some volunteers.\n\nI'm willing to help with explanations. But I won't have the time to write\ndocs in time for the release. In fact, I haven't even found the time to\ntackle my one and only todo item for 7.0.\n\nBTW I have a problem with a user defined function. I posted a question about\nit some time ago but got no answer. The source is part of our source tree\n(pgsql/src/interfaces/ecpg/test[test5.pgc|stp.pgc]). \n\nI can insert it but when I execute it I get a result of -220 which is not\nexactly the minimum of 14 and 7.\n\nAny ideas? Just hitting make in ecpg/test should build the binaries. You\njust have to adjust the path in test5.pgc.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Sat, 12 Feb 2000 11:56:47 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: ECPG documentation" }, { "msg_contents": "This looks really good. Please, please, please keep working on this\ndocumentation.\n\nWish I'd had your work several months ago!!\n\n\nSteve\n\n\n\nThomas Lockhart wrote:\n\n> (back on-list; hope that is OK...)\n>\n> > I just wanted to let you know that some time ago, after contacting\n> > Michael Meskes I have started working on ECPG documentation.\n> > ... you can see some text file sketches on my web at\n> > http://www.ultra.hr/gpl/ecpg\n>\n> Looks great! Hope you find time to get back to it, and I'll be happy\n> to help with sgml markup.\n>\n> btw, if you could finish a first draft (or have enough sections to be\n> usable) within a month or so there is a good chance we can get it\n> included in the v7.0 release. 
Much later than that and we'll probably\n> miss the window...\n>\n> - Thomas\n>\n> --\n> Thomas Lockhart [email protected]\n> South Pasadena, California\n>\n> ************\n\n", "msg_date": "Wed, 05 Jan 2005 15:16:07 -0800", "msg_from": "Stephen Birch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: ECPG documentation" } ]
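To close the loop on the original question, here is a minimal ECPG sketch of the cursor syntax the preprocessor accepts, per Ivo's and Michael's notes (FETCH IN, or equivalently FETCH FROM, rather than a bare FETCH). The table and host-variable names are made up:

EXEC SQL BEGIN DECLARE SECTION;
int		prod_id;
char	title[129];
EXEC SQL END DECLARE SECTION;

void
fetch_example(void)
{
	EXEC SQL DECLARE rowcur CURSOR FOR
		SELECT prod_id, name FROM products WHERE name LIKE '%ABC%';

	EXEC SQL OPEN rowcur;

	/* ecpg's grammar wants IN (or FROM) between FETCH and the cursor */
	EXEC SQL FETCH IN rowcur INTO :prod_id, :title;

	EXEC SQL CLOSE rowcur;
}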
[ { "msg_contents": "\n\nHi,\n\nI add to the to_char() routine \"ordinal-number\" feature, but my \nEnglish is insufficient for this :-( (sorry)\n\nI good know how is it for non-decimal numbers, but if number has \ndecimal part?\n\nExample:\t2.6 --> 2.6th \n or 2.6 --> 2.6nd \n\nPlease!\n\n\t\t\t\t\t\tKarel\n\n----------------------------------------------------------------------\nKarel Zak <[email protected]> http://home.zf.jcu.cz/~zakkr/\n\nDocs: http://docs.linux.cz (big docs archive)\t\nKim Project: http://home.zf.jcu.cz/~zakkr/kim/ (process manager)\nFTP: ftp://ftp2.zf.jcu.cz/users/zakkr/ (C/ncurses/PgSQL)\n-----------------------------------------------------------------------\n\n", "msg_date": "Wed, 5 Jan 2000 12:44:59 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "ordinal decimal number" }, { "msg_contents": "> Hi,\n> \n> I add to the to_char() routine \"ordinal-number\" feature, but my \n> English is insufficient for this :-( (sorry)\n\nThere are enough people that speak English, what we don't have enough\nof on this world are people that know what they can and can't do :)\n\n> I good know how is it for non-decimal numbers, but if number has \n> decimal part?\n> \n> Example: 2.6 --> 2.6th \n> or 2.6 --> 2.6nd \n\nIt's: 2.6 --> 2.6th\n\nJoost Roeleveld\n\n", "msg_date": "Wed, 5 Jan 2000 13:09:50 +0100", "msg_from": "\"J. Roeleveld\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordinal decimal number" }, { "msg_contents": "\nOn Wed, 5 Jan 2000, Oliver Elphick wrote:\n\n> \"J. Roeleveld\" wrote:\n> >> Hi,\n> >> \n> >> I add to the to_char() routine \"ordinal-number\" feature, but my \n> >> English is insufficient for this :-( (sorry)\n> >\n> >There are enough people that speak English, what we don't have enough\n> >of on this world are people that know what they can and can't do :)\n> >\n> >> I good know how is it for non-decimal numbers, but if number has \n> >> decimal part?\n> >> \n> >> Example: 2.6 --> 2.6th \n> >> or 2.6 --> 2.6nd \n> >\n> >It's: 2.6 --> 2.6th\n> \n> It isn't really possible to have an ordinal with decimal places in\n> English; it sounds very awkward.\n> \n> Ordinals designate placing in a list; a computer example would be an\n> array index. How can such a number have decimal places?\n\n I implement it to to_char (ordinal with decimal places), but is user choise \nif use or not use it...\n\n\t\t\t\t\t\t\tKarel\n \n\n", "msg_date": "Wed, 5 Jan 2000 14:51:35 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ordinal decimal number " }, { "msg_contents": "\"J. Roeleveld\" wrote:\n >> Hi,\n >> \n >> I add to the to_char() routine \"ordinal-number\" feature, but my \n >> English is insufficient for this :-( (sorry)\n >\n >There are enough people that speak English, what we don't have enough\n >of on this world are people that know what they can and can't do :)\n >\n >> I good know how is it for non-decimal numbers, but if number has \n >> decimal part?\n >> \n >> Example: 2.6 --> 2.6th \n >> or 2.6 --> 2.6nd \n >\n >It's: 2.6 --> 2.6th\n\nIt isn't really possible to have an ordinal with decimal places in\nEnglish; it sounds very awkward.\n\nOrdinals designate placing in a list; a computer example would be an\narray index. 
How can such a number have decimal places?\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"And thou shalt love the LORD thy God with all thine \n heart, and with all thy soul, and with all thy might.\"\n Deuteronomy 6:5 \n\n\n", "msg_date": "Wed, 05 Jan 2000 13:55:47 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordinal decimal number " }, { "msg_contents": "Karel Zak - Zakkr wrote:\n> \n> On Wed, 5 Jan 2000, Oliver Elphick wrote:\n> \n> > \"J. Roeleveld\" wrote:\n> > >> Hi,\n> > >>\n> > >> I added an \"ordinal-number\" feature to the to_char() routine, but my\n> > >> English is insufficient for this :-( (sorry)\n> > >\n> > >There are enough people that speak English, what we don't have enough\n> > >of in this world are people that know what they can and can't do :)\n> > >\n> > >> I know how it works for non-decimal numbers, but what if the number has a\n> > >> decimal part?\n> > >>\n> > >> Example: 2.6 --> 2.6th\n> > >> or 2.6 --> 2.6nd\n> > >\n> > >It's: 2.6 --> 2.6th\n> >\n> > It isn't really possible to have an ordinal with decimal places in\n> > English; it sounds very awkward.\n> >\n> > Ordinals designate placing in a list; a computer example would be an\n> > array index. How can such a number have decimal places?\n\nI guess they are awkward in most languages, except for designating powers \nwhere they _could_ be used by extension of their use for integer powers?\n \ne raised to the pi-th power ?\n\nbtw, should 2.2 be 2.2nd or 2.2th (two point tooth :)\n\nwhat about rationals 7 2/3 th ?\n\nwhat about legal float numbers like infinity (is it infinitieth)\nand NaN - NaN-th or NaNd :)\n\nfor me 2.2nd represents not decimal but hierarchy, so it should be possible to\nhave\n2.2.2.2nd\n\n> I implemented it in to_char (ordinals with decimal places), but it is the user's choice\n> whether to use it or not...\n\nIs your code locale-aware?\n\nI guess that this is something that could probably be found in localisation\ntables,\nexcept perhaps for floats.\n\n------------------\nHannu\n", "msg_date": "Wed, 05 Jan 2000 18:00:55 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ordinal decimal number" }, { "msg_contents": "\nOn Wed, 5 Jan 2000, Hannu Krosing wrote:\n\n> I guess they are awkward in most languages, except for designating powers \n> where they _could_ be used by extension of their use for integer powers?\n> \n> e raised to the pi-th power ?\n> \n> btw, should 2.2 be 2.2nd or 2.2th (two point tooth :)\n> \n> what about rationals 7 2/3 th ?\n> \n> what about legal float numbers like infinity (is it infinitieth)\n> and NaN - NaN-th or NaNd :)\n> \n> for me 2.2nd represents not decimal but hierarchy, so it should be possible to\n> have\n> 2.2.2.2nd\n> \n> > I implemented it in to_char (ordinals with decimal places), but it is the user's choice\n> > whether to use it or not...\n> \n> Is your code locale-aware?\n> \n> I guess that this is something that could probably be found in localisation\n> tables,\n> except perhaps for floats.\n\n(IMHO) the POSIX locale does not contain information about ordinal numbers (if \nyou mean this). But to_char supports locales for currency symbol, decimal\npoint and group separator. 
\n\n\t\t\t\t\t\t\tKarel\n\n", "msg_date": "Wed, 5 Jan 2000 18:49:12 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ordinal decimal number" } ]
[ { "msg_contents": "They need to be educated about us.....\n\nhttp://www2.linuxjournal.com/articles/conversations/010.html\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 05 Jan 2000 11:42:41 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "InterBase interview on Linux Journal" }, { "msg_contents": "> They need to be educated about us.....\n> http://www2.linuxjournal.com/articles/conversations/010.html\n\nYeah. I just sent a comment to them on this. I had talked to Marjorie\nRichards (from memory; I think that is the name) at LinuxWorld in\nAugust regarding Postgres articles, and she indicated that they might\nbe interested in principle but that that they had recently done an\nintroductory review article (nice and complimentary btw) and didn't\nhave a specific need for more intro material. We have had mention in\nother articles since then, as the tool used to implement other apps.\nBut it seems that Doc Searls doesn't read them very carefully ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 05 Jan 2000 17:30:15 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] InterBase interview on Linux Journal" }, { "msg_contents": "On Wed, 5 Jan 2000, Thomas Lockhart wrote:\n\n> > They need to be educated about us.....\n> > http://www2.linuxjournal.com/articles/conversations/010.html\n> \n> Yeah. I just sent a comment to them on this. I had talked to Marjorie\n> Richards (from memory; I think that is the name) at LinuxWorld in\n> August regarding Postgres articles, and she indicated that they might\n> be interested in principle but that that they had recently done an\n> introductory review article (nice and complimentary btw) and didn't\n> have a specific need for more intro material. We have had mention in\n> other articles since then, as the tool used to implement other apps.\n> But it seems that Doc Searls doesn't read them very carefully ;)\n\nPersonally, I think that everyone should go to that article and put in a\ncomment to the effect that the Title of the article is inaccurate, and\ninsults the whole Open Source movement by claiming that a commercial\nproduct, going open source, gets labelled as \"The first major...\"\n\n\n", "msg_date": "Wed, 5 Jan 2000 14:21:55 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] InterBase interview on Linux Journal" }, { "msg_contents": "> On Wed, 5 Jan 2000, Thomas Lockhart wrote:\n> \n> > > They need to be educated about us.....\n> > > http://www2.linuxjournal.com/articles/conversations/010.html\n> > \n> > Yeah. I just sent a comment to them on this. I had talked to Marjorie\n> > Richards (from memory; I think that is the name) at LinuxWorld in\n> > August regarding Postgres articles, and she indicated that they might\n> > be interested in principle but that that they had recently done an\n> > introductory review article (nice and complimentary btw) and didn't\n> > have a specific need for more intro material. 
We have had mention in\n> > other articles since then, as the tool used to implement other apps.\n> > But it seems that Doc Searls doesn't read them very carefully ;)\n> \n> Personally, I think that everyone should go to that article and put in a\n> comment to the effect that the Title of the article is inaccurate, and\n> insults the whole Open Source movement by claiming that a commercial\n> product, going open source, gets labelled as \"The first major...\"\n\nTotally agree. And as far as I am concerned, Interbase is not major at\nall.\n\nSo we have a non-major database vendor claiming they are the first major\ndatabase vendor to go open-source.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 5 Jan 2000 14:23:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] InterBase interview on Linux Journal" } ]
[ { "msg_contents": "My personal opinion: ordinal numbers can get a th, rd, nd, or st. That\nimplies whole numbers only. If an ordinal is requested, test for a positive\ninteger; if not an integer, leave alone, otherwise add whichever suffix is\nrequired.\n\n2 -> 2nd\n2.2 -> 2.2\nNaN -> NaN\n4 -> 4th\n-6 -> -6\n\n\nMikeA\n\n\n\n-----Original Message-----\nFrom: Hannu Krosing\nTo: Karel Zak - Zakkr\nCc: Oliver Elphick; J. Roeleveld; pgsql-hackers\nSent: 00/01/05 06:00\nSubject: Re: [HACKERS] ordinal decimal number\n\nKarel Zak - Zakkr wrote:\n> \n> On Wed, 5 Jan 2000, Oliver Elphick wrote:\n> \n> > \"J. Roeleveld\" wrote:\n> > >> Hi,\n> > >>\n> > >> I add to the to_char() routine \"ordinal-number\" feature, but my\n> > >> English is insufficient for this :-( (sorry)\n> > >\n> > >There are enough people that speak English, what we don't have\nenough\n> > >of on this world are people that know what they can and can't do\n:)\n> > >\n> > >> I good know how is it for non-decimal numbers, but if number\nhas\n> > >> decimal part?\n> > >>\n> > >> Example: 2.6 --> 2.6th\n> > >> or 2.6 --> 2.6nd\n> > >\n> > >It's: 2.6 --> 2.6th\n> >\n> > It isn't really possible to have an ordinal with decimal places in\n> > English; it sounds very awkward.\n> >\n> > Ordinals designate placing in a list; a computer example would be an\n> > array index. How can such a number have decimal places?\n\nI guess they are awkward in most languages, except for designating\npowers \nwhere they _could_ be used by extension of their use for integer powers?\n \ne raised to the pi-th power ?\n\nbtw, should 2.2 be 2.2nd or 2.2th (two point tooth :)\n\nwhat about rationals 7 2/3 th ?\n\nwhat about legal float numbers like infinity (is it infinitieth)\nand NaN - NaN-th or NaNd :)\n\nfor me 2.2nd represents not decimal but hierrachy, so it should be\npossible to\nhave\n2.2.2.2nd\n\n> I implement it to to_char (ordinal with decimal places), but is user\nchoise\n> if use or not use it...\n\nIs your code locale-aware ?\n\nI guess that this is something that could probbaly be found in\nlocalisation\ntables,\nexcept perhaps for floats.\n\n------------------\nHannu\n\n************\n", "msg_date": "Wed, 5 Jan 2000 21:38:43 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] ordinal decimal number" }, { "msg_contents": "\n\nOn Wed, 5 Jan 2000, Ansley, Michael wrote:\n\n> My personal opinion: ordinal numbers can get a th, rd, nd, or st. That\n> implies whole numbers only. If an ordinal is requested, test for a positive\n> integer; if not an integer, leave alone, otherwise add whichever suffix is\n> required.\n> \n> 2 -> 2nd\n> 2.2 -> 2.2\n> NaN -> NaN\n> 4 -> 4th\n> -6 -> -6\n\nWell, it is good solution. I implement it. Or exist the other suggestion?\n\n\t\t\t\t\t\t\tKarel\n\n", "msg_date": "Thu, 6 Jan 2000 11:52:55 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] ordinal decimal number" } ]
[ { "msg_contents": "\nMore: \nTechnical: \n\nComments: As a developing member of *several* Open Source Projects (INN,\n FreeBSD, PostgreSQL, WU-FTPd *and* OpenSSH), I would like to\n state that I find the title of this whole article to be\n insulting to the Open Source Community ... to classify a\n commercial vendor as \"the first major...\" when they haven't even\n *released* the source yet is, at best, premature. Both\n PostgreSQL and MySQL have *always* been Open Source, and\n probably have more influence on the OS-DBMS market then Inbase\n does, or even will ...\n\nSeries: Conversations \nArticle: 10 \nTitle: The First Major Open Source Database \nAuthor: Doc Searls \nAuthor's email: [email protected] \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 5 Jan 2000 15:41:47 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "My response/comments to Inbase Interview ..." } ]
[ { "msg_contents": "Yes, it was a little inaccurate ;-)\n\nHowever, having read the article, I think that there may just be a decent\nproduct out at the end under a good license. Borland have always, for\nbetter or worse (and it shows in their last couple of income statements ;-)\nmade a habit of putting technology before profits. I think that they're\nabout to do it again. However, this time, they appear to have someone who\nhas a reasonable understanding of what he's doing in business terms at the\nhead, and I think that they may just make good here, for the benefit of all\nof us. Doing open source business is something that people like Bill will\nnever be able to understand completely. This guy seemed to almost grow up\non it, and understands it like must of us do.\n\nPerhaps the coding is crappy, or perhaps they do what Sun did for licensing,\nor something even worse. \nBut maybe the code is good, and the license really open, and no programmer\ncan get too much exposure to other peoples code, whether it's to learn how\nto do things, or to learn how not to do things.\n\nI think that PostgreSQL stands to gain an enormous amount out of the whole\nepisode, both in marketing, as well as guidance in certain areas, and\nverification (that's not the word I'm looking for, but I can't think of the\nright one now) in others.\n\nAnyway, EXPLAIN needs some adjustments, so no more rambling...\n\n\nMikeA\n\n\n-----Original Message-----\nFrom: The Hermit Hacker\nTo: Thomas Lockhart\nCc: Lamar Owen; [email protected]\nSent: 00/01/05 08:21\nSubject: Re: [HACKERS] InterBase interview on Linux Journal\n\nOn Wed, 5 Jan 2000, Thomas Lockhart wrote:\n\n> > They need to be educated about us.....\n> > http://www2.linuxjournal.com/articles/conversations/010.html\n> \n> Yeah. I just sent a comment to them on this. I had talked to Marjorie\n> Richards (from memory; I think that is the name) at LinuxWorld in\n> August regarding Postgres articles, and she indicated that they might\n> be interested in principle but that that they had recently done an\n> introductory review article (nice and complimentary btw) and didn't\n> have a specific need for more intro material. We have had mention in\n> other articles since then, as the tool used to implement other apps.\n> But it seems that Doc Searls doesn't read them very carefully ;)\n\nPersonally, I think that everyone should go to that article and put in a\ncomment to the effect that the Title of the article is inaccurate, and\ninsults the whole Open Source movement by claiming that a commercial\nproduct, going open source, gets labelled as \"The first major...\"\n\n\n\n************\n", "msg_date": "Wed, 5 Jan 2000 22:20:57 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] InterBase interview on Linux Journal" } ]
[ { "msg_contents": "I am thinking about redefining and simplifying the planner's interface\nto index-type-dependent estimation routines.\n\nCurrently, each defined index type is supposed to supply two routines:\nan \"amopselect\" routine and an \"amopnpages\" routine. (The existing\nactual routines of this kind are btreesel, btreenpages, etc in\nsrc/backend/utils/adt/selfuncs.c.) These things are called by\nindex_selectivity() in src/backend/optimizer/util/plancat.c. amopselect\ntries to determine the selectivity of an indexqual (the fraction of\nmain-table tuples it will select) and amopnpages tries to determine\nthe number of index pages that will be read to do it.\n\nNow, this collection of code is largely redundant with\noptimizer/path/clausesel.c, which also tries to estimate the selectivity\nof qualification conditions. Furthermore, the interface to these\nroutines is fundamentally misdesigned, because there is no way to deal\nwith interrelated qualification conditions --- for example, if we have\na range query like \"... WHERE x > 10 AND x < 20\", the code estimates\nthe selectivity as the product of the selectivities of the two terms\nindependently, but the actual selectivity is very different from that.\nI am working on fixing clausesel.c to be smarter about correlated\nconditions, but it won't do much good to fix that code without fixing\nthe index-related code.\n\nWhat I'm thinking about doing is replacing these two per-index-type\nroutines with a single routine, which is called once per proposed\nindexscan rather than once per qual clause. It would receive the\nwhole indexqual list as a parameter, instead of just one qual.\nA typical implementation would just call clausesel.c's general-purpose\ncode to estimate the selectivity, and then do a little bit of extra\nwork to derive the estimated number of index pages from that number.\n\nI suppose the original reason for having amopselect at all was to allow\nexploitation of index-specific knowledge during selectivity estimation\n--- but none of the existing index types actually provide any such\nknowledge in their amopselect routines. Still, this redesign preserves\nthe flexibility for an index type to do something specialized.\n\nA possible objection to this scheme is that the inputs and outputs\nof these routines would be structs that aren't full-fledged SQL types\n(and no, I'm not willing to promote parser expression trees into an\nSQL type ;-)). But I don't think that's a real problem. No one is\ngoing to be inventing new index types without doing a lot of C coding,\nso having to write the amopselect routines in C doesn't seem like a\nbig drawback.\n\nComments, objections, better ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Jan 2000 23:25:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Proposed cleanup of index-related planner estimation procedures" }, { "msg_contents": "Tom Lane wrote:\n> \n> I am thinking about redefining and simplifying the planner's interface\n> to index-type-dependent estimation routines.\n\nGood Idea. I looked at that code quite closely in the past year, and I\nagree that, though harmless, much of it seems bogus --- or at least not\nwell motivated by thorough analysis. One reason for this is that the\nproblem of\ncomputing index search costs is very difficult, the theoretical papers\nthat I could find on the subject are not terribly satisfactory. 
\nCurrently only equals operators for hash and linear comparison operators\nfor nbtree really implement anything; the other access method operator\nclasses merely copy the nbtree operator algorithms.\n\n> \n> What I'm thinking about doing is replacing these two per-index-type\n> routines with a single routine, which is called once per proposed\n> indexscan rather than once per qual clause. It would receive the\n> whole indexqual list as a parameter, instead of just one qual.\n> A typical implementation would just call clausesel.c's general-purpose\n> code to estimate the selectivity, and then do a little bit of extra\n> work to derive the estimated number of index pages from that number.\n> \n> I suppose the original reason for having amopselect at all was to allow\n> exploitation of index-specific knowledge during selectivity estimation\n> --- but none of the existing index types actually provide any such\n> knowledge in their amopselect routines. Still, this redesign preserves\n> the flexibility for an index type to do something specialized.\n>\n\nGood, I have a special access method that needs to subvert the normal\noptimizer in a way that ensures an index scan is used for every heap\naccess predicated on a particular attribute. The reason for this is\nthat the data (representations of cells in a partition of a high\ndimensional space) that would normally sit in heap tuples are actually\ndistributed through the index structure. A query predicated on a\nproperty of my special attributes needs to be executed with an index\nscan, so I subvert the optimizer by coding amopselect and amopnpages to\nalways give zero cost for an index scan. Since index scans are always\nconsidered first, this hairy hack works. I would certainly breathe more\neasily at each new release of PostgreSQL if the ability of the system to\nsupport this type of hack were a recognized feature.\n\n> \n> A possible objection to this scheme is that the inputs and outputs\n> of these routines would be structs that aren't full-fledged SQL types\n> (and no, I'm not willing to promote parser expression trees into an\n> SQL type ;-)). But I don't think that's a real problem. No one is\n> going to be inventing new index types without doing a lot of C coding,\n\nYes, a lot of C code; I have 10K lines in my access method. What is\nremarkable about PostgreSQL is that the interface between index access\nmethods and the rest of the system is so clean that this sort of project\nis feasible. \n\n> so having to write the amopselect routines in C doesn't seem like a\n> big drawback.\n\nOne thing that I have on my `really cool ideas' list would be to link\nsomething like a Python interpreter and compiler into the backend. One\nwould use the scripting language to write and debug stuff like this, and\nthen compile and dynamically link the debugged code into the backend. \nWhat I ended up doing in my index scheme was to code all the\nmathematical algorithms in MATLAB and get them working there, and then\nhand translate the MATLAB code to C (there is a lot of linear algebra in\nthe algorithms). Debugging was a major pain. With my idea you could\nwrite the whole access method in a high-level language, and the low\nlevel backend interfaces to things like buffer locking, MVCC and\nPostgreSQL memory management would be encapsulated by interfaces in the\nhigh-level language. If there were something like that in PostgreSQL I\nbet a lot more people would be rolling their own access methods. 
I am\nnot sure that people who value stability over new features would see\nthis as a step in the right direction. ;-)\n\n> \n> Comments, objections, better ideas?\n> \n> regards, tom lane\n> \n> ************\n", "msg_date": "Thu, 06 Jan 2000 13:03:04 -0500", "msg_from": "Bernard Adrian Frankpitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Proposed cleanup of index-related planner estimation\n\tprocedures" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> I am thinking about redefining and simplifying the planner's interface\n> to index-type-dependent estimation routines.\n> \n> Currently, each defined index type is supposed to supply two routines:\n> an \"amopselect\" routine and an \"amopnpages\" routine.  (The existing\n> actual routines of this kind are btreesel, btreenpages, etc in\n> src/backend/utils/adt/selfuncs.c.)  These things are called by\n> index_selectivity() in src/backend/optimizer/util/plancat.c.  amopselect\n> tries to determine the selectivity of an indexqual (the fraction of\n> main-table tuples it will select) and amopnpages tries to determine\n> the number of index pages that will be read to do it.\n> \n> Now, this collection of code is largely redundant with\n> optimizer/path/clausesel.c, which also tries to estimate the selectivity\n> of qualification conditions.  Furthermore, the interface to these\n> routines is fundamentally misdesigned, because there is no way to deal\n> with interrelated qualification conditions --- for example, if we have\n> a range query like \"... WHERE x > 10 AND x < 20\", the code estimates\n> the selectivity as the product of the selectivities of the two terms\n> independently, but the actual selectivity is very different from that.\n> I am working on fixing clausesel.c to be smarter about correlated\n> conditions, but it won't do much good to fix that code without fixing\n> the index-related code.\n> \n> What I'm thinking about doing is replacing these two per-index-type\n> routines with a single routine, which is called once per proposed\n> indexscan rather than once per qual clause.  It would receive the\n> whole indexqual list as a parameter, instead of just one qual.\n> A typical implementation would just call clausesel.c's general-purpose\n> code to estimate the selectivity, and then do a little bit of extra\n> work to derive the estimated number of index pages from that number.\n>\n\nSeems good to me.\n\nI have also been suspicious about per qual selectivity and have\nanother example.\nFor the following query\n\tselect * from .. where col1=val1 and col2=val2;\n\nthe selectivity is selectivity of (col1=val1) * selectivity of (col2=val2)\ncurrently. But it's not right in many cases.\n\nThough it's almost impossible to hold disbursions for all combinations\nof columns, it may be possible to hold multi-column disbursions for\nmulti-column indexes.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 7 Jan 2000 09:09:58 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Proposed cleanup of index-related planner estimation\n\tprocedures" } ]
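A concrete sketch of the independence-assumption problem discussed in this thread. The table, column names, and percentages below are hypothetical illustrations, not anything taken from the actual planner code:

-- Range case (Tom's example): suppose 25% of the rows satisfy x > 10
-- and 25% satisfy x < 20. Estimating each qual independently gives
-- 0.25 * 0.25 = 0.0625, although the two quals describe a single
-- range whose true selectivity may be very different.
CREATE TABLE t (x int4, city varchar(32), zipcode varchar(10));
SELECT * FROM t WHERE x > 10 AND x < 20;

-- Multi-column case (Hiroshi's example): if city and zipcode are
-- strongly correlated, the product of the per-column selectivities
-- badly underestimates the number of matching rows; per-index
-- multi-column disbursions, as suggested above, would catch this.
SELECT * FROM t WHERE city = 'Boston' AND zipcode = '02134';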
[ { "msg_contents": "\nWe've almost got UdmSearch up and running, and I'm noticing something odd:\n\n-rw------- 1 pgsql pgsql 204800 Jan 6 00:05 url\n-rw------- 1 pgsql pgsql 1622016 Jan 6 00:05 word_url\n\nurl is:\n\nCREATE TABLE \"url\" (\n \"rec_id\" int4 DEFAULT nextval('next_url_id') PRIMARY KEY,\n \"status\" int4 NOT NULL DEFAULT 0,\n \"url\" character varying(128) NOT NULL,\n \"content_type\" character varying(32) NOT NULL DEFAULT '',\n \"last_modified\" character varying(32) NOT NULL DEFAULT '',\n \"title\" character varying(128) NOT NULL DEFAULT '',\n \"text\" character varying(255) NOT NULL DEFAULT '',\n \"size\" int4 NOT NULL DEFAULT 0,\n \"indexed\" int4 NOT NULL DEFAULT 0,\n \"last_index_time\" datetime NOT NULL DEFAULT 'Thu Dec 31 20:00:00 1970 GMT',\n \"next_index_time\" datetime NOT NULL DEFAULT 'Thu Dec 31 20:00:00 1970 GMT',\n \"referrer\" int4 NOT NULL DEFAULT 0,\n \"tag\" int4 NOT NULL DEFAULT 0,\n \"hops\" int4 NOT NULL DEFAULT 0,\n \"keywords\" character varying(255) NOT NULL DEFAULT '',\n \"description\" character varying(100) NOT NULL DEFAULT '',\n \"crc\" character varying(33) NOT NULL DEFAULT '');\n\nand word_url is:\n\nCREATE INDEX \"word_url\" on \"dict\" using btree ( \"word\" \"varchar_ops\", \"url_id\" \"int4_ops\" );\n\n=============\n\nis it just me, or does an index ~6x the size of the data itself look\n\"odd\"?\n\nIts an older v6.5.0 database (haven't had time to upgrade *sigh*), so if\nexplains it, so be it...I'll do an upgrade ASAP...but if that doesn't?\n\nThanks...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 01:08:44 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "UdmSearch: tables vs indices ..." 
}, { "msg_contents": "The Hermit Hacker wrote:\n> \n> We've almost got UdmSearch up and running, and I'm noticing something odd:\n> \n> -rw------- 1 pgsql pgsql 204800 Jan 6 00:05 url\n> -rw------- 1 pgsql pgsql 1622016 Jan 6 00:05 word_url\n> \n> url is:\n> \n> CREATE TABLE \"url\" (\n> \"rec_id\" int4 DEFAULT nextval('next_url_id') PRIMARY KEY,\n> \"status\" int4 NOT NULL DEFAULT 0,\n> \"url\" character varying(128) NOT NULL,\n> \"content_type\" character varying(32) NOT NULL DEFAULT '',\n> \"last_modified\" character varying(32) NOT NULL DEFAULT '',\n> \"title\" character varying(128) NOT NULL DEFAULT '',\n> \"text\" character varying(255) NOT NULL DEFAULT '',\n> \"size\" int4 NOT NULL DEFAULT 0,\n> \"indexed\" int4 NOT NULL DEFAULT 0,\n> \"last_index_time\" datetime NOT NULL DEFAULT 'Thu Dec 31 20:00:00 1970 GMT',\n> \"next_index_time\" datetime NOT NULL DEFAULT 'Thu Dec 31 20:00:00 1970 GMT',\n> \"referrer\" int4 NOT NULL DEFAULT 0,\n> \"tag\" int4 NOT NULL DEFAULT 0,\n> \"hops\" int4 NOT NULL DEFAULT 0,\n> \"keywords\" character varying(255) NOT NULL DEFAULT '',\n> \"description\" character varying(100) NOT NULL DEFAULT '',\n> \"crc\" character varying(33) NOT NULL DEFAULT '');\n> \n> and word_url is:\n> \n> CREATE INDEX \"word_url\" on \"dict\" using btree ( \"word\" \"varchar_ops\", \"url_id\" \"int4_ops\" );\n> \n> =============\n> \n> is it just me, or does an index ~6x the size of the data itself look\n> \"odd\"?\n> \n> Its an older v6.5.0 database (haven't had time to upgrade *sigh*), so if\n> explains it, so be it...I'll do an upgrade ASAP...but if that doesn't?\n> \n> Thanks...\n\n\nAccording to your CREATE INDEX statement, word_url is on the\ntable dict, not url. Is dict a large dictionary of some sort?\n\nMike\n", "msg_date": "Thu, 06 Jan 2000 00:23:32 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UdmSearch: tables vs indices ..." }, { "msg_contents": "On Thu, 6 Jan 2000, Mike Mascari wrote:\n\n> According to your CREATE INDEX statement, word_url is on the\n> table dict, not url. Is dict a large dictionary of some sort?\n\nDamn...ya, thanks for pointing what should have been obvious :( dict is\n~10Meg and growing, word_url is now 7meg *sigh*\n\nokay...ignore that one :(\n\n\n", "msg_date": "Thu, 6 Jan 2000 01:31:27 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] UdmSearch: tables vs indices ..." }, { "msg_contents": "On Thu, 6 Jan 2000, The Hermit Hacker wrote:\n\n> \n> We've almost got UdmSearch up and running, and I'm noticing something odd:\n> \n> -rw------- 1 pgsql pgsql 204800 Jan 6 00:05 url\n> -rw------- 1 pgsql pgsql 1622016 Jan 6 00:05 word_url\n\nHere's what I have on the test system I'm working with. 
The apache\ndocs are the only thing in it (or that should be in it).\n\n-rw------- 1 postgres postgres 1671168 Dec 15 08:35 url\n-rw------- 1 postgres postgres 278528 Dec 15 08:35 url_crc\n-rw------- 1 postgres postgres 106496 Dec 15 08:35 url_pkey\n-rw------- 1 postgres postgres 335872 Dec 15 08:35 url_url\n-rw------- 1 postgres postgres 1179648 Dec 15 08:35 word_url\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 6 Jan 2000 06:06:01 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UdmSearch: tables vs indices ..." }, { "msg_contents": "On Thu, 6 Jan 2000, Vince Vielhaber wrote:\n\n> On Thu, 6 Jan 2000, The Hermit Hacker wrote:\n> \n> > \n> > We've almost got UdmSearch up and running, and I'm noticing something odd:\n> > \n> > -rw------- 1 pgsql pgsql 204800 Jan 6 00:05 url\n> > -rw------- 1 pgsql pgsql 1622016 Jan 6 00:05 word_url\n> \n> Here's what I have on the test system I'm working with. The apache\n> docs are the only thing in it (or that should be in it).\n> \n> -rw------- 1 postgres postgres 1671168 Dec 15 08:35 url\n> -rw------- 1 postgres postgres 278528 Dec 15 08:35 url_crc\n> -rw------- 1 postgres postgres 106496 Dec 15 08:35 url_pkey\n> -rw------- 1 postgres postgres 335872 Dec 15 08:35 url_url\n> -rw------- 1 postgres postgres 1179648 Dec 15 08:35 word_url\n\nHere is *just* http://www.postgresql.org/docs:\n\n-rw------- 1 pgsql pgsql 3039232 Jan 6 06:02 url\n-rw------- 1 pgsql pgsql 35602432 Jan 6 06:02 dict\n-rw------- 1 pgsql pgsql 303104 Jan 6 06:02 url_pkey\n-rw------- 1 pgsql pgsql 376832 Jan 6 06:02 url_crc\n-rw------- 1 pgsql pgsql 1294336 Jan 6 06:02 url_url\n-rw------- 1 pgsql pgsql 27385856 Jan 6 06:02 word_url\n-rw------- 1 pgsql pgsql 8192 Jan 6 06:01 next_url_id\n\nThey are generating what I think is a very very weird looking query on the\ntables that appears to be just hanging the whole thing...can someone\nexplain to me what *this* would do:\n\n\t\tsum(case dict.word when '$t' then 1 else 0 end)\n\nI'm trying to get more details out of Alexander, since I'm guessing that\nthe query itself could possibly be done cleaner, but they acknowledge that\ntheir PostgreSQL knowledge tends to be rather \"sparse\", at best :)\n\nMore as it becomes available ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 09:15:11 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] UdmSearch: tables vs indices ..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> explain to me what *this* would do:\n\n> \t\tsum(case dict.word when '$t' then 1 else 0 end)\n\nLooks to me like it generates the same result as\n\n\tselect count(*) where dict.word = '$t';\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jan 2000 10:52:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] UdmSearch: tables vs indices ... " } ]
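To expand on Tom's note above: the CASE expression counts matching rows inline, and one plausible reason (an assumption here, not something stated by the UdmSearch authors) for generating that form is that several words can be tallied in a single pass. Illustration only, using the dict table from this thread with stand-in words:

-- Counting one word via CASE ...
SELECT sum(CASE word WHEN 'apache' THEN 1 ELSE 0 END) FROM dict;
-- ... returns the same number as the plain aggregate form:
SELECT count(*) FROM dict WHERE word = 'apache';

-- The CASE form pays off when several words are counted in one scan
-- of dict rather than one query per word:
SELECT sum(CASE word WHEN 'apache' THEN 1 ELSE 0 END) AS n_apache,
       sum(CASE word WHEN 'module' THEN 1 ELSE 0 END) AS n_module
FROM dict;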
[ { "msg_contents": "OK, I've updated most of the regression tests to match the new psql\noutput conventions. Several of the tests at the end, starting with\n\"join\", currently fail on my test setup, but that is probably because\nI've got a few changes in for \"outer join syntax\" which are not\nsufficient to avoid crashes. It might be good for someone to run the\nregression test and hand-inspect the tests which fail to verify that\nit is just a formatting difference and not the \"backend closed\nconnection\" I'm seeing here.\n\nOnce I've got a bit more code done, I'll come back to the regression\ntests. If someone wants to finish up the regression test updates, then\nthe only thing remaining is to do the following:\n\n1) run the regression test (hey, that \"parallel testing\" looks\ninteresting btw; thanks Jan!)\n\n2) cd results\n\n3) For each of the failed tests at the end,\n diff -w <testfile.out> ../expected/ | less\n (if differences are not in the query results)\n cp -p <testfile.out> ../expected/\n\n3') There may be one or two \"expected\" files coming from ../output/;\nit isn't that complicated to get those updated, involving copying the\nresults file to ../output/ and then modifying the file path names.\n\n4) Commit the changed \"expected\" files to CVS\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 06 Jan 2000 07:01:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Regression tests" } ]
[ { "msg_contents": "I've got no problems with spaces being used rather than the tab\ncharacter either.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: The Hermit Hacker [mailto:[email protected]]\nSent: Tuesday, January 04, 2000 12:21 AM\nTo: Bruce Momjian\nCc: Jan Wieck; Hannu Krosing; Thomas Lockhart; PostgreSQL-development\nSubject: Re: [HACKERS] Source code format vote\n\n\n\nI'm for no-tabs personally ...\n\n\nOn Mon, 3 Jan 2000, Bruce Momjian wrote:\n\n> > Hannu Krosing wrote:\n> > \n> > > Tom Lane wrote:\n> > > >\n> > > > Thomas Lockhart <[email protected]> writes:\n> > > > > Was \"spaces instead of tabs\" one of the voted-on options? That\nwould\n> > > > > make the tab issue moot, and would result in consistant\nappearance not\n> > > > > matter what tab setting one is using.\n> > > >\n> > > > I'd be willing to vote for this if the space penalty is not\nlarge...\n> > >\n> > > Me too!\n> > \n> > Count me in.\n> \n> Do I need to tabluate a vote on this too?\n> \n> I have to get all new votes from everyone on:\n> \t\n> \t8-space tabs\n> \tno tabs\n> \n> Indentaion is still 4-spaces.\n> \n> I prefer the 8-space tabs to no tabs. Seems like 8-space tabs won\nover\n> 4-space tabs, so we need a vote on 8-space tabs vs. no tabs.\n> \n> I will need new votes because I have not kept any of the old messages.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick:\nScrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary:\nscrappy@{freebsd|postgresql}.org \n\n\n************\n", "msg_date": "Thu, 6 Jan 2000 10:04:45 -0000 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Source code format vote" } ]
[ { "msg_contents": "I am currently looking into the possibility of extending the current\npostgres SQL implementation to be compatible with Informix's SQL\nimplementation.\n\nThe changes required seem relatively straightforward, with one notable\nexception.\n\nRequirements:\n\t1/\tDatetime type specifiers (should have no impact)\n\t\to\tinformix uses datetime specifiers of the form\n\t\t\tDATETIME YEAR TO HOUR. (which is just the year,\n\t\t\tmonth, day and hour portion of a datetime).\n\t2/\tInterval type specifiers (ditto)\n\t\to\tinformix uses interval specifiers of the form\n\t\t\tINTERVAL DAY TO HOUR. (which is just the \n\t\t\tday and hour portion of an interval).\n\t3/\tMoney type specifiers\n\t\to\tinformix has money type specifiers that are akin\n\t\t\tto decimal speicifiers\n\t4/\tInformix outer join syntax\n\t\to\tinformix uses outer joins of the form\n\t\t\tSELECT * FROM a, outer b where a.nr = b.nr\n\t\t\tThis will require some post-processing to determine\n\t\t\tthe actual join conditions.\n\t5/\tserial data type\n\t\to\tSerial type must return inserted key value\n\t\to\tUnfortunately (and this is the big bad hit)\n\t\t\tinformix's serial datatype does serial number\n\t\t\tgeneration on a zero inserted valued.\n\t\t\tThe modification required to do this may have\n\t\t\timpact on existing programs.\n\n\nI'd be interested if anyone can see any conceptual difficulties i've\nmissed in these definitions, and welcome any concepts on the\nimplementation.\n\n\n.............................Rod\n\n+-----------------------------------------------------------------------------+\n| Rod Chamberlin | [email protected] Tel +44 1703 232345 |\n| Software Engineer | Mob +44 7803 295406 |\n| QueriX | Fax +44 1703 399685 |\n+-----------------------------------------------------------------------------+\n| The views expressed in this document do not necessarily represent those of |\n| the management of QueriX (UK) Ltd. |\n+-----------------------------------------------------------------------------+\n\n", "msg_date": "Thu, 6 Jan 2000 12:49:36 +0000 (GMT)", "msg_from": "Rod Chamberlin <[email protected]>", "msg_from_op": true, "msg_subject": "Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "Rod Chamberlin <[email protected]> writes:\n> I am currently looking into the possibility of extending the current\n> postgres SQL implementation to be compatible with Informix's SQL\n> implementation.\n\nDon Baccus already made the point that we are more interested in being\ncompatible with the standard than with specific commercial\nimplementations, so I won't repeat it. I do have a couple of practical\nsuggestions though:\n\n> \t1/\tDatetime type specifiers (should have no impact)\n> \t2/\tInterval type specifiers (ditto)\n\nWe support enough datestyle variants already that it's hard to believe\nthere isn't one that will meet your needs. But if not, I think adding\nan \"Informix\" datestyle option might be considered reasonable.\n\n> \t5/\tserial data type\n> \t\to\tSerial type must return inserted key value\n> \t\to\tUnfortunately (and this is the big bad hit)\n> \t\t\tinformix's serial datatype does serial number\n> \t\t\tgeneration on a zero inserted valued.\n> \t\t\tThe modification required to do this may have\n> \t\t\timpact on existing programs.\n\nBreaking existing applications will not fly. 
If you have lots of\ncode that depends on this behavior, you could easily emulate it\nby adding a BEFORE INSERT trigger on each table that needs it.\nIgnoring the boilerplate, the critical bit would look like:\n\n\tif new.serialcolumn = 0 then\n\t\tnew.serialcolumn = nextval('sequenceobject');\n\nHowever, if you need to know what value is being given to the\ninserted tuple, much the cleanest solution is to select nextval\nbefore inserting:\n\n\tSELECT nextval('sequenceobject');\n\tINSERT INTO table VALUES(... , value-you-just-got, ...);\n\nIf you are always going to do that, then a trigger is a waste of cycles.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jan 2000 10:50:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL " }, { "msg_contents": "> I am currently looking into the possibility of extending the current\n> postgres SQL implementation to be compatible with Informix's SQL\n> implementation.\n> \n> The changes required seem relatively straightforward, with one notable\n> exception.\n\nI am very familiar with Informix.\n\n> \n> Requirements:\n> \t1/\tDatetime type specifiers (should have no impact)\n> \t\to\tinformix uses datetime specifiers of the form\n> \t\t\tDATETIME YEAR TO HOUR. (which is just the year,\n> \t\t\tmonth, day and hour portion of a datetime).\n\nI have to admit I usually find this very confusing with Informix.\n\n> \t2/\tInterval type specifiers (ditto)\n> \t\to\tinformix uses interval specifiers of the form\n> \t\t\tINTERVAL DAY TO HOUR. (which is just the \n> \t\t\tday and hour portion of an interval).\n\nThis I can usually understand, though I think we can also do this clearer\nthan Informix.\n\n> \t3/\tMoney type specifiers\n> \t\to\tinformix has money type specifiers that are akin\n> \t\t\tto decimal specifiers\n\nWe have a MONEY type now, and are looking to invisibly use DECIMAL for\nthis instead.\n\n> \t4/\tInformix outer join syntax\n> \t\to\tinformix uses outer joins of the form\n> \t\t\tSELECT * FROM a, outer b where a.nr = b.nr\n> \t\t\tThis will require some post-processing to determine\n> \t\t\tthe actual join conditions.\n\nBelieve it or not, I am hoping to get this into 7.0.  The ANSI syntax\nrequires a lot of optimizer changes, because it basically allows user\nspecification of the join order.  In talking to Thomas, we hoped to\nimplement OUTER as a flag on the table that we could easily implement in\n7.0.  Let's see how it goes.\n\n> \t5/\tserial data type\n> \t\to\tSerial type must return inserted key value\n\nHow does Informix return the value?\n\n> \t\to\tUnfortunately (and this is the big bad hit)\n> \t\t\tinformix's serial datatype does serial number\n> \t\t\tgeneration on a zero inserted value.\n> \t\t\tThe modification required to do this may have\n> \t\t\timpact on existing programs.\n\nYes, I have been thrown off by this.  We don't allow a zero to\nauto-number.  You have to use nextval('sequence_name') in the query to\nsupply the sequence value, not a zero.  I can see this as a pain, but\nthe developers think the 0 replace with nextval() thing is strange and\nnon-intuitive.  The current behavior fits in the DEFAULT column\nactivation in a logical way.  I don't think I can get people to make\nthis change.
The 0 replacement is a behind the scenes thing, while\nDEFAULT and nextval() calls are logically consistent.\n\n> I'd be interested if anyone can see any conceptual difficulties I've\n> missed in these definitions, and welcome any concepts on the\n> implementation.\n\nI agree Informix compatibility is a good thing.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 11:08:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> However, if you need to know what value is being given to the\n> inserted tuple, much the cleanest solution is to select nextval\n> before inserting:\n> \n> \tSELECT nextval('sequenceobject');\n> \tINSERT INTO table VALUES(... , value-you-just-got, ...);\n> \n> If you are always going to do that, then a trigger is a waste of cycles.\n\nHe can do:\n\n \tINSERT INTO table VALUES(... , nextval('sequenceobject'), ...);\n\nand currval() will get him the previous nextval() value.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 11:11:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "On Thu, 6 Jan 2000, Tom Lane wrote:\n\n> > \t1/\tDatetime type specifiers (should have no impact)\n> > \t2/\tInterval type specifiers (ditto)\n> \n> We support enough datestyle variants already that it's hard to believe\n> there isn't one that will meet your needs.  But if not, I think adding\n> an \"Informix\" datestyle option might be considered reasonable.\n\nIsn't Thomas trying to reduce the number of variants? \n\n\nMarc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 12:36:10 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Thu, 6 Jan 2000, Tom Lane wrote:\n>> We support enough datestyle variants already that it's hard to believe\n>> there isn't one that will meet your needs.
But if not, I think adding\n>> an \"Informix\" datestyle option might be considered reasonable.\n\n> Isn't Thomas trying to reduce the number of variants? \n\nHe wants to eliminate the essentially-duplicate datatypes, but I didn't\nthink he was proposing eliminating any datestyle functionality...\nthere would be squawks if he did, methinks...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jan 2000 11:40:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL " }, { "msg_contents": "On Thu, 6 Jan 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > On Thu, 6 Jan 2000, Tom Lane wrote:\n> >> We support enough datestyle variants already that it's hard to believe\n> >> there isn't one that will meet your needs.
The way informix decided to do DATETIME stuff is\ndefinitely odd. That said, from a calculation standpoint you can pretty\nmuch ignore the qualifier during calculations, it's only really important\nin the representation. (I'm actually making assumptions here, and it\nproduces considerable work at the representation stages, but that can\neasily be accommodated).\n\n> \n> > \t2/\tInterval type specifiers (ditto)\n> > \t\to\tinformix uses interval specifiers of the form\n> > \t\t\tINTERVAL DAY TO HOUR. (which is just the \n> > \t\t\tday and hour portion of an interval).\n> \n> This I can usually understand, though I think we can also do this clearer\n> than Informix.\n> \n> > \t3/\tMoney type specifiers\n> > \t\to\tinformix has money type specifiers that are akin\n> > \t\t\tto decimal specifiers\n> \n> We have a MONEY type now, and are looking to invisibly use DECIMAL for\n> this instead.\n> \n\nThis would actually be sensible. My comment about money is that the\nexisting type doesn't have a concept of precision; two decimal places of\nmoney is somewhat meaningless where in the local currency it takes 1000\nwashers to buy a packet of crisps. The ability to set the precision of\nthe MONEY type is kinda important in this case.\n\n> > \t4/\tInformix outer join syntax\n> > \t\to\tinformix uses outer joins of the form\n> > \t\t\tSELECT * FROM a, outer b where a.nr = b.nr\n> > \t\t\tThis will require some post-processing to determine\n> > \t\t\tthe actual join conditions.\n> \n> Believe it or not, I am hoping to get this into 7.0.  The ANSI syntax\n> requires a lot of optimizer changes, because it basically allows user\n> specification of the join order.  In talking to Thomas, we hoped to\n> implement OUTER as a flag on the table that we could easily implement in\n> 7.0.  Let's see how it goes.\n> \n\nSounds great! :)\n\n> > \t5/\tserial data type\n> > \t\to\tSerial type must return inserted key value\n> \n> How does Informix return the value?\n> \n\nFrom a user standpoint it mystically appears in sqlca just after the\ninsert statement is executed.  Actually the informix engine recognises\nit's just done a serial insert, and sends it back in addition to the\nstandard status packets.\n\n> > \t\to\tUnfortunately (and this is the big bad hit)\n> > \t\t\tinformix's serial datatype does serial number\n> > \t\t\tgeneration on a zero inserted value.\n> > \t\t\tThe modification required to do this may have\n> > \t\t\timpact on existing programs.\n> \n> Yes, I have been thrown off by this.  We don't allow a zero to\n> auto-number.  You have to use nextval('sequence_name') in the query to\n> supply the sequence value, not a zero.  I can see this as a pain, but\n> the developers think the 0 replace with nextval() thing is strange and\n> non-intuitive.  The current behavior fits in the DEFAULT column\n> activation in a logical way.  I don't think I can get people to make\n> this change.  The 0 replacement is a behind the scenes thing, while\n> DEFAULT and nextval() calls are logically consistent.\n> \n\nI can understand the situation here (one of the main reasons I raised the\nthread in the first place).  Above all else the difficulty I have with\nserial at the moment is the impossibility of differentiating a serial from\nan int4 after creation (after all the database treats them identically).\nThe catalog tables don't contain any information.  The only way you can\nwork out you created a serial column is by looking for an appropriately\nnamed sequence in the database on every int4 column that exists (or am I\nwrong?).
This is not exactly something that appeals to me.\n\nAlso, in order to get correct returns from the serial column insert it\nseems likely that the serial type would have to gain some kind of extra\nspecial processing within the database above what it has already.  In this\ncase all of the required behaviour could probably be implemented.\n\n\n> > I'd be interested if anyone can see any conceptual difficulties I've\n> > missed in these definitions, and welcome any concepts on the\n> > implementation.\n> \n> I agree Informix compatibility is a good thing.\n> \n> -- \n>   Bruce Momjian                        |  http://www.op.net/~candle\n>   [email protected]            |  (610) 853-3000\n>   +  If your life is a hard drive,     |  830 Blythe Avenue\n>   +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n> \n\n.............................Rod\n\n+-----------------------------------------------------------------------------+\n|  Rod Chamberlin           |  [email protected]          Tel +44 1703 232345    |\n|  Software Engineer        |                            Mob +44 7803 295406   |\n|  QueriX                   |                            Fax +44 1703 399685   |\n+-----------------------------------------------------------------------------+\n|  The views expressed in this document do not necessarily represent those of  |\n|  the management of QueriX (UK) Ltd.                                          |\n+-----------------------------------------------------------------------------+\n\n\n\n\n", "msg_date": "Thu, 6 Jan 2000 17:09:16 +0000 (GMT)", "msg_from": "Rod Chamberlin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> > I have to admit I usually find this very confusing with Informix.\n> \n> I can't disagree.  The way informix decided to do DATETIME stuff is\n> definitely odd.  That said, from a calculation standpoint you can pretty\n> much ignore the qualifier during calculations, it's only really important\n> in the representation. (I'm actually making assumptions here, and it\n> produces considerable work at the representation stages, but that can\n> easily be accommodated).\n\nYes, I don't want to start having to explain that mess to people.\n\n> \n> > \n> > > \t2/\tInterval type specifiers (ditto)\n> > > \t\to\tinformix uses interval specifiers of the form\n> > > \t\t\tINTERVAL DAY TO HOUR. (which is just the \n> > > \t\t\tday and hour portion of an interval).\n> > \n> > This I can usually understand, though I think we can also do this clearer\n> > than Informix.\n> > \n> > > \t3/\tMoney type specifiers\n> > > \t\to\tinformix has money type specifiers that are akin\n> > > \t\t\tto decimal specifiers\n> > \n> > We have a MONEY type now, and are looking to invisibly use DECIMAL for\n> > this instead.\n> > \n> \n> This would actually be sensible.  My comment about money is that the\n> existing type doesn't have a concept of precision; two decimal places of\n> money is somewhat meaningless where in the local currency it takes 1000\n> washers to buy a packet of crisps.  The ability to set the precision of\n> the MONEY type is kinda important in this case.\n\nThe move to make MONEY use decimal would add precision.\n\n> > > \t5/\tserial data type\n> > > \t\to\tSerial type must return inserted key value\n> > \n> > How does Informix return the value?\n> > \n> \n> From a user standpoint it mystically appears in sqlca just after the\n> insert statement is executed.
Actually the informix engine recognises\n> it's just done a serial insert, and sends it back in addition to the\n> standard status packets.\n\nYes, we have currval() which allows such retrieval _inside_ the\ndatabase, as well as in the application.\n\n\n> I can understand the situation here (one of the main reasons I raised the\n> thread in the first place).  Above all else the difficulty I have with\n> serial at the moment is the impossibility of differentiating a serial from\n> an int4 after creation (after all the database treats them identically).\n> The catalog tables don't contain any information.  The only way you can\n> work out you created a serial column is by looking for an appropriately\n> named sequence in the database on every int4 column that exists (or am I\n> wrong?).  This is not exactly something that appeals to me.\n\nYes, the SERIAL gets lost once it is created.  This can cause confusion\nbecause doing a \\dt on the table shows it as an INT4 with DEFAULT, and\nnot a serial.  This can confuse people.  I remember someone saying we\nwould need to keep the SERIAL understanding around so we would use it\nfor pg_dump, but I don't remember why we needed to do that.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 13:12:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> I've been wanting outer joins, but in my porting efforts have managed\n> to work around them without too much difficulty, even though 6.5's\n> limitations on subselects (not in target lists) requires that I\n> create PL/pgSQL functions in some cases.\n> \n> I certainly can't speak for the majority of users, but as one data\n> point I'd personally rather see outer joins done right (SQL 92\n> syntax) and wait a bit.\n> \n> Then again, I tend to be a bit of a language purist...\n> \n\nThomas has tried to explain the ANSI syntax for outer joins, and I must\nsay I am quite confused by it.  A simple OUTER added before the column\nname would be a quick and simple way to do outers, perhaps get them into\n7.0, and allow new users to do outers without having to learn the quite\ncomplex ANSI syntax.\n\nAt least that was my idea.\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.
|  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 13:20:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "On Thu, 6 Jan 2000, Bruce Momjian wrote:\n\n> > I've been wanting outer joins, but in my porting efforts have managed\n> > to work around them without too much difficulty, even though 6.5's\n> > limitations on subselects (not in target lists) requires that I\n> > create PL/pgSQL functions in some cases.\n> > \n> > I certainly can't speak for the majority of users, but as one data\n> > point I'd personally rather see outer joins done right (SQL 92\n> > syntax) and wait a bit.\n> > \n> > Then again, I tend to be a bit of a language purist...\n> > \n> \n> Thomas has tried to explain the ANSI syntax for outer joins, and I must\n> say I am quite confused by it.  A simple OUTER added before the column\n> name would be a quick and simple way to do outers, perhaps get them into\n> 7.0, and allow new users to do outers without having to learn the quite\n> complex ANSI syntax.\n> \n> At least that was my idea.\n\nFirst, I'm for getting OUTER JOINs in ASAP...but, I'm a little concerned\nwith the thought of throwing in what *sounds* like a 'stop gap' measure...\n\nJust to clarify...\"A simple OUTER added before the column\" would be a\nPostgreSQL-ism?  Sort of like Oracle and all the rest have their own\nspecial traits?  Eventually, the plan is to implement OJs as \"SQL92 spec\",\nand leave our -ism in for backwards compatibility?\n\nMarc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 14:39:55 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> > Thomas has tried to explain the ANSI syntax for outer joins, and I must\n> > say I am quite confused by it.  A simple OUTER added before the column\n> > name would be a quick and simple way to do outers, perhaps get them into\n> > 7.0, and allow new users to do outers without having to learn the quite\n> > complex ANSI syntax.\n> > \n> > At least that was my idea.\n> \n> First, I'm for getting OUTER JOINs in ASAP...but, I'm a little concerned\n> with the thought of throwing in what *sounds* like a 'stop gap' measure...\n> \n> Just to clarify...\"A simple OUTER added before the column\" would be a\n> PostgreSQL-ism?  Sort of like Oracle and all the rest have their own\n> special traits?  Eventually, the plan is to implement OJs as \"SQL92 spec\",\n> and leave our -ism in for backwards compatibility?\n\nYes, OUTER is an Informix-ism.  Oracle uses *=.  I think the first is\neasier to add and makes more sense for us.  *= could be defined by\nsomeone as an operator, and overloading our already complex operator\ncode to do *= for OUTER may be too complex for people to understand.\n\nIt would be:\n\n\tSELECT *\n\tFROM tab1, OUTER tab2\n\tWHERE tab1.col1 = tab2.col2\n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.
|  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 13:44:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "On Thu, 6 Jan 2000, Bruce Momjian wrote:\n\n> > > Thomas has tried to explain the ANSI syntax for outer joins, and I must\n> > > say I am quite confused by it.  A simple OUTER added before the column\n> > > name would be a quick and simple way to do outers, perhaps get them into\n> > > 7.0, and allow new users to do outers without having to learn the quite\n> > > complex ANSI syntax.\n> > > \n> > > At least that was my idea.\n> > \n> > First, I'm for getting OUTER JOINs in ASAP...but, I'm a little concerned\n> > with the thought of throwing in what *sounds* like a 'stop gap' measure...\n> > \n> > Just to clarify...\"A simple OUTER added before the column\" would be a\n> > PostgreSQL-ism?  Sort of like Oracle and all the rest have their own\n> > special traits?  Eventually, the plan is to implement OJs as \"SQL92 spec\",\n> > and leave our -ism in for backwards compatibility?\n> \n> Yes, OUTER is an Informix-ism.  Oracle uses *=.  I think the first is\n> easier to add and makes more sense for us.  *= could be defined by\n> someone as an operator, and overloading our already complex operator\n> code to do *= for OUTER may be too complex for people to understand.\n> \n> It would be:\n> \n> \tSELECT *\n> \tFROM tab1, OUTER tab2\n> \tWHERE tab1.col1 = tab2.col2\n\nWhat about >2 table joins?  Wish I had my book here, but I thought you\ncould do multiple OUTER joins, no?\n\n\nMarc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected]           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 15:01:49 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> I've been wanting outer joins, but in my porting efforts have managed\n> to work around them without too much difficulty, even though 6.5's\n> limitations on subselects (not in target lists) requires that I\n> create PL/pgSQL functions in some cases.\n> I certainly can't speak for the majority of users, but as one data\n> point I'd personally rather see outer joins done right (SQL 92\n> syntax) and wait a bit.\n\nA bit of a misunderstanding here: we are using SQL92 syntax but will\ntry to implement the outer join operation using *internal* data\nstructures similar to what we have now.\n\nAny alternate syntaxes are just a diversion which slow us down on the\nroad to world domination ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 06 Jan 2000 19:08:00 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with InformixSQL" }, { "msg_contents": "> > Yes, OUTER is an Informix-ism.  Oracle uses *=.  I think the first is\n> > easier to add and makes more sense for us.  *= could be defined by\n> > someone as an operator, and overloading our already complex operator\n> > code to do *= for OUTER may be too complex for people to understand.\n> > \n> > It would be:\n> > \n> > \tSELECT *\n> > \tFROM tab1, OUTER tab2\n> > \tWHERE tab1.col1 = tab2.col2\n> \n> What about >2 table joins?
Wish I had my book here, but I thought you\n> could do multiple OUTER joins, no?\n\n \tSELECT *\n \tFROM tab1, OUTER tab2, OUTER tab3\n \tWHERE tab1.col1 = tab2.col2 AND\n\t      tab1.col3 = tab3.col3\n\n\nMy assumption is that you can't join tab2 to tab3 because tab2 is already\nouter, but I don't know.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 14:16:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> > I've been wanting outer joins, but in my porting efforts have managed\n> > to work around them without too much difficulty, even though 6.5's\n> > limitations on subselects (not in target lists) requires that I\n> > create PL/pgSQL functions in some cases.\n> > I certainly can't speak for the majority of users, but as one data\n> > point I'd personally rather see outer joins done right (SQL 92\n> > syntax) and wait a bit.\n> \n> A bit of a misunderstanding here: we are using SQL92 syntax but will\n> try to implement the outer join operation using *internal* data\n> structures similar to what we have now.\n> \n> Any alternate syntaxes are just a diversion which slow us down on the\n> road to world domination ;)\n\nOK, I stand corrected.  Let world domination continue.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 14:17:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with InformixSQL" }, { "msg_contents": "At 11:08 AM 1/6/00 -0500, Bruce Momjian wrote:\n>\n>> \t4/\tInformix outer join syntax\n>> \t\to\tinformix uses outer joins of the form\n>> \t\t\tSELECT * FROM a, outer b where a.nr = b.nr\n>> \t\t\tThis will require some post-processing to determine\n>> \t\t\tthe actual join conditions.\n>\n>Believe it or not, I am hoping to get this into 7.0.  The ANSI syntax\n>requires a lot of optimizer changes, because it basically allows user\n>specification of the join order.  In talking to Thomas, we hoped to\n>implement OUTER as a flag on the table that we could easily implement in\n>7.0.  Let's see how it goes.\n\nHmmm...I have to question the wisdom of this, because once in and\nused there will be pressure to support it forever.  How will this\nplay with the SQL 92 syntax?  Order specification isn't a bad thing\ngiven the fact that outer joins aren't associative (SQL for smarties\ngives examples).
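A standard illustration of that non-associativity, sketched here with hypothetical tables a, b, and c (each with a column nr) in the SQL92 join syntax this thread is aiming for; PostgreSQL could not yet execute these statements when this was written:

-- Rows of a with no match in b get NULL for b.nr, so the inner join
-- against c then discards them:
SELECT * FROM (a LEFT JOIN b ON a.nr = b.nr)
              JOIN c ON b.nr = c.nr;

-- Regrouped, every row of a survives, null-extended where b joined
-- to c produced no match, so the two queries can return different
-- row sets:
SELECT * FROM a LEFT JOIN
              (b JOIN c ON b.nr = c.nr) ON a.nr = b.nr;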
\n\nI've been wanting outer joins, but in my porting efforts have managed\nto work around them without too much difficulty, even though 6.5's\nlimitations on subselects (not in target lists) requires that I\ncreate PL/pgSQL functions in some cases.\n\nI certainly can't speak for the majority of users, but as one data\npoint I'd personally rather see outer joins done right (SQL 92\nsyntax) and wait a bit.\n\nThen again, I tend to be a bit of a language purist...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Jan 2000 12:40:01 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix\n SQL" }, { "msg_contents": "At 02:39 PM 1/6/00 -0400, The Hermit Hacker wrote:\n\n>Just to clarify...\"A simple OUTER added before the column\" would be a\n>PostgreSQL-ism?\n\nSounds like an Informix-ism if I read the thread correctly.\n\n> Sort of like Oracle and all the rest have their own\n>special traits?\n\nThough I'm familiar with the Oracle syntax (far too familiar at the\nmoment, as I'm porting literally thousands of lines of queries many\nof which do Oracle outer joins!), the style described by Bruce seems\nnicer.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Jan 2000 18:52:00 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix\n SQL" }, { "msg_contents": "At 07:08 PM 1/6/00 +0000, Thomas Lockhart wrote:\n>> I've been wanting outer joins, but in my porting efforts have managed\n>> to work around them without too much difficulty, even though 6.5's\n>> limitations on subselects (not in target lists) requires that I\n>> create PL/pgSQL functions in some cases.\n>> I certainly can't speak for the majority of users, but as one data\n>> point I'd personally rather see outer joins done right (SQL 92\n>> syntax) and wait a bit.\n>\n>A bit of a misunderstanding here: we are using SQL92 syntax but will\n>try to implement the outer join operation using *internal* data\n>structures similar to what we have now.\n\nYes, I've seen the existing code, in particular regarding inner\njoins.\n\n>Any alternate syntaxes are just a diversion which slow us down on the\n>road to world domination ;)\n\nThat's my first feeling, too, as I hope I made clear.\n\nIf you don't mind my asking, just what are the difficulties? Bruce\nmentioned the optimizer. I noticed the executor code that does\nmerge joins has conditionalized stuff in it to insert the nulls\nrequired by outer join. And the parser has conditionalized stuff\nto deal with them. 
\n\nSo, is it (\"just\", he says :) the optimizer, or more?\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Jan 2000 18:57:41 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with InformixSQL" }, { "msg_contents": "> >A bit of a misunderstanding here: we are using SQL92 syntax but will\n> >try to implement the outer join operation using *internal* data\n> >structures similar to what we have now.\n> \n> Yes, I've seen the existing code, in particular regarding inner\n> joins.\n> \n> >Any alternate syntaxes are just a diversion which slow us down on the\n> >road to world domination ;)\n> \n> That's my first feeling, too, as I hope I made clear.\n> \n> If you don't mind my asking, just what are the difficulties? Bruce\n> mentioned the optimizer. I noticed the executor code that does\n> merge joins has conditionalized stuff in it to insert the nulls\n> required by outer join. And the parser has conditionalized stuff\n> to deal with them. \n> \n> So, is it (\"just\", he says :) the optimizer, or more?\n\nOK, let me summarize where we are. Thomas is the man on this.\n\nThomas is doing the ANSI syntax in gram.y and passing information around\nin the parser. We then need code in the executor for Merge/Hash/Nested\nLoop joins to do outer joins.\n\nThe requirement in the optimizer is to have the _outer_ column always in\nthe left/outer position in hash/nested loop joins. Mergejoin can have\nit either place. The ANSI syntax also specifies the exact join that\ngets the outer, and I am not sure how to get that information/control\ninto the optimizer.\n\nThomas is now redesigning the parser _outer_ code to pass around the\nouter information in a better way than his first cut at the code.\n\nThat is where we are. There are many people ready to get involved when\nthere is a need. I know many want this in 7.0.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 22:22:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with InformixSQL" }, { "msg_contents": "> If you don't mind my asking, just what are the difficulties? Bruce\n> mentioned the optimizer. I noticed the executor code that does\n> merge joins has conditionalized stuff in it to insert the nulls\n> required by outer join. And the parser has conditionalized stuff\n> to deal with them.\n\nThe conditional stuff is from my poking at it over the last few\nmonths. OK, the difficulties are (I'll probably leave something out):\n\n1) The parser is written to handle the traditional inner join syntax,\nwhich separates the FROM and WHERE clauses into two distinct pieces.\nThe outer join syntax (which of course can also do inner joins) has\nqualifiers and table and column \"aliases\" buried down in the FROM\nclause, and it is a pain to percolate that back up as it is\ntransformed by the parser backend.\n\n2) The optimizer usually feels free to try every combination of inner\njoins, since they are completely transitive. 
But outer joins are not:\nthey need to be done in a specific order since the *absence* of a\nmatch is significant.\n\n3) The executor needs to understand how to expand a left- or\nright-side tuple into a null-filled result. I've played with the\nmergejoin code and have taught it to walk the tables correctly, but it\nneeds code added which actually generates the result tuple. And the\nother join methods don't know anything about outer joins yet.\n\nEnough?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 07 Jan 2000 06:47:46 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with InformixSQL" }, { "msg_contents": "On Thu, 6 Jan 2000, Bruce Momjian wrote:\n\n[snip]\n\n> The move to make MONEY use decimal would add precision.\n> \n> > > > \t5/\tserial data type\n> > > > \t\to\tSerial type must return inserted key value\n> > > \n> > > How does Informix return the value?\n> > > \n> > \n> > >From a user standpoint it mystically appears in sqlca just after the\n> > insert statement is executed.  Actually the informix engine recognises\n> > it's just done a serial insert, and sends it back in addition to the\n> > standard status packets.\n> \n> Yes, we have currval() which allows such retrieval _inside_ the\n> database, as well as in the application.\n> \n\nYes, but the interface cannot tell what it's operating on, so it doesn't\nknow to fetch currval; consider the following statement:\n\ninsert into mytable values('Hello',0,0,23,17.0,0.0);\n\nAre any of the values inserted into serial columns?\n\nYou have no way of knowing.  In fact any one of the last 5 columns could \npotentially be serial values being inserted (although if it's the third\nor fourth column we don't need to do any extra processing (*)). In the same\nway the interface layer can see the SQL statement and not know if it has\nto do any extra work for informix compatibility in terms of fetching the\nextra values back from the sequence which Postgres has created for us.\n\n(*) Actually we probably do, since we need to ensure that the sequence\nvalue has passed the inserted value if we do a non-null insert on a serial\ncolumn, otherwise we may later regenerate the same serial number.\n\nThe above example is a relatively simple one to parse and analyze. A more\ncomplicated case that we'd also probably have to recognise would be\nsomething like\n\nselect x,y,z,p+1 from base_table insert into mytable\n\nShort of having an SQL parser, how are you supposed to determine the\nrequired behaviour?\n\nThere are other issues with serial which suggest that better processing is\nprobably required; they are currently completely useless in the context of\ntemporary tables, since the underlying sequence is never dropped.\n\n> \n> > I can understand the situation here (one of the main reasons I raised the\n> > thread in the first place). Above all else the difficulty I have with\n> > serial at the moment is the impossibility of differentiating a serial from\n> > an int4 after creation (after all the database treats them identically).\n> > The catalog tables don't contain any information. The only way you can\n> > work out you created a serial column is by looking for an appropriately\n> > named sequence in the database on every int4 column that exists (or am I\n> > wrong?). This is not exactly something that appeals to me\n> \n> Yes, the SERIAL gets lost once it is created.
This can cause confusion\n> because doing a \\dt on the table shows it as an INT4 with DEFAULT, and\n> not a serial. This can confuse people. I remember someone saying we\n> would need to keep the SERIAL understanding around so we would use it\n> for pg_dump, but I don't remember why we needed to do that.\n> \n\nThis is odd actually. I can't see why you'd need to do it either, since\nyou must already have the information you need to recreate the thing.\n\nThe confusion though is not that I can't work out it's a serial, but\nthat a program can't work out it's a serial.\n\n\n.............................Rod\n\n+-----------------------------------------------------------------------------+\n| Rod Chamberlin | [email protected] Tel +44 1703 232345 |\n| Software Engineer | Mob +44 7803 295406 |\n| QueriX | Fax +44 1703 399685 |\n+-----------------------------------------------------------------------------+\n| The views expressed in this document do not necessarily represent those of |\n| the management of QueriX (UK) Ltd. |\n+-----------------------------------------------------------------------------+\n\n", "msg_date": "Fri, 7 Jan 2000 11:19:24 +0000 (GMT)", "msg_from": "Rod Chamberlin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> > Yes, we have currval() which allows such retrieval _inside_ the\n> > database, as well as in the application.\n> > \n> \n> Yes, but the interface cannot tell what it's operating on, so it doesn't\n> know to fetch currval; consider the following statement:\n> \n> insert into mytable values('Hello',0,0,23,17.0,0.0);\n> \n> Are any of the inserted values going into serial columns?\n> \n> You have no way of knowing. In fact any one of the last 5 columns could \n> potentially be serial values being inserted (although if it's the third\n> or fourth column we don't need to do any extra processing (*)). In the same\n> way the interface layer can see the SQL statement and not know if it has\n> to do any extra work for Informix compatibility in terms of fetching the\n> extra values back from the sequence which Postgres has created for us.\n> \n> (*) Actually we probably do, since we need to ensure that the sequence\n> value has passed the inserted value if we do a non-null insert on a serial\n> column, otherwise we may later regenerate the same serial number.\n\nYes, I see your point, and the fault is that Informix is doing some\nspecial things when 0 is inserted into the SERIAL column type. By doing\ndefaults and using that, we are being more consistent. With the Informix\nsolution, we are losing information.\n\nIt is probably a good argument _not_ to implement the Informix\nsleight-of-hand.\n\nHowever, I also see your huge problem because we don't document the\nSERIAL, and we don't allow zero to trigger a nextval(). Very tough.\n\n\n> > Yes, the SERIAL gets lost once it is created. This can cause confusion\n> > because doing a \\dt on the table shows it as an INT4 with DEFAULT, and\n> > not a serial. This can confuse people. I remember someone saying we\n> > would need to keep the SERIAL understanding around so we would use it\n> > for pg_dump, but I don't remember why we needed to do that.\n> \n> This is odd actually. 
I can't see why you'd need to do it either, since\n> you must already have the information you need to recreate the thing.\n> \n> The confusion though is not that I can't work out it's a serial, but\n> that a program can't work out it's a serial.\n\nSERIAL was implemented as a nice workaround to save people from\ndefining a sequence and defining a default nextval(). I think I may\nhave suggested it because of my Informix background.\n\nThe issue is that SERIAL is just a shortcut. It doesn't have any\ninternal representation. It would need one only for pg_dump and for\nyour use, and I am not sure that is warranted. Other people would have\nto agree that keeping the SERIAL as its own type is good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jan 2000 11:29:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" } ]
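A minimal sketch of the SERIAL shorthand discussed in the thread above. The table, column, and sequence names are illustrative only, and the expansion is approximate for the 6.5-era behavior being described:

    CREATE TABLE mytable (id SERIAL, name text);
    -- behaves roughly as if one had written:
    CREATE SEQUENCE mytable_id_seq;
    CREATE TABLE mytable (id int4 DEFAULT nextval('mytable_id_seq') NOT NULL, name text);
    CREATE UNIQUE INDEX mytable_id_key ON mytable (id);

    -- after an insert that lets the default fire, the assigned key can be
    -- read back in the same session:
    INSERT INTO mytable (name) VALUES ('Hello');
    SELECT currval('mytable_id_seq');

This also illustrates the temporary-table complaint above: once the shorthand has been expanded, mytable_id_seq is an independent object, so dropping the table does not drop the sequence.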
[ { "msg_contents": "This message was sent from Geocrawler.com by \"Adam Walczykiewicz\" <[email protected]>\nBe sure to reply to that address.\n\nHi!\n\nI've tried to create function in plpgsql in \npostgresql 6.5 (on Linux Suse 6.2).\n: i file.pl\nIn response I got on the screen an error message :\n\"gReadData()-- backend closed the channel \nunexpectadly...\"\nI was back in shell.\nIn /var/lib/pgsql/pgserver.log I saw :\nFATAL1 : btree: failed to add item to the page\nThe next thing I did was to cut some last 3 lines \n(those beginning with substr(...))\nAnd I run again\n: i file.pl\nIn response I got again on the screen an error \nmessage :\n\"gReadData()-- backend closed the channel \nunexpectadly...\"\nI was back in shell.\nIn /var/lib/pgsql/pgserver.log I saw :\nFATAL1 : my bits moved right off to the end of \nthe world!\nAfter all of it I cut some more lines (I thougt \nthat the function was\nto long to write it down in pg_proc).\nAnd execute once again:\n: i file.pl\nand it works.\nNext I add some more lines and start again to \ncreate this function and\nit failed again with the same effects. \n\n\nWhy it hapenned?? What can I do to solve that \nproblem.\nThanks for any help!!!!\nRegards\nAdam Walczykiewicz \n([email protected]) \n\n(file.pl)\ndrop function insklient(text);\ncreate function insklient(text) returns text as '\ndeclare\nkl_wa klient.wa%TYPE;\nkl_typk klient.typk%TYPE;\nkl_czas_od_ob klient.czas_od_ob%TYPE;\nkl_plec klient.plec%TYPE;\nkl_nazwisko klient.nazwisko%TYPE;\nkl_imie klient.imie%TYPE;\nkl_imied klient.imied%TYPE;\nkl_pesel klient.pesel%TYPE;\nkl_nip klient.nip%TYPE;\nkl_i_ojca klient.i_ojca%TYPE;\nkl_nazwisko_r klient.nazwisko_r%TYPE;\nkl_data_ur klient.data_ur%TYPE;\nkl_tel_dom klient.tel_dom%TYPE;\nkl_kod_p klient.kod_p%TYPE;\nkl_miasto klient.miasto%TYPE;\nkl_ulica klient.ulica%TYPE;\nkl_kr_tel klient.kr_tel%TYPE;\nkl_kr_kod_p klient.kr_kod_p%TYPE;\nkl_kr_miasto klient.kr_miasto%TYPE;\nkl_kr_ulica klient.kr_ulica%TYPE;\nkl_tel_pr klient.tel_pr%TYPE;\nkl_nzprac klient.nzprac%TYPE;\nkl_zp_kod klient.zp_kod_p%TYPE;\nkl_zp_miasto klient.zp_miasto%TYPE;\nkl_zp_ulica klient.zp_ulica%TYPE;\nkl_uwagi1 klient.uwagi1%TYPE;\nkl_uwagi2 klient.uwagi2%TYPE;\nstr text;\nkl_serial int4;\nbegin\n\t--str := $1;\n\tkl_wa := substr(str,1,textpos(str,'','')-\n1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_typk := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_czas_od_ob := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_plec := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_nazwisko := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_imie := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_imied := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_pesel := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_nip := substr(str,1,textpos(str,'','')-\n1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_i_ojca := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_nazwisko_r := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_data_ur := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_tel_dom := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_kod_p := 
substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_miasto := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_ulica := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')+1);\n\tkl_kr_tel := substr(str,1,textpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')-1);\n\tkl_kr_miasto := substr(str,1,texpos\n(str,'','')-1);\n\tstr := substr(str,textpos(str,'','')-1);\n\tkl_kr_ulica := substr(str,1,textpos\n(str,'','')-1);\n\tkl_tel_pr := substr(str,1,textpos\n(str,'','')-1);\n\treturn ''rrrr'';\nend;\n' language 'plpgsql';\n\nAfter executing that file (so similar to file.pl \nbut shorter) the function\nwas created without any errors. I did it again \nand it works.\n(file2.pl)\n-- invocation: select ia('W^45^34');\ndrop function insdokument(text);\nCREATE FUNCTION insdokument(text) RETURNS int4 \nAS '\nDECLARE\ndok_wa dokument.wa%TYPE;\ndok_nrk dokument.nrk%TYPE;\ndok_rodzaj dokument.rodzaj%TYPE;\ndok_seria dokument.seria%TYPE;\ndok_numer dokument.numer%TYPE;\ndok_uwagi1 dokument.uwagi1%TYPE;\ndok_uwagi2 dokument.uwagi2%TYPE;\nstr text;\ndok_serial int4;\nbegin\n\tstr := $1;\n\tdok_wa := substr(str,1,textpos(str,''^'')-\n1); \n\tstr := substr(str,textpos(str,''^'')+1);\n\tdok_nrk := substr(str,1,textpos\n(str,''^'')-1);\n\tstr := substr(str,textpos(str,''^'')+1);\n\tdok_rodzaj := substr(str,1,textpos\n(str,''^'')-1);\n\tstr := substr(str,textpos(str,''^'')+1);\n\tdok_seria := substr(str,1,textpos\n(str,''^'')-1);\n\tstr := substr(str,textpos(str,''^'')+1);\n\tdok_numer := substr(str,1,textpos\n(str,''^'')-1);\n\tstr := substr(str,textpos(str,''^'')+1);\n\tdok_uwagi1 := substr(str,1,textpos\n(str,''^'')-1);\n\tstr := substr(str,textpos(str,''^'')+1);\n\tdok_uwagi2 := substr(str,1,textpos\n(str,''^'')-1);\n\tstr := substr(str,textpos(str,''^'')+1);\n\tinsert into dokument\n(wa,nrk,rodzaj,seria,numer,uwagi1,uwagi2) values\n(dok_wa,dok_nrk,dok_rodzaj,dok_seria,dok_numer,dok\n_uwagi1,dok_uwagi2);\n\tselect last_value into dok_serial from \ndokument_nr_seq;\n\treturn dok_serial;\nend;\n' language 'plpgsql';\n\n\n\nGeocrawler.com - The Knowledge Archive\n", "msg_date": "Thu, 6 Jan 2000 06:40:07 -0800", "msg_from": "\"Adam Walczykiewicz\" <[email protected]>", "msg_from_op": true, "msg_subject": "btree: failed to add item to " }, { "msg_contents": "Adam Walczykiewicz wrote:\n\n> Hi!\n>\n> I've tried to create a function in plpgsql in\n> [...]\n> FATAL 1: my bits moved right off to the end of\n> the world!\n> After all of it I cut some more lines (I thought\n> that the function was\n> too long to write it down in pg_proc)\n> And executed once again:\n> \\i file.pl\n> and it works.\n\n Surely you ran into the pg_proc_prosrc_index problem with it.\n\n This is fixed in the CURRENT tree. I've placed the patch against v6.5.2 onto the FTP server now.\n Grab it from\n\n\n ftp://ftp.postgresql.org/pub/patches/v6.5/\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n\n", "msg_date": "Thu, 06 Jan 2000 16:40:30 +0100", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] btree: failed to add item to" }, { "msg_contents": "Adam, I think you are running into the same problem with overly large\nprocedure definitions that's been discussed here recently. In 6.5.*\nit's not safe to create a procedure def that's more than 2700 bytes.\nWorkaround: split your code into smaller functions.\n\n7.0 will be better...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Jan 2000 11:43:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] btree: failed to add item to " } ]
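Tom's workaround can be taken quite far with the function above, since its body is one long chain of identical substr()/textpos() steps. A sketch of one possible split follows; the helper name field_n is invented for this example, not taken from the thread:

    create function field_n(text, text, int4) returns text as '
    declare
        str text;
        i int4;
    begin
        str := $1;
        i := 1;
        -- skip past the first (n - 1) delimiters; assumes n is within range
        while i < $3 loop
            str := substr(str, textpos(str, $2) + 1);
            i := i + 1;
        end loop;
        -- return up to the next delimiter, or the remainder for the last field
        if textpos(str, $2) > 0 then
            return substr(str, 1, textpos(str, $2) - 1);
        end if;
        return str;
    end;
    ' language 'plpgsql';

The caller then shrinks to one short line per column, for example dok_nrk := field_n($1, ''^'', 2); inside another function body, which keeps each procedure definition comfortably under the size limit Tom mentions.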
[ { "msg_contents": "On Thu, 6 Jan 2000, Don Baccus wrote:\n\n> At 12:49 PM 1/6/00 +0000, Rod Chamberlin wrote:\n> \n> >\t4/\tInformix outer join syntax\n> >\t\to\tinformix uses outer joins of the form\n> >\t\t\tSELECT * FROM a, outer b where a.nr = b.nr\n> >\t\t\tThis will require some post-processing to determine\n> >\t\t\tthe actual join conditions.\n> \n> Rather than go blow-by-blow, why should Postgres adopt (say) Informix\n> syntax vs. Sybase or Oracle?\n> \n> If Postgres were to adopt a non-standard syntax for a feature like outer\n> joins, wouldn't it make more sense to pick the syntax used by the market\n> leader (Oracle), simply because it would make porting easier for a much\n> larger group of database users?\n> \n> Of course, my REAL feeling is that supporting SQL 92 outer join syntax - which\n> is the approach being taken by the developers - is the right answer.\n> \n> And, of course, that Oracle, Informix and the rest ought to get off their\n> collective asses and support SQL 92. After all, they undoubtably contributed\n> to the development of those standards - I can't believe they didn't fund\n> representatives to the committees.\n> \n> But if one were to want to mimic a commercial DB, one would presumably\n> mimic the market leader...\n\nActually what I'm proposing is more to support mutiple database syntaxes\nwherever possible. The INFORMIX style of outer join (and for that matter\nthe oracle style), are not gramatically exclusive. There is no reason why\nyou should not allow *all* sane outer join syntaxes, apart from the added\ncomplexity in the parser. \n\nThe same is true largely for the other changes I suggested. They are for\nportability with other systems to attempt to minimise the amount of work\nnecessary to migrate a given application.\n\nWhy is this interesting for Informix? Two reasons I can list\noffhand:\n\n1/\tInformix is currently deserting it's customer base of small\nbusiness users, instead trying to concetrate on larger organisations.\nThere are therefore vasts numbers of users crying out for something to\nfill that gap. This I will admit provides a commercial basis for any such\nattempt, since we have already got some of the other tools which\ninformix users will be interested in.\n\n2/\tThe datatypes already tie in much more closely in informix than\nthey do in oracle (I can't speak for any of the other major\ndatabases, I haven't actually looked at the type comparisons). Actually\ntrying creating a useable sensible level of compatability with oracle\nwould take considerably more work than doing the same for informix.\n\n\n.............................Rod\n\n+-----------------------------------------------------------------------------+\n| Rod Chamberlin | [email protected] Tel +44 1703 232345 |\n| Software Engineer | Mob +44 7803 295406 |\n| QueriX | Fax +44 1703 399685 |\n+-----------------------------------------------------------------------------+\n| The views expressed in this document do not necessarily represent those of |\n| the management of QueriX (UK) Ltd. 
|\n+-----------------------------------------------------------------------------+\n\n", "msg_date": "Thu, 6 Jan 2000 15:09:02 +0000 (GMT)", "msg_from": "Rod Chamberlin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> But if one were to want to mimic a commercial DB, one would presumably\n> mimic the market leader...\n\nMarket leader or not, Oracle causes us sufficient pain here that I\nwould never consider it important. But then again, I'm bitter.\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. |\n=====================================================================", "msg_date": "6 Jan 2000 11:08:00 -0500", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "At 12:49 PM 1/6/00 +0000, Rod Chamberlin wrote:\n\n>\t4/\tInformix outer join syntax\n>\t\to\tinformix uses outer joins of the form\n>\t\t\tSELECT * FROM a, outer b where a.nr = b.nr\n>\t\t\tThis will require some post-processing to determine\n>\t\t\tthe actual join conditions.\n\nRather than go blow-by-blow, why should Postgres adopt (say) Informix\nsyntax vs. Sybase or Oracle?\n\nIf Postgres were to adopt a non-standard syntax for a feature like outer\njoins, wouldn't it make more sense to pick the syntax used by the market\nleader (Oracle), simply because it would make porting easier for a much\nlarger group of database users?\n\nOf course, my REAL feeling is that supporting SQL 92 outer join syntax - which\nis the approach being taken by the developers - is the right answer.\n\nAnd, of course, that Oracle, Informix and the rest ought to get off their\ncollective asses and support SQL 92. After all, they undoubtedly contributed\nto the development of those standards - I can't believe they didn't fund\nrepresentatives to the committees.\n\nBut if one were to want to mimic a commercial DB, one would presumably\nmimic the market leader...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Jan 2000 09:23:53 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix\n SQL" }, { "msg_contents": "At 03:09 PM 1/6/00 +0000, Rod Chamberlin wrote:\nI wrote:\n>> Rather than go blow-by-blow,\n\n(I meant blow-by-blow through Rod's list)...\n\n>Actually what I'm proposing is more to support multiple database syntaxes\n>wherever possible. The INFORMIX style of outer join (and for that matter\n>the Oracle style), are not grammatically exclusive. There is no reason why\n>you should not allow *all* sane outer join syntaxes, apart from the added\n>complexity in the parser. \n\nWell, there are reasons, actually. Documentation as well as the parser\nbecomes more complex, and ... well ... it's messy and ugly. \n\n>\n>The same is true largely for the other changes I suggested. 
They are for\n>portability with other systems to attempt to minimise the amount of work\n>necessary to migrate a given application.\n\nIn many cases you could write a pre-processor to bulk-translate stuff\nif you wanted. Indeed, friends and I porting the Ars Digita Community\nsystem have done some of that ourselves (moving it from Oracle).\n\n>Why is this interesting for Informix? Two reasons I can list\n>offhand:\n\n>1/\tInformix is currently deserting its customer base of small\n>business users, instead trying to concentrate on larger organisations.\n>There are therefore vast numbers of users crying out for something to\n>fill that gap. This I will admit provides a commercial basis for any such\n>attempt, since we have already got some of the other tools which\n>Informix users will be interested in.\n\nSo write a portability tool to help them move their stuff.\n\nJust MHO, of course.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Jan 2000 12:31:07 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix\n SQL" } ]
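For a sense of what such a pre-processor would actually emit, the Informix form quoted at the top of this thread maps onto the SQL92 form the developers are targeting. Tables a and b are just the placeholders from Rod's original list, and the equivalence is only claimed for this simple case:

    -- Informix:
    SELECT * FROM a, OUTER b WHERE a.nr = b.nr;
    -- roughly equivalent SQL92:
    SELECT * FROM a LEFT OUTER JOIN b ON a.nr = b.nr;

The translation stays mechanical for two-table queries like this, which is most of what a bulk port needs; the hard cases are the multi-table ones discussed elsewhere in these threads.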
[ { "msg_contents": "\nWell, we've finally gotten what seems to be a working version of this\ngoing, and, so far, I'm quite impressed...\n\nIf you go to http://www.postgresql.org/cgi/search.cgi (URL to change), you\ncan see it in action.\n\nThe WebMaster still has to do formatting work on it, and link it into the\nmain site, and the database is just being populated right now, and ...\n\n... but its a start.\n\nSo far, from what i've seen, its a nice tool ... I like the fact that\nthere is no such thing as \"Search downtime\", since the indexer can run\nwhile ppl are seaching. With ht/Dig, while it was indexing, the databases\nwere down and you couldn't search anything...\n\nAnd, I think, its much faster then the old, but am not sure about that...\n\nOh well, give her a go, but expect changes over the next little while as\nwe integrate it better...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 11:57:52 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "New Search Engine ... UdmSearch" }, { "msg_contents": "On Thu, 6 Jan 2000, The Hermit Hacker wrote:\n> So far, from what i've seen, its a nice tool ... I like the fact that\n> there is no such thing as \"Search downtime\", since the indexer can run\n> while ppl are seaching. With ht/Dig, while it was indexing, the databases\n> were down and you couldn't search anything...\n\n You are certailnly wrong, as htDig has a concept of \"work files\" (option -a).\n\nOleg.\n---- \n Oleg Broytmann http://members.xoom.com/phd2/ [email protected]\n Programmers don't die, they just GOSUB without RETURN.\n\n", "msg_date": "Thu, 6 Jan 2000 16:20:35 +0000 (GMT)", "msg_from": "Oleg Broytmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New Search Engine ... UdmSearch" }, { "msg_contents": "On Thu, 6 Jan 2000, Oleg Broytmann wrote:\n\n> On Thu, 6 Jan 2000, The Hermit Hacker wrote:\n> > So far, from what i've seen, its a nice tool ... I like the fact that\n> > there is no such thing as \"Search downtime\", since the indexer can run\n> > while ppl are seaching. With ht/Dig, while it was indexing, the databases\n> > were down and you couldn't search anything...\n> \n> You are certailnly wrong, as htDig has a concept of \"work files\"\n> (option -a).\n\nAh, okay...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 12:42:37 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New Search Engine ... UdmSearch" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> Well, we've finally gotten what seems to be a working version of this\n> going, and, so far, I'm quite impressed...\n>\n> If you go to http://www.postgresql.org/cgi/search.cgi (URL to change), you\n> can see it in action.\n\nSite-wide search for pgsql yielded no results moments ago.\n\nCheers,\nEd Loehr\n\n", "msg_date": "Thu, 06 Jan 2000 14:30:39 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New Search Engine ... 
UdmSearch" }, { "msg_contents": "Ed Loehr wrote:\n\n> The Hermit Hacker wrote:\n>\n> > Well, we've finally gotten what seems to be a working version of this\n> > going, and, so far, I'm quite impressed...\n> >\n> > If you go to http://www.postgresql.org/cgi/search.cgi (URL to change), you\n> > can see it in action.\n>\n> Site-wide search for pgsql yielded no results moments ago.\n\nCorrection: Search in hackers/general for pgsql yields no results.\n\nCheers,\nEd Loehr\n\n", "msg_date": "Thu, 06 Jan 2000 14:32:12 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New Search Engine ... UdmSearch" }, { "msg_contents": "I said: \"There aren't any results for pgsql...\"\n\nThe Hermit Hacker wrote:\n\n> ... and the database is just being populated right now, and ...\n\nD'oh!!! Read more carefully! My apologies for the brain-dead spam...\n\n\n\n", "msg_date": "Thu, 06 Jan 2000 14:45:56 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] New Search Engine ... UdmSearch" }, { "msg_contents": "On Thu, 6 Jan 2000, Ed Loehr wrote:\n\n> I said: \"There aren't any results for pgsql...\"\n> \n> The Hermit Hacker wrote:\n> \n> > ... and the database is just being populated right now, and ...\n> \n> D'oh!!! Read more carefully! My apologies for the brain-dead spam...\n\nya, still playing with things...am currently doing the docs directory, and\nthen will start hitting the mailing lists...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 17:30:14 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] New Search Engine ... UdmSearch" }, { "msg_contents": "The Hermit Hacker wrote:\n\n> Well, we've finally gotten what seems to be a working version of this\n> going, and, so far, I'm quite impressed...\n>\n> If you go to http://www.postgresql.org/cgi/search.cgi (URL to change), you\n> can see it in action.\n>\n\nSo I search the hackers mailing list for 'outer join' and get no results. I\nsearch the hackers mailing list for 'join' and still get\n\n Sorry, but search returned no\nresults.\n\n Try to produce less restrictive\nsearch query.\n\nAm I missing something?\n\nAdriaan\n\n", "msg_date": "Fri, 07 Jan 2000 11:22:56 +0200", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New Search Engine ... UdmSearch" }, { "msg_contents": "On Fri, 7 Jan 2000, Adriaan Joubert wrote:\n\n> The Hermit Hacker wrote:\n> \n> > Well, we've finally gotten what seems to be a working version of this\n> > going, and, so far, I'm quite impressed...\n> >\n> > If you go to http://www.postgresql.org/cgi/search.cgi (URL to change), you\n> > can see it in action.\n> >\n> \n> So I search the hackers mailing list for 'outer join' and get no results. I\n> search the hackers mailing list for 'join' and still get\n> \n> Sorry, but search returned no\n> results.\n> \n> Try to produce less restrictive\n> search query.\n> \n> Am I missing something?\n\nNo, the database just isn't fully populated yet. 
Due to the amount of\ndata that has to go in, it takes a long time to parse.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN: $24.95/mo or less - 56K Dialup: $17.95/mo or less at Pop4\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 7 Jan 2000 06:22:00 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New Search Engine ... UdmSearch" }, { "msg_contents": "On Fri, 7 Jan 2000, Adriaan Joubert wrote:\n\n> The Hermit Hacker wrote:\n> \n> > Well, we've finally gotten what seems to be a working version of this\n> > going, and, so far, I'm quite impressed...\n> >\n> > If you go to http://www.postgresql.org/cgi/search.cgi (URL to change), you\n> > can see it in action.\n> >\n> \n> So I search the hackers mailing list for 'outer join' and get no results. I\n> search the hackers mailing list for 'join' and still get\n> \n> Sorry, but search returned no\n> results.\n> \n> Try to produce less restrictive\n> search query.\n> \n> Am I missing something?\n\nYa, the note right at the bottom of my announcement that states that the\ndatabase is currently being populated :) It's currently up to 8500+\ndocuments indexed, and pgsql-sql and docs are the ones currently\ndone...-hackers is next ...\n\n\n", "msg_date": "Fri, 7 Jan 2000 08:59:53 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New Search Engine ... UdmSearch" }, { "msg_contents": ">\n>\n> Ya, the note right at the bottom of my announcement that states that the\n> database is currently being populated :) It's currently up to 8500+\n> documents indexed, and pgsql-sql and docs are the ones currently\n> done...-hackers is next ...\n>\n\nOoops, sorry, feel a right idiot now.... Let us know when it is all there and I'll\nhave another look.\n\nAdriaan\n\n", "msg_date": "Fri, 07 Jan 2000 17:03:47 +0200", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New Search Engine ... UdmSearch" } ]
[ { "msg_contents": "\nWorking on a database that has a table that looks like:\n\nTable = daily\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| n | int4 | 4 |\n| date | date | 4 |\n| cookie | char() | 1 |\n| id | text | var |\n+----------------------------------+----------------------------------+-------+\nIndices: zdaily_cookie\n zdaily_date_n\n\nWant to create a third index on id, so run:\n\nCREATE INDEX zdaily_id ON daily (id);\n\nEventually, the CREATE INDEX just crashes...\n\ndaily is big:\n\n-rw------- 1 postgres postgres 979255296 Jan 6 02:04 daily\n\nBut, the other two indices are fine:\n\n-rw------- 1 postgres postgres 241123328 Jan 6 02:26 zdaily_date_n\n-rw------- 1 postgres postgres 229220352 Jan 6 02:13 zdaily_cookie\n\nWe're currently using v6.5.1, since it used to work there, but have tried\nwith with v6.5.3 also, same results...\n\nI've thought about out of disk space problems, but the file system that\nits on has over 2gig free on it, and the table is <1gig to start with...\n\nI'm running it again right now, after running a vacuum on it, just in case\nthat picked up something, but the vacuum looks clean:\n\nwebusers=> vacuum verbose daily;\nNOTICE: --Relation daily--\nNOTICE: Pages 119538: Changed 0, Reapped 0, Empty 0, New 0; Tup 11358404: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 53, MaxLen 3959; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 9/2 sec.\nNOTICE: Index zdaily_date_n: Pages 29434; Tuples 11358404. Elapsed 2/18 sec.\nNOTICE: Index zdaily_cookie: Pages 27981; Tuples 11358404. Elapsed 3/18 sec.\n\nThe index is being created on disk, but doesn't grow beyond the 16k shown\nhere:\n\nls -lt *daily*\n-rw------- 1 postgres postgres 16384 Jan 6 13:32 zdaily_id\n-rw------- 1 postgres postgres 241123328 Jan 6 02:26 zdaily_date_n\n-rw------- 1 postgres postgres 229220352 Jan 6 02:13 zdaily_cookie\n-rw------- 1 postgres postgres 979255296 Jan 6 02:04 daily\n\nWe have no \"verbose\" mode for a create index, do we? Something that would\nnarrow down a record whose 'id' field has bad data in it?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 14:34:40 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Still investigating, but ... CREATE INDEX problem in v6.5.x ..." } ]
[ { "msg_contents": "This oddity became apparent during the numeric test:\n\ntest=> create table atable (an int, some text);\nCREATE\ntest=> \\d atable\n Table \"atable\"\n Attribute | Type | Extra \n-----------+------+-------\n an | int4 | \n some | text | \n\ntest=> vacuum analyze atable;\nNOTICE: Vacuum: table not found\nVACUUM\n\n>From today's cvs. Have I missed something?\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 6 Jan 2000 20:27:00 +0000 (GMT)", "msg_from": "\"Patrick Welche,SCC,ext.35710,\" <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum problem" } ]
[ { "msg_contents": "line 50 of pg_dumpall (cvs of today) has\n\npsql -l -A -q -t| tr '|' ' ' | grep -v '^template1 ' | \\\n\npsql -l -A -q -t| tr '|' ' '\n\nwill return a list of databases beginning with the two lines\n\nList of databases\nDatabase Owner\n\nand ending with\n\n(n rows)\n\nSo, should psql's -q option suppress these three lines, or should pg_dumpall\nget rid of them?\n\n(We don't want to connect to database \"List\" as user \"of\" with encoding\n\"databases\")\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 6 Jan 2000 20:48:26 +0000 (GMT)", "msg_from": "\"Patrick Welche,SCC,ext.35710,\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dumpall prob" }, { "msg_contents": "> line 50 of pg_dumpall (cvs of today) has\n> \n> psql -l -A -q -t| tr '|' ' ' | grep -v '^template1 ' | \\\n> \n> psql -l -A -q -t| tr '|' ' '\n> \n> will return a list of databases beginning with the two lines\n\nOK, this is an artifact of the new psql format. I have changed to the\ncode to be:\n\n\tpsql -l -A -q -t | grep '|' | tr '|' ' ' | sed -n '2,$p' | \\\n\tgrep -v '^template1 ' | \\\n\nThis removes all lines with no pipe, changes pipe to space, and removes\nthe first line and the template1 line from the output. This should work.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 16:17:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dumpall prob" } ]
[ { "msg_contents": ">> > Yes, OUTER is an Informix-ism. Oracle uses *=. I think the first is\n>> > easier to add and makes more sense for us. *= could be defined by\n>> > someone as an operator, and overloading our already complex operator\n>> > code to do *= for OUTER may be too complex for people to understand.\n>> > \n>> > It would be:\n>> > \n>> > \tSELECT *\n>> > \tFROM tab1, OUTER tab2\n>> > \tWHERE tab1.col1 = tab2.col2\n>> \n>> What about >2 table joins? Wish I had my book here, but I though tyou\n>> could do multiple OUTER joins, no?\n\nOracle uses a syntax which I quite like. The query above would become:\n\nSELECT *\nFROM tab, tab2\nWHERE tab1.col1 = tab2.col2 (+)\n\nI've actually used queries something like this:\n\nSELECT blah, blah, blah\nFROM t1, t2, t3, t4\nWHERE t1.start_date BETWEEN t2.start_date (+) AND t2.end_date (+)\nAND t1.y = t2.y (+)\nAND t3.x (+) = t1.x\nAND t3.y (+) = t1.y\nAND t4.x = t1.x;\n\nFor example...\n\nI realise that this is not standard, but it's easy to read, and easy to\ndevelop.\n\nThe problem with OUTER is: OUTER on which relationship? Does this matter?\nI haven't thought about it hugely, but it may not make sense when you try to\ndo this:\n\nSELECT * \nFROM t1, OUTER t2, t3\nWHERE t1.x = t2.x\nAND t2.y = t3.y\n\nWhich is the OUTER join? Outer joining to t1 and inner joining to t3 gives\n(I think) a different result to inner joining to t1 and outer joining to t3.\nThen you have to start creating language rules to help determine which join\nbecomes the outer join, and it becomes a bit of a mess. With Oracle's\nnotation, it's pretty clear (I think anyway).\n\nHope this adds some fuel to the process...\n\nMikeA\n", "msg_date": "Thu, 6 Jan 2000 23:08:33 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> SELECT blah, blah, blah\n> FROM t1, t2, t3, t4\n> WHERE t1.start_date BETWEEN t2.start_date (+) AND t2.end_date (+)\n> AND t1.y = t2.y (+)\n> AND t3.x (+) = t1.x\n> AND t3.y (+) = t1.y\n> AND t4.x = t1.x;\n> \n> For example...\n> \n> I realise that this is not standard, but it's easy to read, and easy to\n> develop.\n> \n> The problem with OUTER is: OUTER on which relationship? Does this matter?\n> I haven't thought about it hugely, but it may not make sense when you try to\n> do this:\n> \n> SELECT * \n> FROM t1, OUTER t2, t3\n> WHERE t1.x = t2.x\n> AND t2.y = t3.y\n> \n> Which is the OUTER join? Outer joining to t1 and inner joining to t3 gives\n> (I think) a different result to inner joining to t1 and outer joining to t3.\n> Then you have to start creating language rules to help determine which join\n> becomes the outer join, and it becomes a bit of a mess. With Oracle's\n> notation, it's pretty clear (I think anyway).\n\nThis must be why the ANSI standard requires you to specify the join when\ndoing outer. Thomas says we are going only with ANSI syntax, and I can\nsee now why OUTER is just looking for problems.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 17:04:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "\"Ansley, Michael\" wrote:\n> \n> >> > Yes, OUTER is an Informix-ism. 
Oracle uses *=. I think the first is\n> >> > easier to add and makes more sense for us. *= could be defined by\n> >> > someone as an operator, and overloading our already complex operator\n> >> > code to do *= for OUTER may be too complex for people to understand.\n> >> >\n> >> > It would be:\n> >> >\n> >> > SELECT *\n> >> > FROM tab1, OUTER tab2\n> >> > WHERE tab1.col1 = tab2.col2\n> >>\n> >> What about >2 table joins? Wish I had my book here, but I thought you\n> >> could do multiple OUTER joins, no?\n> \n> Oracle uses a syntax which I quite like. The query above would become:\n> \n> SELECT *\n> FROM tab1, tab2\n> WHERE tab1.col1 = tab2.col2 (+)\n> \n> I've actually used queries something like this:\n> \n> SELECT blah, blah, blah\n> FROM t1, t2, t3, t4\n> WHERE t1.start_date BETWEEN t2.start_date (+) AND t2.end_date (+)\n> AND t1.y = t2.y (+)\n> AND t3.x (+) = t1.x\n> AND t3.y (+) = t1.y\n> AND t4.x = t1.x;\n> \n> For example...\n> \n> I realise that this is not standard, but it's easy to read, and easy to\n> develop.\n\nI completely agree that Oracle has got it in a very clear, readable and\nunderstandable way.\n\nWhen I used MS Access (supposedly ANSI) I always created the outer join\nqueries \nusing the graphical tool and also had to examine it using said tool, because\nall \nthese LEFT OUTER JOIN ON .... introduced too much line noise for me to be able \nto understand what was actually meant.\n\nOTOH, just marking the \"outer\" side with (+) was easy both to read and\nwrite.\n\nSo I would very much like to have the Oracle syntax for outer joins as well.\n\nIMHO the ANSI standard (as anything designed by a committee) is not always the\nbest \nway to do things.\n\n--------------\nHannu\n", "msg_date": "Fri, 07 Jan 2000 01:47:06 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "At 11:08 PM 1/6/00 +0200, Ansley, Michael wrote:\n\n>>> What about >2 table joins? Wish I had my book here, but I thought you\n>>> could do multiple OUTER joins, no?\n>\n>Oracle uses a syntax which I quite like. The query above would become:\n>\n>SELECT *\n>FROM tab1, tab2\n>WHERE tab1.col1 = tab2.col2 (+)\n>\n>I've actually used queries something like this:\n>\n>SELECT blah, blah, blah\n>FROM t1, t2, t3, t4\n>WHERE t1.start_date BETWEEN t2.start_date (+) AND t2.end_date (+)\n>AND t1.y = t2.y (+)\n>AND t3.x (+) = t1.x\n>AND t3.y (+) = t1.y\n>AND t4.x = t1.x;\n\nGood...you saved me the trouble of digging out some examples from the\ncode I'm porting, which occasionally do similar things :)\n\nI think the ANSI SQL 92 equivalent is something like:\n\nselect ...\nfrom t1 inner join t4 on t1.x=t4.x,\n t2 left outer join t1\n on t2.y=t1.y and\n (t1.start_date between t2.start_date and t1.start_date),\n t3 left outer join t1 on t3.x=t1.x and t3.y = t1.y;\n\nI've never used an ANSI SQL 92 compliant RDBMS, I'm not sure\nif t2/t1 become ambiguous and need to be given different names\nusing \"as foo\" in each case, etc. Actually, you would in \norder to build the target list unambiguously I guess...\n\nBut that's the general gist. 
I think - Thomas, am I at all\nclose?\n\nOf course, you can continue to write the inner join in the\nold way:\n\nselect ...\nfrom t1 inner join t2 on t1.x=t2.x;\n\nand\n\nselect ...\nfrom t1,t2 where t1.x=t2.x;\n\nwhere the last form of the inner join might be considered an\noptimization of a cross-join restricted by a boolean expression\nin the where clause rather than a proper inner join. In other\nwords, the two queries return the same rows and one would be \nvery disappointed if the second form formed the cartesian product\nof t1 and t2 and then filtered the resulting rows rather than do\nan inner join!\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Jan 2000 19:18:42 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Enhancing PGSQL to be compatible with Informix\n SQL" }, { "msg_contents": "> select ...\n> from t1 inner join t4 on t1.x=t4.x,\n> t2 left outer join t1\n> on t2.y=t1.y and\n> (t1.start_date between t2.start_date and t1.start_date),\n> t3 left outer join t1 on t3.x=t1.x and t3.y = t1.y;\n\nLet's be honest, folks. This is almost unreadable. I think we will\nneed some simpler way to access _outer_ in addition to the ANSI way.\n\nI can't imagine how I would answer a question: \"How do I do an ANSI\nouter join\". It would need its own FAQ page.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 6 Jan 2000 22:25:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "At 01:47 AM 1/7/00 +0200, Hannu Krosing wrote:\n\n>IMHO the ANSI standard (as anything designed by a committee) is not always\nthe\n>best \n>way to do things.\n\nWell, generally standards committees don't design features. They're\nusually designed by one or two people, then submitted to the committee\nfor discussion and eventual adoption or rejection.\n\nMy understanding from reading Date is that one reason for not adopting\nthe common vendor hacks for outer joins is that outer joins aren't\nassociative and the result of complicated expressions in some cases\nwill depend on the order in which the RDBMS' optimizer chooses to\nexecute them.\n\nPutting on my compiler-writer hat, I can see where having joins\nexplicitly declared in the \"from\" clauses rather than derived from\nan analysis of the \"from\" and \"where\" clauses might well simplify\nthe development of a new SQL 92 RDBMS if one were to start from\nscratch. It's cleaner, IMO. This doesn't apply to Postgres,\nsince the outer joins are being shoe-horned into existing data\nstructures.\n\nOf course, I speak as someone without\na lot of experience writing Oracle or Informix SQL. 
If you're used\nto Oracle, then it's not surprising you find its means of specifying\nan outer join the most natural and easiest to understand...\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 06 Jan 2000 19:32:32 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix\n SQL" }, { "msg_contents": "On Thu, 6 Jan 2000, Bruce Momjian wrote:\n\n> > select ...\n> > from t1 inner join t4 on t1.x=t4.x,\n> > t2 left outer join t1\n> > on t2.y=t1.y and\n> > (t1.start_date between t2.start_date and t1.start_date),\n> > t3 left outer join t1 on t3.x=t1.x and t3.y = t1.y;\n> \n> Let's be honest, folks. This is almost unreadable. I think we will\n> need some simpler way to access _outer_ in addition to the ANSI way.\n> \n> I can't imagine how I would answer a question: \"How do I do an ANSI\n> outer join\". It would need its own FAQ page.\n\nHow do the \"books\" talk about JOINs? What is the semi-standard syntax\nthat is generally used in samples?\n\n", "msg_date": "Fri, 7 Jan 2000 00:00:41 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> >SELECT blah, blah, blah\n> >FROM t1, t2, t3, t4\n> >WHERE t1.start_date BETWEEN t2.start_date (+) AND t2.end_date (+)\n> >AND t1.y = t2.y (+)\n> >AND t3.x (+) = t1.x\n> >AND t3.y (+) = t1.y\n> >AND t4.x = t1.x;\n> I think the ANSI SQL 92 equivalent is something like:\n> select ...\n> from t1 inner join t4 on t1.x=t4.x,\n> t2 left outer join t1\n> on t2.y=t1.y and\n> (t1.start_date between t2.start_date and t1.start_date),\n> t3 left outer join t1 on t3.x=t1.x and t3.y = t1.y;\n\nHmm. I'm not sure what the Oracle example actually gives as a result,\nand I find the syntax as confusing as others find SQL92 syntax ;)\n\n> I've never used an ANSI SQL 92 compliant RDBMS, I'm not sure\n> if t2/t1 become ambiguous and need to be given different names\n> using \"as foo\" in each case, etc. Actually, you would in\n> order to build the target list unambiguously I guess...\n\nOnce two tables are mentioned in an \"outer join\", then individual\ncolumns can no longer be qualified by the original table names.\nInstead, you are allowed to put table and column aliases on the join\nexpression:\n\nselect a, b, c, z\n from (t1 left join t2 using (x)) as j1 (a, b, c)\n right join t3 on (j1.a = t3.y);\n\n(I think I have this right; I'm doing it from memory and have been\naway from it for a little while).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 07 Jan 2000 06:56:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> > select ...\n> > from t1 inner join t4 on t1.x=t4.x,\n> > t2 left outer join t1\n> > on t2.y=t1.y and\n> > (t1.start_date between t2.start_date and t1.start_date),\n> > t3 left outer join t1 on t3.x=t1.x and t3.y = t1.y;\n> Let's be honest, folks. This is almost unreadable. I think we will\n> need some simpler way to access _outer_ in addition to the ANSI way.\n\nNonsense! Especially since this isn't quite SQL92. 
Here is an SQL92\nquery (I think ;) :\n\nselect a, b, c\n from (t1 left join t2 using (x)) as j1 (a, b)\n right join t3 on (j1.a = t3.y);\n\nSo you do a left join with t1 and t2, name the resulting intermediate\ntable and columns, and then do a right join of the result with t3. I\ncan't see other syntaxes being very much more obvious, particularly\nwrt predicting the actual result. Just because a query looks simpler\ndoesn't necessarily mean that the syntax always produces a more robust\nquery.\n\n> I can't imagine how I would answer a question: \"How do I do an ANSI\n> outer join\". 
It would need its own FAQ page.\n\nWell, *you're* the one writing the book :))\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 07 Jan 2000 07:24:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "On Fri, 7 Jan 2000, Thomas Lockhart wrote:\n\n> > > select ...\n> > > from t1 inner join t4 on t1.x=t4.x,\n> > > t2 left outer join t1\n> > > on t2.y=t1.y and\n> > > (t1.start_date between t2.start_date and t1.start_date),\n> > > t3 left outer join t1 on t3.x=t1.x and t3.y = t1.y;\n> > Let's be honest, folks. This is almost unreadable. I think we will\n> > need some simpler way to access _outer_ in addition to the ANSI way.\n> \n> Nonsense! Especially since this isn't quite SQL92. Here is an SQL92\n> query (I think ;) :\n> \n> select a, b, c\n> from (t1 left join t2 using (x)) as j1 (a, b)\n> right join t3 on (j1.a = t3.y);\n> \n> So you do a left join with t1 and t2, name the resulting intermediate\n> table and columns, and then do a right join of the result with t3. I\n> can't see other syntaxes being very much more obvious, particularly\n> wrt predicting the actual result. Just because a query looks simpler\n> doesn't necessarily mean that the syntax always produces a more robust\n> query.\n> \n\nThis always strikes me as very much an each-to-his-own situation. I\ngenerally prefer the Oracle syntax myself; whilst there are potential\nambiguities (which Oracle gets around by not executing ambiguous queries),\nit's cleaner to write.\n\nThat said I don't particularly like SQL itself; if I wanted to program\nCOBOL I'd get a COBOL compiler:). The SQL92 syntax is more of an SQLism\nthan anything else, and the extra \"English\" words actually tend to obscure\nthe details of the join.\n\nIt certainly makes sense to use the SQL92 syntax; it is more important to\nbe compatible with the standards than anything else, but I would still\nargue that a more straightforward syntax in parallel is\nprobably worthwhile. \n\n> > I can't imagine how I would answer a question: \"How do I do an ANSI\n> > outer join\". 
I'm not sure what the Oracle example actually gives as a result,\n>and I find the syntax as confusing as others find SQL92 syntax ;)\n\nMe too :) As I pointed out in an earlier message, fortunately most\nof the outer join examples I've seen are simpler, and more readable\nin either style.\n\nThanks, BTW, for the status update, it's about what I gathered from\nlooking at the code.\n\n>Once two tables are mentioned in an \"outer join\", then individual\n>columns can no longer be qualified by the original table names.\n>Instead, you are allowed to put table and column aliases on the join\n>expression:\n>\n>select a, b, c, z\n> from (t1 left join t2 using (x)) as j1 (a, b, c)\n> right join t3 on (j1.a = t3.y);\n>\n>(I think I have this right; I'm doing it from memory and have been\n>away from it for a little while).\n\nYeah, I think this is right, I'd seen in the syntax where a general\ntable reference can be a join and hadn't thought about being able\nto table alias the entire result. This is useful, actually. Without\nthe column aliases something like:\n\nselect j1.a, j1.b, j2.foo ...\n\nmakes it clear as to which join a column comes from. This clarity's\noften lacking in the Oracle-style queries, as I've noticed when I\ndecipher them during my port-to-Postgres work. You need to unwind\nwhat comes from where, and often have to look at the data model to\nfigure it out if the names are unique to the different tables and\nnot fully qualified as \"table_name.column_name\".\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 07 Jan 2000 13:45:58 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix\n SQL" } ]
[ { "msg_contents": "% pg_dump -sv regression > foo\n...\n-- dumping out user-defined procedural languages \n-- dumping out user-defined functions \nSegmentation fault (core dumped)\n\n#0 0x4810dc02 in vfprintf ()\n#1 0x480c4963 in vsprintf ()\n#2 0x48078e8a in appendPQExpBuffer (str=0x616e746f, \n fmt=0x3d20656d <Address 0x3d20656d out of bounds>) at pqexpbuffer.c:197\n#3 0x6c732065 in ?? ()\n\n% tail foo\n end if;\n return new;\nend;\n' LANGUAGE 'plpgsql';\nCREATE FUNCTION \"tg_iface_biu\" ( ) RETURNS opaque AS '\ndeclare\n sname text;\n sysrec record;\nbegin\n select into sysrec * from\n\nie., it stops in mid flight, the line is\n\n select into sysrec * from system where name = new.sysname;\n\nso it seems that the PQExpBuffer may well be to full ?\n\n-s means dumpSchema(), getFuncs(), dumpFuncs(), dumpOneFunc()\n\npg_dump.c:dumpOneFunc():2346:\n\n appendPQExpBuffer(q, \" ) RETURNS %s%s AS '%s' LANGUAGE '%s';\\n\",\n (finfo[i].retset) ? \" SETOF \" : \"\",\n fmtId(findTypeByOid(tinfo, numTypes, finfo[i].prorettype), false),\n func_def, func_lang);\n\n\nso it cored while printing func_def ?! which is only 487 bytes long..\nRather confused.. Are any of you seeing this?\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 6 Jan 2000 22:56:22 +0000 (GMT)", "msg_from": "\"Patrick Welche,SCC,ext.35710,\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump problem" } ]
[ { "msg_contents": "This looks like a place where the sprintf is being caught with too long a\nstring. Although 487 bytes shouldn't be too long; perhaps the rest of\nwhat's in there is causing sprintf to hooch. The limit is supposed to be\naround 1kB. I'll try changing the fmt style string to a series of appends.\nIt's less efficient, but may work.\n\nThanks...\n\n\nMikeA\n\n\n\n\n-----Original Message-----\nFrom: Patrick Welche,SCC,ext.35710,\nTo: [email protected]\nSent: 00/01/07 12:56\nSubject: [HACKERS] pg_dump problem\n\n% pg_dump -sv regression > foo\n...\n-- dumping out user-defined procedural languages \n-- dumping out user-defined functions \nSegmentation fault (core dumped)\n\n#0 0x4810dc02 in vfprintf ()\n#1 0x480c4963 in vsprintf ()\n#2 0x48078e8a in appendPQExpBuffer (str=0x616e746f, \n fmt=0x3d20656d <Address 0x3d20656d out of bounds>) at\npqexpbuffer.c:197\n#3 0x6c732065 in ?? ()\n\n% tail foo\n end if;\n return new;\nend;\n' LANGUAGE 'plpgsql';\nCREATE FUNCTION \"tg_iface_biu\" ( ) RETURNS opaque AS '\ndeclare\n sname text;\n sysrec record;\nbegin\n select into sysrec * from\n\nie., it stops in mid flight, the line is\n\n select into sysrec * from system where name = new.sysname;\n\nso it seems that the PQExpBuffer may well be to full ?\n\n-s means dumpSchema(), getFuncs(), dumpFuncs(), dumpOneFunc()\n\npg_dump.c:dumpOneFunc():2346:\n\n appendPQExpBuffer(q, \" ) RETURNS %s%s AS '%s' LANGUAGE '%s';\\n\",\n (finfo[i].retset) ? \" SETOF \" : \"\",\n fmtId(findTypeByOid(tinfo, numTypes,\nfinfo[i].prorettype), false),\n func_def, func_lang);\n\n\nso it cored while printing func_def ?! which is only 487 bytes long..\nRather confused.. Are any of you seeing this?\n\nCheers,\n\nPatrick\n\n************\n", "msg_date": "Fri, 7 Jan 2000 01:14:06 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] pg_dump problem" } ]
[ { "msg_contents": "[Ok, I've been in touch with the author of the 'First Major Open Source\nDatabase' article. Here's what he wants to do. Let me know what you\nthink, and correct any misinformation I may have fed him.]\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n-------- Original Message --------\nFrom: Doc Searls <[email protected]>\nSubject: Re: First Major Open Source Database\nTo: Jason Kroll <[email protected]>\nCC: [email protected], [email protected]\n\nTo move this along quickly, I suggest this as a sidebar we can run as \na table in the piece at \nhttp://www2.linuxjournal.com/articles/conversations/010.html ...\n\n----------------\n\nCredit where due\n\nSince this interview went up, the response has been overwhelmingly \npositive. Some readers, however, have urged us to give full credit to \nthe other open source databases that are already out there and have \nprior claims to the \"major\" label. The strongest urgings have come \nfrom PostgreSQL developers, who have provided us with some points and \nlinks that we are happy to pass along here.\n\nThe points:\n\n- University Ingres, developed starting in 1977, qualifies for the \n'First Major Open Source Database' honor. Ingres is the direct \nancestor of PostgreSQL.\n\n- PostgreSQL is at version 6.5.3, and has been open source since the \nbeginning. \"The development is very open, the developers friendly, \nand the code is improving by leaps and bounds,\" writes Lamar Owen, \nRPM Package Maintainer with the PostgreSQL Global Development Group. \nHe says \"PostgreSQL has shipped with RedHat Linux as part of the \n'Official Boxed Set' since RedHat 5.0.\" He also recommends comparing \nRDBMSes by the \"ACID criteria.\" These are: \"Atomicity, Consistency, \nIsolation, Durability.\"\n\n- Hacking database code is not lightweight work. \"Kernel hacking is \nnot a walk in the park, nor is GUI hacking, library hacking, or any \nother tool hacking,\" Owen says, \"But, database hacking is a league \nunto itself....The learning curve for doing back-end database \ndevelopment is the steepest of any project of which I am aware.\"\n\nHere are two useful links:\n\n- The freshmeat.net appindex entry for databases \n<http://www.freshmeat.net/appindex/daemons/database.html>\n\n- PostgreSQL.org's comparison chart <http://www.postgresql.org>\n\nAlert us to more and we'll put them here.\n\n-- Doc Searls\n\n-------------\n\nHere is the same thing, in HTML:\n\n\n<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<html>\n\n\t<head>\n\t\t<title>Credit Where Due</title>\n\t</head>\n\n\t<body>\n\t\t<h2>Credit where due</h2>\n\t\t<p>Since this interview went up, the response has \nbeen overwhelmingly positive. Some readers, however, have urged us to \ngive full credit to the other open source databases that are already \nout there and have prior claims to the &quot;major&quot; label. The \nstrongest urgings have come from PostgreSQL developers, who have \nprovided us with some points and links that we are happy to pass \nalong here.</p>\n\t\t<p>The points:</p>\n\t\t<p>&#151; University Ingres, developed starting in \n1977, qualifies for the 'First Major Open Source Database' honor. \nIngres is the direct ancestor of PostgreSQL.</p>\n\t\t<p>&#151; PostgreSQL is at version 6.5.3, and has \nbeen open source since the beginning. &quot;The development is very \nopen, the developers friendly, and the code is improving by leaps and \nbounds,&quot; writes Lamar Owen, RPM Package Maintainer with the \nPostgreSQL Global Development Group. 
He says &quot;PostgreSQL has \nshipped with RedHat Linux as part of the 'Official Boxed Set' since \nRedHat 5.0.&quot; He also recommends comparing RDBMSes by the \n&quot;ACID criteria.&quot; These are: &quot;Atomicity, Consistency, \nIsolation, Durability.&quot;</p>\n\t\t<p>&#151; Hacking database code is not lightweight \nwork. &quot;Kernel hacking is not a walk in the park, nor is GUI \nhacking, library hacking, or any other tool hacking,&quot; Owen says, \n&quot;But, database hacking is a league unto itself....The learning \ncurve for doing back-end database development is the steepest of any \nproject of which I am aware.&quot;</p>\n\t\t<p>Here are two useful links:</p>\n\t\t<ul>\n\t\t\t<li><a \nhref=\"http:/www.freshmeat.net/ppindex/aemons/atabase.html\">The \nfreshmeat.net appindex entry for databases</a>\n\t\t\t<li><a \nhref=\"http:/www.postgresql.org\">PostgreSQL.org's comparison chart</a>\n\t\t</ul>\n\t\t<p>Alert us to more and we'll put them here.</p>\n\t\t<p>&#151; Doc Searls\n\t</body>\n\n</html>\n\n\n\n----------\n\nDoes that work? If so, let's get it up.\n\nDoc, in the basement of Moscone, in the surreal Macworld where Apple \nstill, amazingly, lives.\n\n----------\nDoc Searls\nSenior Editor, Linux Journal\[email protected]\nhttp://www.linuxjournal.com\nOffice: 544 Oak Park Way, Emerald Hills, CA 94062-4038\nPhone: (650) 361-1324 Cell: (206) 849-9586 Fax: (650) 361-1348\n----------\n", "msg_date": "Thu, 06 Jan 2000 18:29:57 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "\nSounds good to me ...\n\nOn Thu, 6 Jan 2000, Lamar Owen wrote:\n\n> [Ok, I've been in touch with the author of the 'First Major Open Source\n> Database' article. Here's what he wants to do. Let me know what you\n> think, and correct any misinformation I may have fed him.]\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n> -------- Original Message --------\n> From: Doc Searls <[email protected]>\n> Subject: Re: First Major Open Source Database\n> To: Jason Kroll <[email protected]>\n> CC: [email protected], [email protected]\n> \n> To move this along quickly, I suggest this as a sidebar we can run as \n> a table in the piece at \n> http://www2.linuxjournal.com/articles/conversations/010.html ...\n> \n> ----------------\n> \n> Credit where due\n> \n> Since this interview went up, the response has been overwhelmingly \n> positive. Some readers, however, have urged us to give full credit to \n> the other open source databases that are already out there and have \n> prior claims to the \"major\" label. The strongest urgings have come \n> from PostgreSQL developers, who have provided us with some points and \n> links that we are happy to pass along here.\n> \n> The points:\n> \n> - University Ingres, developed starting in 1977, qualifies for the \n> 'First Major Open Source Database' honor. Ingres is the direct \n> ancestor of PostgreSQL.\n> \n> - PostgreSQL is at version 6.5.3, and has been open source since the \n> beginning. \"The development is very open, the developers friendly, \n> and the code is improving by leaps and bounds,\" writes Lamar Owen, \n> RPM Package Maintainer with the PostgreSQL Global Development Group. 
\n> He says \"PostgreSQL has shipped with RedHat Linux as part of the \n> 'Official Boxed Set' since RedHat 5.0.\" He also recommends comparing \n> RDBMSes by the \"ACID criteria.\" These are: \"Atomicity, Consistency, \n> Isolation, Durability.\"\n> \n> - Hacking database code is not lightweight work. \"Kernel hacking is \n> not a walk in the park, nor is GUI hacking, library hacking, or any \n> other tool hacking,\" Owen says, \"But, database hacking is a league \n> unto itself....The learning curve for doing back-end database \n> development is the steepest of any project of which I am aware.\"\n> \n> Here are two useful links:\n> \n> - The freshmeat.net appindex entry for databases \n> <http://www.freshmeat.net/appindex/daemons/database.html>\n> \n> - PostgreSQL.org's comparison chart <http://www.postgresql.org>\n> \n> Alert us to more and we'll put them here.\n> \n> -- Doc Searls\n> \n> -------------\n> \n> Here is the same thing, in HTML:\n> \n> \n> <!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n> <html>\n> \n> \t<head>\n> \t\t<title>Credit Where Due</title>\n> \t</head>\n> \n> \t<body>\n> \t\t<h2>Credit where due</h2>\n> \t\t<p>Since this interview went up, the response has \n> been overwhelmingly positive. Some readers, however, have urged us to \n> give full credit to the other open source databases that are already \n> out there and have prior claims to the &quot;major&quot; label. The \n> strongest urgings have come from PostgreSQL developers, who have \n> provided us with some points and links that we are happy to pass \n> along here.</p>\n> \t\t<p>The points:</p>\n> \t\t<p>&#151; University Ingres, developed starting in \n> 1977, qualifies for the 'First Major Open Source Database' honor. \n> Ingres is the direct ancestor of PostgreSQL.</p>\n> \t\t<p>&#151; PostgreSQL is at version 6.5.3, and has \n> been open source since the beginning. &quot;The development is very \n> open, the developers friendly, and the code is improving by leaps and \n> bounds,&quot; writes Lamar Owen, RPM Package Maintainer with the \n> PostgreSQL Global Development Group. He says &quot;PostgreSQL has \n> shipped with RedHat Linux as part of the 'Official Boxed Set' since \n> RedHat 5.0.&quot; He also recommends comparing RDBMSes by the \n> &quot;ACID criteria.&quot; These are: &quot;Atomicity, Consistency, \n> Isolation, Durability.&quot;</p>\n> \t\t<p>&#151; Hacking database code is not lightweight \n> work. &quot;Kernel hacking is not a walk in the park, nor is GUI \n> hacking, library hacking, or any other tool hacking,&quot; Owen says, \n> &quot;But, database hacking is a league unto itself....The learning \n> curve for doing back-end database development is the steepest of any \n> project of which I am aware.&quot;</p>\n> \t\t<p>Here are two useful links:</p>\n> \t\t<ul>\n> \t\t\t<li><a \n> href=\"http:/www.freshmeat.net/ppindex/aemons/atabase.html\">The \n> freshmeat.net appindex entry for databases</a>\n> \t\t\t<li><a \n> href=\"http:/www.postgresql.org\">PostgreSQL.org's comparison chart</a>\n> \t\t</ul>\n> \t\t<p>Alert us to more and we'll put them here.</p>\n> \t\t<p>&#151; Doc Searls\n> \t</body>\n> \n> </html>\n> \n> \n> \n> ----------\n> \n> Does that work? 
If so, let's get it up.\n> \n> Doc, in the basement of Moscone, in the surreal Macworld where Apple \n> still, amazingly, lives.\n> \n> ----------\n> Doc Searls\n> Senior Editor, Linux Journal\n> [email protected]\n> http://www.linuxjournal.com\n> Office: 544 Oak Park Way, Emerald Hills, CA 94062-4038\n> Phone: (650) 361-1324 Cell: (206) 849-9586 Fax: (650) 361-1348\n> ----------\n> \n> ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 6 Jan 2000 20:24:35 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> Sounds good to me ...\n> \n> On Thu, 6 Jan 2000, Lamar Owen wrote:\n> \n> > [Ok, I've been in touch with the author of the 'First Major Open Source\n> > Database' article. Here's what he wants to do. Let me know what you\n> > think, and correct any misinformation I may have fed him.]\n\nOk, I'm replying to him with a 'go'.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 06 Jan 2000 19:47:15 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "Thus spake Lamar Owen\n> [Ok, I've been in touch with the author of the 'First Major Open Source\n> Database' article. Here's what he wants to do. Let me know what you\n> think, and correct any misinformation I may have fed him.]\n> [...]\n> - University Ingres, developed starting in 1977, qualifies for the \n> 'First Major Open Source Database' honor. Ingres is the direct \n> ancestor of PostgreSQL.\n\nNot that it is so important but I think that Postgres was a different\nproject by the same person (Dr. Micheal Stonebraker) so it is probably\nmore accurate to call them siblings.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 7 Jan 2000 09:34:58 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> \n> Thus spake Lamar Owen\n> > [Ok, I've been in touch with the author of the 'First Major Open Source\n> > Database' article. Here's what he wants to do. Let me know what you\n> > think, and correct any misinformation I may have fed him.]\n> > [...]\n> > - University Ingres, developed starting in 1977, qualifies for the\n> > 'First Major Open Source Database' honor. Ingres is the direct\n> > ancestor of PostgreSQL.\n> \n> Not that it is so important but I think that Postgres was a different\n> project by the same person (Dr. Micheal Stonebraker) so it is probably\n> more accurate to call them siblings.\n\nHmmmm... You may be right -- I wasn't there (in 1987 I was still a\nsophomore in college, and was still hacking my old Z80-based computer). \nI was quoting Bruce's History of PostgreSQL document, which states:\n\"PostgreSQL began as Ingres, developed at the University of California\nat\nBerkeley(1977-1985). The Ingres code was taken and enhanced by\nRelational Technologies/Ingres Corporation, which produced one of the\nfirst commercially successful relational database servers. 
(Ingres\nCorp. was later purchased by Computer Associates.) Also at Berkeley,\nMichael Stonebraker lead a team to develop an object-relational database\nserver\ncalled Postgres(1986-1994). \"\n\nHmmm... On second read, that seems ambiguous. Does anyone know if the\nfirst Postgres codebase included any Ingres code (the criterion for\n'ancestry')? Or was Postgres (a play on words anyway -- Ingres used the\nQUEL language, others started using SEQUEL (later SQL), Postgres, being\ndifferent, used POSTQUEL) a complete rewrite from the ground up?\n\nNot that it is terribly important, but I am interested in accuracy.\n\nThe Official Documentation doesn't even mention Ingres in its Short\nHistory chapter.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 07 Jan 2000 10:23:55 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> \n> Thus spake Lamar Owen\n> > [Ok, I've been in touch with the author of the 'First Major Open Source\n> > Database' article. Here's what he wants to do. Let me know what you\n> > think, and correct any misinformation I may have fed him.]\n> > [...]\n> > - University Ingres, developed starting in 1977, qualifies for the\n> > 'First Major Open Source Database' honor. Ingres is the direct\n> > ancestor of PostgreSQL.\n> \n> Not that it is so important but I think that Postgres was a different\n> project by the same person (Dr. Micheal Stonebraker) so it is probably\n> more accurate to call them siblings.\n\nAFAIK it was a different project that built heavily on Ingres (and Postgres's\nquery language PostQUEL was an extended version of (University)Ingres' QUEL.\n\nBoth were later commercially extended\n\n + -UniversityIngres(with QUEL) --> Ingres(withSQL)\n |\n \\- Postgres(PostQuel) -+-> Illustra(SQL) -> InformixUDB(SQL)\n \\-> Postgres95(SQL) -> PostgreSQL(SQL)\n\nSo I think that Ancestor is more accurate than Sibling.\n\n------------\nHannu\n", "msg_date": "Fri, 07 Jan 2000 17:45:25 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "> Thus spake Lamar Owen\n> > [Ok, I've been in touch with the author of the 'First Major Open Source\n> > Database' article. Here's what he wants to do. Let me know what you\n> > think, and correct any misinformation I may have fed him.]\n> > [...]\n> > - University Ingres, developed starting in 1977, qualifies for the \n> > 'First Major Open Source Database' honor. Ingres is the direct \n> > ancestor of PostgreSQL.\n> \n> Not that it is so important but I think that Postgres was a different\n> project by the same person (Dr. Micheal Stonebraker) so it is probably\n> more accurate to call them siblings.\n\nI am told _no_ Ingres code went into Postgres. It was a complete\nrewrite.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jan 2000 11:44:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "> \"D'Arcy J.M. 
Cain\" wrote:\n> > \n> > Thus spake Lamar Owen\n> > > [Ok, I've been in touch with the author of the 'First Major Open Source\n> > > Database' article. Here's what he wants to do. Let me know what you\n> > > think, and correct any misinformation I may have fed him.]\n> > > [...]\n> > > - University Ingres, developed starting in 1977, qualifies for the\n> > > 'First Major Open Source Database' honor. Ingres is the direct\n> > > ancestor of PostgreSQL.\n> > \n> > Not that it is so important but I think that Postgres was a different\n> > project by the same person (Dr. Micheal Stonebraker) so it is probably\n> > more accurate to call them siblings.\n> \n> Hmmmm... You may be right -- I wasn't there (in 1987 I was still a\n> sophomore in college, and was still hacking my old Z80-based computer). \n> I was quoting Bruce's History of PostgreSQL document, which states:\n> \"PostgreSQL began as Ingres, developed at the University of California\n> at\n> Berkeley(1977-1985). The Ingres code was taken and enhanced by\n> Relational Technologies/Ingres Corporation, which produced one of the\n> first commercially successful relational database servers. (Ingres\n> Corp. was later purchased by Computer Associates.) Also at Berkeley,\n> Michael Stonebraker lead a team to develop an object-relational database\n> server\n> called Postgres(1986-1994). \"\n\n\n> \n> Hmmm... On second read, that seems ambiguous. Does anyone know if the\n> first Postgres codebase included any Ingres code (the criterion for\n> 'ancestry')? Or was Postgres (a play on words anyway -- Ingres used the\n> QUEL language, others started using SEQUEL (later SQL), Postgres, being\n> different, used POSTQUEL) a complete rewrite from the ground up?\n\nIt was purposely ambiguous.\n\nIt did not use any Ingres code, as told to me by Jolly, I think. My\nbook has Ingres mentioned as an \"ancestor\" of Postgres.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jan 2000 12:15:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "Bruce Momjian wrote:\n> > Hmmm... On second read, that seems ambiguous. \n\n> It was purposely ambiguous.\n\nI was afraid of that. \n\n> It did not use any Ingres code, as told to me by Jolly, I think. My\n> book has Ingres mentioned as an \"ancestor\" of Postgres.\n\nI have e-mailed Doc again, asking him to remove the 'direct' in the line\n'Ingres was the direct ancestor of PostgreSQL' -- direct implies, IMO,\nshared code. Thanks for clarifying, Bruce...\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 07 Jan 2000 13:44:49 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "Thus spake Lamar Owen\n> Bruce Momjian wrote:\n> > It did not use any Ingres code, as told to me by Jolly, I think. My\n> > book has Ingres mentioned as an \"ancestor\" of Postgres.\n> \n> I have e-mailed Doc again, asking him to remove the 'direct' in the line\n> 'Ingres was the direct ancestor of PostgreSQL' -- direct implies, IMO,\n> shared code. 
Thanks for clarifying, Bruce...\n\nI still think that since there is no shared code you can't say that\nIngres was the parent to Postgres, more like an older brother. Guess\nthat makes Ingres PostgreSQL's great uncle. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Fri, 7 Jan 2000 16:03:23 -0500 (EST)", "msg_from": "[email protected] (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "On Fri, 7 Jan 2000, D'Arcy J.M. Cain wrote:\n\n> I still think that since there is no shared code you can't say that\n> Ingres was the parent to Postgres, more like an older brother. Guess\n> that makes Ingres PostgreSQL's great uncle. :-)\n\nStupid question...does it *really* matter? :)\n \nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 7 Jan 2000 17:17:52 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "\"D'Arcy J.M. Cain\" wrote:\n> > I have e-mailed Doc again, asking him to remove the 'direct' in the line\n> > 'Ingres was the direct ancestor of PostgreSQL' -- direct implies, IMO,\n> > shared code. Thanks for clarifying, Bruce...\n\n> I still think that since there is no shared code you can't say that\n> Ingres was the parent to Postgres, more like an older brother. Guess\n> that makes Ingres PostgreSQL's great uncle. :-)\n\nROTFL\n\nThe guys at Linux Journal are very apologetic that they overlooked\nPostgreSQL -- if the consensus is to change from 'ancestor' to some\nother usage (maybe step-ancestor??), then they can do it -- it's not set\nin stone.\n\nI personally am comfortable with 'ancestor' in this usage -- there are\ninstances of where a program was completely rewritten and only a version\nnumber change happened, even with no shared codebase (the webserver\nlogfile analyzer 'analog' has had this happen more than once -- in\nparticular, the code was completely rewritten from scratch between\nversion 2.11 and 3.0. Analog 3.0 shares no code at all with analog 2.11\n-- not necessarily the best software design, but, it's Steven's codebase\nto play with.).\n\nIt's like the relationship between the CERN, NCSA, and Apache\nwebservers. \n\nThey will be at least giving credit where credit is due (like you said,\nit's not a major point).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 07 Jan 2000 16:21:13 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "> It did not use any Ingres code, as told to me by Jolly, I think. My\n> book has Ingres mentioned as an \"ancestor\" of Postgres.\n\nI suppose we could have figured this out ourselves, since Postgres was\noriginally written in Lisp, and afaik Ingres was always C or somesuch\ntraditional compiled-only code. 
We still see evidence of this in our\ncode tree with the way lists and parser nodes are handled.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 08 Jan 2000 03:00:00 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [Fwd: Re: First Major Open Source Database]" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> It did not use any Ingres code, as told to me by Jolly, I think. My\n>> book has Ingres mentioned as an \"ancestor\" of Postgres.\n\n> I suppose we could have figured this out ourselves, since Postgres was\n> originally written in Lisp, and afaik Ingres was always C or somesuch\n> traditional compiled-only code. We still see evidence of this in our\n> code tree with the way lists and parser nodes are handled.\n\nIt's clear from both the comments and remnants of coding conventions\nthat the planner/optimizer was originally Lisp code, and was hand-\ntranslated to C at some point in the dim mists of prehistory (early\n1990s, possibly ;-)). That Lisp heritage is responsible for some of\nthe better things about the code, and also some of the worse things.\n\nBut I'm not sure I believe that *all* of the code was originally\nLisp. I've never heard of a Lisp interface for yacc-generated\nparsers, for example. The parts of the executor I've looked at\ndon't seem nearly as Lispy as the parser/planner/optimizer, either.\nSo it seems possible that parts of Postgres were written afresh in\nLisp while other parts were lifted from an older C implementation.\n\n</idle speculation>\n\nDoes anyone here still recall the origins of Postgres? I'm curious\nto know more about the history of this beast.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 00:37:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Historical trivia (was Re: First Major Open Source Database)" }, { "msg_contents": "I am CC'ing Jolly and Andrew on this. They may know the answer.\n\n---------------------------------------------------------------------------\n\n> Thomas Lockhart <[email protected]> writes:\n> >> It did not use any Ingres code, as told to me by Jolly, I think. My\n> >> book has Ingres mentioned as an \"ancestor\" of Postgres.\n> \n> > I suppose we could have figured this out ourselves, since Postgres was\n> > originally written in Lisp, and afaik Ingres was always C or somesuch\n> > traditional compiled-only code. We still see evidence of this in our\n> > code tree with the way lists and parser nodes are handled.\n> \n> It's clear from both the comments and remnants of coding conventions\n> that the planner/optimizer was originally Lisp code, and was hand-\n> translated to C at some point in the dim mists of prehistory (early\n> 1990s, possibly ;-)). That Lisp heritage is responsible for some of\n> the better things about the code, and also some of the worse things.\n> \n> But I'm not sure I believe that *all* of the code was originally\n> Lisp. I've never heard of a Lisp interface for yacc-generated\n> parsers, for example. The parts of the executor I've looked at\n> don't seem nearly as Lispy as the parser/planner/optimizer, either.\n> So it seems possible that parts of Postgres were written afresh in\n> Lisp while other parts were lifted from an older C implementation.\n> \n> </idle speculation>\n> \n> Does anyone here still recall the origins of Postgres? 
I'm curious\n> to know more about the history of this beast.\n> \n> \t\t\tregards, tom lane\n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jan 2000 01:31:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Historical trivia (was Re: First Major Open Source\n\tDatabase)" } ]
[ { "msg_contents": ">> >IMHO the ANSI standard (as anything designed by a committee) is not\n>> always\n>> the\n>> >best \n>> >way to do things.\nHmmm, quite frequently, yes.\n\n< clip >\n\n>> My understanding from reading Date is that one reason for not adopting\n>> the common vendor hacks for outer joins is that outer joins aren't\n>> associative and the result of complicated expressions in some cases\n>> will depend on the order in which the RDBMS' optimizer chooses to\n>> execute them.\nYes, that was what I thought as well. I'm not sure what the ANSI standard\nspecifies, but if it's the style used below (which is what Access uses),\nthen it's not overly complicated, but can be a little difficult to read\nsometimes.\n\n>> Putting on my compiler-writer hat, I can see where having joins\n>> explicitly declared in the \"from\" clauses rather than derived from\n>> an analysis of the \"from\" and \"where\" clauses might well simplify\n>> the development of a new SQL 92 RDBMS if one were to start from\n>> scratch. It's cleaner, IMO. This doesn't apply to Postgres,\n>> since the outer joins are being shoe-horned into existing data\n>> structures.\nDoes that make any difference?\n\n>> Of course, I speak as someone without\n>> a lot of experience writing Oracle or Informix SQL. If you're used\n>> to Oracle, then it's not surprising you find its means of specifying\n>> an outer join the most natural and easiest to understand...\nYes, you're probably right. Although, I learnt SQL using Access and SQL\nServer, doing the 'A INNER JOIN B ON A.x = B.x' syntax, and I still prefer\nOracle's way of doing it. From the developers point of view, it's pretty\neasy to read (mainly because it doesn't clutter your FROM clause). Perhaps\nthe best would be to implement ANSI outer joins, and then use the rewriter\nto allow for the Oracle syntax, or something similar, just to add\nreadability to the SQL.\n\nMikeA\n", "msg_date": "Fri, 7 Jan 2000 09:06:23 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "> the best would be to implement ANSI outer joins, and then use the \n> rewriter to allow for the Oracle syntax, or something similar, just to \n> add readability to the SQL.\n\nWhen I tried adding the syntax to gram.y a while ago it gave\nshift/reduce errors on the \"(+)\" fields. I would guess that we would\nneed to have this become a token in the lexer :((\n\nI'll have the code in gram.y (commented out) when I commit my next\nchanges for this; someone can play with it if they want.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 07 Jan 2000 14:32:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "On Fri, 7 Jan 2000, Thomas Lockhart wrote:\n\n> > the best would be to implement ANSI outer joins, and then use the \n> > rewriter to allow for the Oracle syntax, or something similar, just to \n> > add readability to the SQL.\n> \n> When I tried adding the syntax to gram.y a while ago it gave\n> shift/reduce errors on the \"(+)\" fields. I would guess that we would\n> need to have this become a token in the lexer :((\n> \n\nI'd actually expect a potential reduce/reduce conflict between:\n\na_expr:\n\tfunc_name '(' ... 
')'\n\tand a_expr '(' '+' ')'\n\nsince func_name is a ColID as is a_expr potentially, so the system must\nwork out whether the ColID reduces to an a_expr when it sees then '('\nwhich it can't do (it needs another token to work that out).\n\nSo yes, you probably need a new lexer token.\n\n> I'll have the code in gram.y (commented out) when I commit my next\n> changes for this; someone can play with it if they want.\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n.............................Rod\n\n+-----------------------------------------------------------------------------+\n| Rod Chamberlin | [email protected] Tel +44 1703 232345 |\n| Software Engineer | Mob +44 7803 295406 |\n| QueriX | Fax +44 1703 399685 |\n+-----------------------------------------------------------------------------+\n| The views expressed in this document do not necessarily represent those of |\n| the management of QueriX (UK) Ltd. |\n+-----------------------------------------------------------------------------+\n\n\n", "msg_date": "Fri, 7 Jan 2000 14:44:57 +0000 (GMT)", "msg_from": "Rod Chamberlin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" } ]
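If the "(+)" marker does become a single token, as Thomas and Rod both suggest, the conflict disappears because the parser no longer has to commit to anything when it sees a bare '('. A sketch of what that could look like follows; the token name, helper function, and exact rule placement are invented for illustration and are not from the actual scan.l/gram.y:

    /* scan.l: hand the whole marker to the parser as one token */
    "(+)"           { return ORACLE_JOIN_MARK; }

    /* gram.y: the marker then attaches to an ordinary expression
     * as a postfix, without any reduce/reduce ambiguity */
    %token ORACLE_JOIN_MARK

    a_expr:   ...
            | a_expr ORACLE_JOIN_MARK
                    {
                        /* flag $1 as the nullable side of an outer
                         * join; makeOracleJoinMark() is hypothetical */
                        $$ = makeOracleJoinMark($1);
                    }
            ;

(Oracle also tolerates whitespace inside the marker, e.g. "( + )", so a production-quality lexer rule would probably need to be a little more forgiving than the literal pattern above.)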
[ { "msg_contents": "Sorry for the off-topic question, but...\n\nI've got a (laptop) system running Mandrake 6.1 which is configured\nout of the box to disallow core dumps from users. root is allowed to\nincrease the size limit (from tcsh, use \"limit coredumpsize\nunlimited\") but users are not allowed to do this for themselves.\n\nWhere does one specify this parameter on a system-wide basis? My older\nRedHat boxes all have a non-zero limit for this parameter, and allow\nsetting the limit to infinity by users. Don't know if Mandrake is\nconfigured differently from RH6.1, but until I get this adjusted it\ndoesn't make a reasonable development machine...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 07 Jan 2000 07:35:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "(OT) Linux limits" }, { "msg_contents": "> > I've got a (laptop) system running Mandrake 6.1 which is configured\n> > out of the box to disallow core dumps from users. root is allowed to\n> > increase the size limit (from tcsh, use \"limit coredumpsize\n> > unlimited\") but users are not allowed to do this for themselves.\n> are you looking for /etc/security/limits.conf ?\n\nThanks for the tip, and it looks like the right thing, but adding\nentries for core and rebooting does not help. I then tried upping a\nbrute-force limit of zero imposed in the daemon startup function in\n/etc/rc.d/init.d/functions thinking that inetd or loginout or somesuch\nprocess might need to be higher (since all children inherit these\nlimits apparently), but that does not seem to help. \n\nIt is set to zero in /etc/profile, and commented out in\n/etc/csh.cshrc, but afaik anything set at that point should be able to\nbe set higher later. There is a cryptic comment in /etc/profile saying\nthat \"for bash2 it can't be set higher for user processes\", but I\ndon't know what that's about.\n\nCan someone running a Mandrake6.1 or RH6.1 system take a look at their\nsystem limits (for csh use \"limit\", for bash use \"ulimit -a\"). Are\nthey greater than zero for the coredumpsize??\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 07 Jan 2000 15:39:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] (OT) Linux limits" }, { "msg_contents": "Thomas Lockhart wrote:\n> setting the limit to infinity by users. Don't know if Mandrake is\n> configured differently from RH6.1, but until I get this adjusted it\n> doesn't make a reasonable development machine...\n\nMy experience has been that starting with version 6.0 Mandrake is\ndiverging from RedHat. Mandrake 5.3 can properly be called 'RedHat\n5.2+KDE+enhancements' -- Mandrake 6.0 and 6.1, being released before\ntheir RedHat counterparts, are not nearly as close.\n\nI tried using Mandrake 6.0 to build RPMs, and quickly replaced it with\nRedHat 6.0 -- Mandrake 6.0 used pgcc instead of egcs, for one. Caused\nme all manner of grief. Mandrake 6.1 may be better in this regard, but\nI am sticking with RedHat for the time being, as it is the current\nbaseline target of the RPM distribution. 
From what I understand, the\nRedHat binary RPM's still work with Mandrake.\n\nMandrake is now a full-fledged distribution, not just another RedHat\nknock-off.\n\nI'm going to have to get my home machine into a multidevelopment mode,\nwith RedHat, Caldera, SuSE, and Mandrake multibooting, as each of these\nRPM-based distributions is different, although Mandrake and RedHat are\nmore alike than SuSE and Caldera. Or, you can help me with Mandrake\nissues in both the source and binary RPM's, just as I am getting\nassistance from others with the Alpha patches, building/installing the\nRPM's under SuSE and Caldera, and other architecture (ARM and MIPS come\nto mind) issues.\n\nPortability amongst Linux distributions is becoming nearly as big an\nissue as portability amongst different Unices.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 07 Jan 2000 10:42:09 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] (OT) Linux limits" }, { "msg_contents": "Thomas Lockhart wrote:\n> Can someone running a Mandrake6.1 or RH6.1 system take a look at their\n> system limits (for csh use \"limit\", for bash use \"ulimit -a\"). Are\n> they greater than zero for the coredumpsize??\n\nMandrake 5.3:\n[lowen@www lowen]$ ulimit -a\ncore file size (blocks) 1000000\ndata seg size (kbytes) unlimited\nfile size (blocks) unlimited\nmax memory size (kbytes) unlimited\nstack size (kbytes) 8192\ncpu time (seconds) unlimited\nmax user processes 256\npipe size (512 bytes) 8\nopen files 256\nvirtual memory (kbytes) 2105343\n[lowen@www lowen]$\n\nRedHat 6.0:\n[lowen@backup lowen]$ ulimit -a\ncore file size (blocks) 1000000\ndata seg size (kbytes) unlimited\nfile size (blocks) unlimited\nmax memory size (kbytes) unlimited\nstack size (kbytes) 8192\ncpu time (seconds) unlimited\nmax user processes 256\npipe size (512 bytes) 8\nopen files 1024\nvirtual memory (kbytes) 2105343\n[lowen@backup lowen]$\n\nRedHat 6.1:\n[lowen@utility lowen]$ ulimit -a\ncore file size (blocks) 1000000\ndata seg size (kbytes) unlimited\nfile size (blocks) unlimited\nmax memory size (kbytes) unlimited\nstack size (kbytes) 8192\ncpu time (seconds) unlimited\nmax user processes 2048\npipe size (512 bytes) 8\nopen files 1024\nvirtual memory (kbytes) 2105343\n[lowen@utility lowen]$\n\nDon't have a Mandrake 6.1 system up.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 07 Jan 2000 10:46:54 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] (OT) Linux limits" }, { "msg_contents": "On Fri, 7 Jan 2000, Thomas Lockhart wrote:\n\n> > > I've got a (laptop) system running Mandrake 6.1 which is configured\n> > > out of the box to disallow core dumps from users. root is allowed to\n> > > increase the size limit (from tcsh, use \"limit coredumpsize\n> > > unlimited\") but users are not allowed to do this for themselves.\n> > are you looking for /etc/security/limits.conf ?\n> \n> Thanks for the tip, and it looks like the right thing, but adding\n> entries for core and rebooting does not help. I then tried upping a\n> brute-force limit of zero imposed in the daemon startup function in\n> /etc/rc.d/init.d/functions thinking that inetd or loginout or somesuch\n> process might need to be higher (since all children inherit these\n> limits apparently), but that does not seem to help. \n\nUnder FreeBSD, we have a similar file: login.conf ... after modifying it,\nthough, you have to run a command to \"compile\" it ... 
do you have\nsomething similar?\n\n\n", "msg_date": "Fri, 7 Jan 2000 11:49:24 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] (OT) Linux limits" }, { "msg_contents": "Hello Thomas,\n \n> Where does one specify this parameter on a system-wide basis? My older\n> RedHat boxes all have a non-zero limit for this parameter, and allow\n> setting the limit to infinity by users. Don't know if Mandrake is\n> configured differently from RH6.1, but until I get this adjusted it\n> doesn't make a reasonable development machine...\n\nI have Mandrake 6.0 though I think I know your problem. In \n/etc/profile, there's an entry there where it limits core dumps to \n100000 I think for root. You might want to remove that or make it \nunlimited.\n\nRegards,\n\nNeil D. Quiogue\nSTO - dotPH, Inc.\n\n \"Nothing great was ever achieved without enthusiasm.\"\n - Ralph Waldo Emerson\n", "msg_date": "Fri, 7 Jan 2000 16:08:29 +0000", "msg_from": "\"neil d. quiogue\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] (OT) Linux limits" } ]
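For what it's worth, the behavior Thomas is fighting is the usual soft-limit/hard-limit split: an unprivileged process may raise its soft core limit up to the hard limit, but raising the hard limit itself requires root, so a root-owned startup file that zeroes the hard limit locks ordinary users out no matter what they type later. A small stand-alone probe (plain POSIX getrlimit()/setrlimit(), nothing Postgres-specific) makes it easy to see which case a given box is in:

    #include <stdio.h>
    #include <sys/resource.h>

    int
    main(void)
    {
        struct rlimit rl;

        /* report the current soft and hard core-file limits */
        getrlimit(RLIMIT_CORE, &rl);
        printf("core: soft=%lu hard=%lu\n",
               (unsigned long) rl.rlim_cur, (unsigned long) rl.rlim_max);

        /* try to raise both to "unlimited"; for a non-root process
         * this fails with EPERM whenever the hard limit is finite,
         * which is exactly the symptom described above */
        rl.rlim_cur = rl.rlim_max = RLIM_INFINITY;
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit(RLIMIT_CORE)");
        return 0;
    }

If the hard limit prints as 0 for an ordinary user, the fix has to happen in whatever root-owned file is clamping it (/etc/profile, /etc/security/limits.conf, or the init scripts), not in the user's own shell.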
[ { "msg_contents": "I had written a pgm in Java on a Windows m/c\nwhich accesses data from a linux m/c.The database\nis stored in the linux m/c.I added postgresql.jar file(from Linux m/c)\nfor my proj.But when I run the pgm,the following Exception \nis displayed.\n The postgresql.jar file does not contain the correct\nJDBC classes for this JVM.\nTry rebuilding.\nException thrown \n java.lang.ClassNotFound Exception.\nI want to know is there any JDBC driver for Windows\nin postgresql??\n If so,would u please let me know where is it available??\n\nThanking you,\nShanthala.\n\n\n\n\n\n\n\n\n\nI had written a pgm in Java on a Windows m/c\nwhich accesses data from a linux m/c.The database\nis stored in the linux m/c.I added postgresql.jar file(from Linux \nm/c)\nfor my proj.But when I run the pgm,the following Exception \nis displayed.\n The postgresql.jar file does not contain the correct\nJDBC classes for this JVM.\nTry rebuilding.\nException thrown \n   java.lang.ClassNotFound Exception.\nI want to know is there any JDBC driver for Windows\nin postgresql??\n If so,would u please let me know where is it available??\n \nThanking you,\nShanthala.", "msg_date": "Fri, 7 Jan 2000 17:17:35 +0900", "msg_from": "\"Shanthala Rao\" <[email protected]>", "msg_from_op": true, "msg_subject": "please help" } ]
[ { "msg_contents": " Hi,\n\n Besides the docs says I can do an \"update table*\" or \"delete table*\" \nit doesn't work.\n Is it intended to work soon ?\n\n I'm in a project with six levels of inheritance and would be easier\nif it worked.\n\n []'s\n\nMateus Cordeiro Inssa\n---------------------\nLinux User: 76186 Kernel: 2.3.36\nICQ (Licq): 15243895\n---------------------\[email protected]\[email protected]\n\nFri Jan 7 10:55:42 EDT 2000\n", "msg_date": "Fri, 7 Jan 2000 10:55:43 -0200 (EDT)", "msg_from": "Mateus Cordeiro Inssa <[email protected]>", "msg_from_op": true, "msg_subject": "Inheritance" }, { "msg_contents": "Mateus Cordeiro Inssa <[email protected]> writes:\n> Besides the docs says I can do an \"update table*\" or \"delete table*\" \n> it doesn't work.\n> Is it intended to work soon ?\n\nIt's on the to-do list, but I don't think anyone is planning to get\naround to it for this release. Hard to say when it might happen.\n\nUsually the way these kinds of things work is that someone who\nreally needs the feature goes in and writes the code for it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jan 2000 10:39:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Inheritance " } ]
[ { "msg_contents": "Hello,\n\n> I have Mandrake 6.0 though I think I know your problem. In \n> /etc/profile, there's an entry there where it limits core dumps to \n> 100000 I think for root. You might want to remove that or make it \n> unlimited.\n\nI forgot.... it's ulimit -c 100000 that's limiting the core dumps.\n\nRegards, \n\nNeil D. Quiogue\nSTO - dotPH, Inc.\n\n \"Nothing great was ever achieved without enthusiasm.\"\n - Ralph Waldo Emerson\n", "msg_date": "Fri, 7 Jan 2000 16:11:22 +0000", "msg_from": "\"neil d. quiogue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] (OT) Linux limits" } ]
[ { "msg_contents": "1. The AOL car would have a TOP speed of 40 MPH yet have a 200 MPH Speedometer. \n\n2. The AOL car would come equipped with a NEW and fantastic 8-Track tape player. \n\n3. The car would often refuse to start and owners would just expect this and try again later. \n\n4. The windshield would have an extra dark tint to protect the driver from seeing better cars. \n\n5. AOL would sell the same model car year after year and claim it's the NEW model. \n\n6. Every now and then the brakes on the AOL car would just \"lock-up\" for no apparent reason. \n\n7. The AOL car would have a very plain body style but would have lots'a of pretty colors and lights. \n\n8. The AOL car would have only one door but it would have 5 extra seats for family members. \n\n9. AOL car mechanics would have no experience whatsoever in car repair.\n\n10. If an AOL car owner received 3 parking tickets AOL would take the car off of them. \n\n11. The AOL car would have an AOL Cell phone that can only place calls to other AOL car cell phones. \n\n12. AOL would pass a new car law forbidding AOL car owners from driving near other car dealerships. \n13. Younger AOL car drivers would be able to make other peoples AOL cars stall just for fun. \n\n14. It would not be possible to upgrade your AOL car stereo. \n\n15. AOL cars would be forced to use AOL gas that cost 20% more and gave worse mileage. \n\n16. Anytime an AOL car owner saw another AOL car owner he would wonder, M/F/age? \n\n17. It would be common for AOL car owners to divorce just to marry another AOL car owner. \n\n18. AOL car owners would always claim to be older or younger than they really are. \n\n19. AOL cars would come with a steering wheel and AOL would claim no other cars have them. \n\n20. Every time you close the door on the AOL car it would say,\"Good-Bye.\" \n\nA ofr pr ke fk\nllea o pzzs kbt yusf\nuso etp mqy zeo\nlem kjeefv mmdrhm fbwzes oeru kwiof?\n\nKele slfeei eifp efybmo szrtev pb!\n\nGdju aii smeucue osbeua lnpep\npuai gmsoz nml lxq\nfed qs efz pmc kam byhs\nbffkw bue pprbde edif naxpb?\n\nO bnlaayn sd ukuunic mlmu ekkfu\npabuf epkue eyi o nfki\nlsys xsbn okzbo ucfnu fu\nkxf nqsc eab bpb mnps\nmmyevp aoeba pyeikh nmk ibmmx zw\nsp izkkz mlmb psml iifm!\n\nLlmfgp iqb dlrb fphmr cdm jnk\nfn bg ljea qlu jlalb\ncdi eelnkle syk uez i olsi\ncauiic mwki xsvus jpgm\nefqi ceef iabar jyees slr jas?\n\n\n\n", "msg_date": "Fri, 07 Jan 2000 16:39:05 GMT", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": ".,.IF AOL WAS A CAR..,," } ]
[ { "msg_contents": "I'm trying to compile the src/interfaces/jdbc postgresql.jar file but I\nget a syntax error (pg 6.5.2):\n/bin/sh: syntax error at line 1: '(' unexpected\nmake: *** [all] Error 2\n\nFunny thing is that I'm under the C shell.....using gcc, Solaris 7 on a\nSparc.\n\nI tried to replace $( ) with ' ' per the instructions but there is some\nsyntax that I am not familiar with: $$($(JAVA) makeVersion). How should\nthis look after the replacement?\n\nReally appreciate help. I've developed 3k SLOC under Visual Cafe\n(managed to rewrite out all Symantec classes). It's running fine in the\nVisual Cafe environment. Now I need to field it under the Netscape\nSuitespot server.\n\nThanks\n\nAllan in Belgium\n\n\n", "msg_date": "Fri, 07 Jan 2000 17:40:36 +0100", "msg_from": "\"Allan Huffman\" <[email protected]>", "msg_from_op": true, "msg_subject": "[HACKERS] make JDBC postgresql.jar error" }, { "msg_contents": "Allan Huffman wrote:\n\n> I'm trying to compile the src/interfaces/jdbc postgresql.jar file but I\n> get a syntax error (pg 6.5.2):\n> /bin/sh: syntax error at line 1: '(' unexpected\n> make: *** [all] Error 2\n\nThis has bitten me also. I'm using a sparc, solaris 7, bash , gcc version\n2.95.2. This is a KLUDGE, but here is a patch that works for BASH:\n\nvlad: diff -w3c Makefile /opt/java/pgsql/Makefile\n*** Makefile Wed Jun 23 00:56:17 1999\n--- /opt/java/pgsql/Makefile Tue Oct 5 09:20:17 1999\n***************\n*** 16,21 ****\n--- 16,22 ----\n JAVADOC = javadoc\n RM = rm -f\n TOUCH = touch\n+ SHELL = /bin/bash\n\n # This defines how to compile a java class\n .java.class:\n\n\n--\nBrian Millett\nEnterprise Consulting Group \"Heaven can not exist,\n(314) 205-9030 If the family is not eternal\"\[email protected] F. Ballard Washburn\n\n\n\n", "msg_date": "Fri, 07 Jan 2000 11:17:05 -0600", "msg_from": "Brian P Millett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make JDBC postgresql.jar error" } ]
[ { "msg_contents": "\nJust has a support call where the client couldn't drop a table, giving him\na 'No such file or directory' error...yet a \\d showed him that the table\nexisted...\n\nTo \"fix\" the problem, we got him to shutdown his server, touch the file\nthat it says is missing, bring the server back up and then drop\nit...which, of course, succeeded...\n\nBut...does it make sense to error-out in this case? The user wants to get\nrid of the table, the table is already gone physically, just not\nvirtually...so why not just get rid of the virtual entries also?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Fri, 7 Jan 2000 16:44:16 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Table drop that fails ... \"No such file or directory\"" }, { "msg_contents": "> \n> Just has a support call where the client couldn't drop a table, giving him\n> a 'No such file or directory' error...yet a \\d showed him that the table\n> existed...\n> \n> To \"fix\" the problem, we got him to shutdown his server, touch the file\n> that it says is missing, bring the server back up and then drop\n> it...which, of course, succeeded...\n> \n> But...does it make sense to error-out in this case? The user wants to get\n> rid of the table, the table is already gone physically, just not\n> virtually...so why not just get rid of the virtual entries also?\n\nIt shows something very strange happened to him. We don't want this\nkind of thing to just happen.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jan 2000 16:05:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Table drop that fails ... \"No such file or directory\"" }, { "msg_contents": "On Fri, 7 Jan 2000, Bruce Momjian wrote:\n\n> > But...does it make sense to error-out in this case? The user wants to get\n> > rid of the table, the table is already gone physically, just not\n> > virtually...so why not just get rid of the virtual entries also?\n> \n> It shows something very strange happened to him. We don't want this\n> kind of thing to just happen.\n\nOkay...what should be done? How do you trace something like this back?\n\nThe scenario for this particular table, as it was explained to me, was\nthat its the result of a join of two other tables...they find that its\neasier to do teh join into one table periodically and use that for\nselects, then doing SELECT/JOINS on the fly ... my thought was that what\nmay have happened is they ran out of disk space on the JOIN, the file was\nremoved, but not the traces in the systems files, is this a possibility?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 7 Jan 2000 17:13:50 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Table drop that fails ... \"No such file or directory\"" }, { "msg_contents": "> On Fri, 7 Jan 2000, Bruce Momjian wrote:\n> \n> > > But...does it make sense to error-out in this case? 
The user wants to get\n> > > rid of the table, the table is already gone physically, just not\n> > > virtually...so why not just get rid of the virtual entries also?\n> > \n> > It shows something very strange happened to him. We don't want this\n> > kind of thing to just happen.\n> \n> Okay...what should be done? How do you trace something like this back?\n\nThat is the big question. We need to hear about these problems, because\nthe represent something very strange happening. Somehow, the physical\ntable was deleted, but it still existed in the system tables.\n\n> \n> The scenario for this particular table, as it was explained to me, was\n> that its the result of a join of two other tables...they find that its\n> easier to do teh join into one table periodically and use that for\n> selects, then doing SELECT/JOINS on the fly ... my thought was that what\n> may have happened is they ran out of disk space on the JOIN, the file was\n> removed, but not the traces in the systems files, is this a possibility?\n\nMaybe a SELECT INTO failed. You would think it could delete the entries\ntoo, or at least the transaction that created the table would be marked\nas aborted.\n\nNot sure how to debug that one.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jan 2000 16:16:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Table drop that fails ... \"No such file or directory\"" }, { "msg_contents": "On Fri, 7 Jan 2000, Bruce Momjian wrote:\n\n> > On Fri, 7 Jan 2000, Bruce Momjian wrote:\n> > \n> > > > But...does it make sense to error-out in this case? The user wants to get\n> > > > rid of the table, the table is already gone physically, just not\n> > > > virtually...so why not just get rid of the virtual entries also?\n> > > \n> > > It shows something very strange happened to him. We don't want this\n> > > kind of thing to just happen.\n> > \n> > Okay...what should be done? How do you trace something like this back?\n> \n> That is the big question. We need to hear about these problems, because\n> the represent something very strange happening. Somehow, the physical\n> table was deleted, but it still existed in the system tables.\n\nHmmm...problem is, I would think, in most cases it wouldn't be noticed\nuntil a later time, this case as an example...\n\n> > The scenario for this particular table, as it was explained to me, was\n> > that its the result of a join of two other tables...they find that its\n> > easier to do teh join into one table periodically and use that for\n> > selects, then doing SELECT/JOINS on the fly ... my thought was that what\n> > may have happened is they ran out of disk space on the JOIN, the file was\n> > removed, but not the traces in the systems files, is this a possibility?\n> \n> Maybe a SELECT INTO failed. You would think it could delete the entries\n> too, or at least the transaction that created the table would be marked\n> as aborted.\n\nOr it could be nothing more then a hard crash of the server :( \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 7 Jan 2000 17:59:25 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Table drop that fails ... \"No such file or directory\"" }, { "msg_contents": "> > > The scenario for this particular table, as it was explained to me, was\n> > > that its the result of a join of two other tables...they find that its\n> > > easier to do teh join into one table periodically and use that for\n> > > selects, then doing SELECT/JOINS on the fly ... my thought was that what\n> > > may have happened is they ran out of disk space on the JOIN, the file was\n> > > removed, but not the traces in the systems files, is this a possibility?\n> > \n> > Maybe a SELECT INTO failed. You would think it could delete the entries\n> > too, or at least the transaction that created the table would be marked\n> > as aborted.\n> \n> Or it could be nothing more then a hard crash of the server :( \n\nThat could not mark the transaction that completed the table as\ncommitted.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 7 Jan 2000 17:01:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Table drop that fails ... \"No such file or directory\"" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of The Hermit\n> Hacker\n>\n> On Fri, 7 Jan 2000, Bruce Momjian wrote:\n>\n> > > On Fri, 7 Jan 2000, Bruce Momjian wrote:\n> > >\n> > > > > But...does it make sense to error-out in this case? The\n> user wants to get\n> > > > > rid of the table, the table is already gone physically, just not\n> > > > > virtually...so why not just get rid of the virtual entries also?\n> > > >\n> > > > It shows something very strange happened to him. We don't want this\n> > > > kind of thing to just happen.\n> > >\n> > > Okay...what should be done? How do you trace something like\n> this back?\n> >\n> > That is the big question. We need to hear about these problems, because\n> > the represent something very strange happening. Somehow, the physical\n> > table was deleted, but it still existed in the system tables.\n>\n> Hmmm...problem is, I would think, in most cases it wouldn't be noticed\n> until a later time, this case as an example...\n>\n\nIsn't it the result of a DROP TABLE failure ?\nmdunlink() removes the base file of a relation immediately and\nthe file couldn't be rollbacked in case of abort.\n\nI have already made it possible to DROP TABLE even though there\naren't base files of relation/indexes in current tree 2 months ago.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Sat, 8 Jan 2000 08:41:47 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Table drop that fails ... \"No such file or directory\"" }, { "msg_contents": "Hiroshi Inoue wrote:\n> > > That is the big question. We need to hear about these problems, because\n> > > the represent something very strange happening. 
Somehow, the physical\n> > > table was deleted, but it still existed in the system tables.\n> >\n> > Hmmm...problem is, I would think, in most cases it wouldn't be noticed\n> > until a later time, this case as an example...\n> >\n> \n> Isn't it the result of a DROP TABLE failure ?\n> mdunlink() removes the base file of a relation immediately and\n> the file couldn't be rollbacked in case of abort.\n> \n> I have already made it possible to DROP TABLE even though there\n> aren't base files of relation/indexes in current tree 2 months ago.\n> \n> Regards.\n> \n> Hiroshi Inoue\n>\n\nSounds like it was caused by DDL statements in aborted\ntransactions to me....\n\nMike (implicit commit, again) Mascari\n", "msg_date": "Fri, 07 Jan 2000 20:11:06 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Table drop that fails ... \"No such file or directory\"" } ]
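To make the failure mode above concrete, here is a minimal sketch of how one could look for the inconsistency being discussed: pg_class rows whose heap file no longer exists. It is only an illustration, assuming the 6.5-era layout in which a relation's heap file is named after the relation under the database's directory; the data-directory path and database name below are placeholders, not anything from the thread.

/* orphan_check.c -- editor's sketch, not project code */
#include <stdio.h>
#include <sys/stat.h>
#include "libpq-fe.h"

int
main(void)
{
	/* assumed 6.5-era layout: $PGDATA/base/<dbname>/<relname> */
	const char *dbdir = "/usr/local/pgsql/data/base/mydb";
	PGconn	   *conn = PQconnectdb("dbname=mydb");
	PGresult   *res;
	int			i;

	if (PQstatus(conn) == CONNECTION_BAD)
	{
		fprintf(stderr, "%s", PQerrorMessage(conn));
		return 1;
	}
	/* ordinary user relations only */
	res = PQexec(conn, "SELECT relname FROM pg_class "
					   "WHERE relkind = 'r' AND relname !~ '^pg_'");
	for (i = 0; i < PQntuples(res); i++)
	{
		char		path[1024];
		struct stat st;

		snprintf(path, sizeof(path), "%s/%s", dbdir, PQgetvalue(res, i, 0));
		if (stat(path, &st) != 0)
			printf("catalog entry with no file: %s\n", PQgetvalue(res, i, 0));
	}
	PQclear(res);
	PQfinish(conn);
	return 0;
}

A hit from this scan is exactly the state described above: the file is gone (failed SELECT INTO, interrupted DROP TABLE, disk full), while the catalog rows survive.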
[ { "msg_contents": "I don't think there is any question about implementing ANSI syntax for outer\njoins. There are two compelling reasons for this:\n\na) it is a stated goal of the PostgreSQL project to be ANSI compliant, and\nto be a testbed for new ANSI features (this implies adherence to ANSI before\nyou can go extending it with new features)\n\nb) it seems to be the most accurate (if not the prettiest) way of specifying\nan outer join\n\nFor these reasons, ANSI should be used for the implementation. If, however,\nwe manage to provide a second, short-cut method for getting the same result,\nso be it. However, this then begs the question: which short method to use.\n\nWell, I propose that once outer joins have been implemented (using ANSI\nsyntax, many thanks to Bruce and Thomas) we have a quick vote to see what\npeople like, and whoever feels like implementing it can go for it. I would\nalso suggest that only one short method is used, because otherwise we'll\nland up having to support n different syntaxes, each of which is not widely\nenough used to justify the work.\n\n\n\nMikeA\n\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian\nTo: Rod Chamberlin\nCc: Thomas Lockhart; Don Baccus; Ansley, Michael; 'The Hermit Hacker ';\n'[email protected] '\nSent: 1/7/00 6:23 PM\nSubject: Re: [HACKERS] SQL outer join syntax\n\n> > > I can't imagine how I would answer a question: \"How do I do an\nANSI\n> > > outer join\". It would need its own FAQ page.\n> > \n> > Well, *you're* the one writing the book :))\n> > \n> \n> I'd have thought this gave him justtification to complain about your\n> horrible syntax then:)\n\nThe big problem is that is no Thomas's syntax, but the ANSI syntax, and\nthere doesn't seem to be any vendor-neutral solution for outer joins\nother than the ANSI one.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n", "msg_date": "Sat, 8 Jan 2000 00:08:53 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] SQL outer join syntax" } ]
[ { "msg_contents": "\n This is a transcript of what I am trying:\n\n# setenv PGDATA2 /web/sites/1.192/db-data\n# initlocation PGDATA2\ninitlocation: input argument points to /web/sites/1.192/db-data\nWe are initializing the database area with username root (uid=0).\nThis user will own all the files and must also own the server process.\n\nCreating Postgres database system directory /web/sites/1.192/db-data\n\nCreating Postgres database system directory /web/sites/1.192/db-data/base\n\n# createdb -D PGDATA2\nERROR: Unable to locate path 'PGDATA2/root'\n This may be due to a missing environment variable in the server\ncreatedb: database creation failed on root.\n\n\n", "msg_date": "Fri, 07 Jan 2000 22:02:33 -0800", "msg_from": "Chris Griffin <[email protected]>", "msg_from_op": true, "msg_subject": "createdb -D xxxx not working" }, { "msg_contents": "Chris Griffin <[email protected]> writes:\n> # setenv PGDATA2 /web/sites/1.192/db-data\n> # initlocation PGDATA2\n> # createdb -D PGDATA2\n\nSurely you want $PGDATA2 in the latter two lines?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 01:47:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] createdb -D xxxx not working " }, { "msg_contents": "On 2000-01-08, Tom Lane mentioned:\n\n> Chris Griffin <[email protected]> writes:\n> > # setenv PGDATA2 /web/sites/1.192/db-data\n> > # initlocation PGDATA2\n> > # createdb -D PGDATA2\n> \n> Surely you want $PGDATA2 in the latter two lines?\n\nYou might recall me mentioning that the whole alternative location\nbusiness doesn't work in the first place. I'll fix that this week; Chris\nwill have to wait until the next release.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Tue, 11 Jan 2000 14:26:42 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] createdb -D xxxx not working " }, { "msg_contents": "[hackers added to cc: list]\n\nPeter Eisentraut <[email protected]> writes:\n> On 2000-01-08, Tom Lane mentioned:\n>> Chris Griffin <[email protected]> writes:\n>>>> # setenv PGDATA2 /web/sites/1.192/db-data\n>>>> # initlocation PGDATA2\n>>>> # createdb -D PGDATA2\n>> \n>> Surely you want $PGDATA2 in the latter two lines?\n\n> You might recall me mentioning that the whole alternative location\n> business doesn't work in the first place. I'll fix that this week; Chris\n> will have to wait until the next release.\n\nFine. BTW, am I the only one who thinks that it's really silly for\ninitlocation to expect to be given an unexpanded environment variable\nname? That's not normal practice (either elsewhere in Postgres, or\nanywhere else that I know of). It's confusing because it's not how\nan ordinary Unix user would expect a shell command to behave, and it\ndoesn't buy any functionality that I can see. 
I'd vote for taking\nout the auto-expansion, so that the correct invocation becomes\n\tinitlocation $PGDATA2\nwhich is what an average user would expect.\n\nActually, after looking at what initlocation really does, which is\ndarn near nothing, I wonder whether we shouldn't dispense with it\naltogether...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jan 2000 10:14:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] createdb -D xxxx not working " }, { "msg_contents": "> >> Surely you want $PGDATA2 in the latter two lines?\n> > You might recall me mentioning that the whole alternative location\n> > business doesn't work in the first place.\n\nI'm not recalling the details here; what is the problem? It works for\nme (but then I wrote it ;)\n\n> Fine. BTW, am I the only one who thinks that it's really silly for\n> initlocation to expect to be given an unexpanded environment variable\n> name? That's not normal practice (either elsewhere in Postgres, or\n> anywhere else that I know of). It's confusing because it's not how\n> an ordinary Unix user would expect a shell command to behave, and it\n> doesn't buy any functionality that I can see.\n\nIt is not silly at all, unless you want to shoot holes in the basic\npremise of the alternate location. This is described in the docs, but\nthe short form is:\n\ninitlocation is used to create the directory structure *with correct\npermissions* for an alternate location. It takes an environment\nvariable because usually the backend should have that same environment\nvariable defined to ensure consistancy and as a security measure. The\nenvironment variable can be expanded or not; initlocation does the\nright thing in either case.\n\ncreatedb takes a \"-D\" argument, which specifies the alternate\nlocation. Unless allowed at compile-time, via the\nALLOW_ABSOLUTE_DBPATHS variable, all references to an alternate\nlocation must refer to an environment variable, to give the dbadmin\ncontrol over *where* alternate locations will appear. The mechanism is\nenforced by the backend, by looking for a directory delimiter in the\ndatpath field of pg_database, then expanding the first field as an\nenvironment variable. On my system, with one database in an alternate\nlocation, this table looks like:\n\ntest=# select * from pg_database;\n datname | datdba | encoding | datpath \n------------+--------+----------+--------------\n template1 | 100 | 0 | template1\n postgres | 100 | 0 | postgres\n regression | 100 | 0 | regression\n test | 100 | 0 | PGDATA2/test\n(4 rows)\n\nSo, this works:\n\n[postgres@golem parser]$ createdb -D PGDATA2 test2\nCREATE DATABASE\n\nBut this does not (but can be enabled at compile-time if the sysadmin\nwants to allow users to scatter databases everywhere (?!!)):\n\n[postgres@golem regress]$ createdb -D /opt/postgres/data2 test2\nERROR: The path '/opt/postgres/data2/test2' is invalid.\nThis may be due to a missing environment variable on the server.\ncreatedb: Database creation failed.\n\n> I'd vote for taking\n> out the auto-expansion, so that the correct invocation becomes\n> initlocation $PGDATA2\n> which is what an average user would expect.\n\nBut the average user does not necessarily have access to the PGDATA2\nenvironment variable! This is usually a sysadmin function.\n\n> Actually, after looking at what initlocation really does, which is\n> darn near nothing, I wonder whether we shouldn't dispense with it\n> altogether...\n\n?? 
I hope the explanation above helps...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 11 Jan 2000 15:57:09 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] createdb -D xxxx not working" }, { "msg_contents": "On 2000-01-11, Thomas Lockhart mentioned:\n\n> > >> Surely you want $PGDATA2 in the latter two lines?\n> > > You might recall me mentioning that the whole alternative location\n> > > business doesn't work in the first place.\n> \n> I'm not recalling the details here; what is the problem? It works for\n> me (but then I wrote it ;)\n> \n\nDarn, now I already rewrote the thing to where I considered it working. We\nhad several on-and-off discussions (mostly Tom and I) about it during the\nlast two months at least, including users complaining.\n\nThe sort of scheme I came up with scraps the environment variable stuff\n(since it's not quite safe either and has several bugs in the code that\ncreate a bunch of problems along the way) and lets the following\nhappen (example paths):\n\nCREATE DATABASE foo;\t--> /usr/local/pgsql/data/base/foo/pg_class\nCREATE DATABASE foo WITH LOCATION 'bar';\n\t\t\t--> /usr/local/pgsql/data/base/bar/pg_class\nCREATE DATABASE foo WITH LOCATION '/some/where';\n\t\t\t--> /some/where/pg_class\nCREATE DATABASE foo WITH LOCATION 'foo/bar';\n\t\t\tis disabled. I suppose I could stick the\nenvironment variable deal back in so that users don't have to remember any\ncomplicated paths, but this is the dbadmin's job anyway, so he should be\nable to remember it.\n\nAnyway, in order for a path to be allowed it has to be listed in\nPG_ALTLOC, which is a colon-delimited, marginally wildcard enabled list of\nallowed locations. In some future life this could be included in a\nconfiguration file.\n\n\n> initlocation is used to create the directory structure *with correct\n> permissions* for an alternate location. It takes an environment\n\nI think one of the problems was actually that the path initlocation\nassumed and the one createdb worked with were different.\n\n> variable because usually the backend should have that same environment\n> variable defined to ensure consistancy and as a security measure. The\n> environment variable can be expanded or not; initlocation does the\n> right thing in either case.\n\nIt's false security though, since for complete security the dbadmin would\nhave to delete all other environment variables altogether.\n\n> > I'd vote for taking\n> > out the auto-expansion, so that the correct invocation becomes\n> > initlocation $PGDATA2\n> > which is what an average user would expect.\n> \n> But the average user does not necessarily have access to the PGDATA2\n> environment variable! This is usually a sysadmin function.\n\nThe average user doesn't even have access to the machine the database is\nrunning on, so he can't do any initlocation either way. Then again the\naverage user doesn't create databases either. 
But there is no real point\nfor initlocation since a mere mkdir() call in createdb() will do the job.\n\n\nNow I just got done with that coding 10 minutes ago but now that someone\nactually spoke up in defence of this mechanism I'm going to wait and see\nwhat you think about the revised (or any) scheme.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 12 Jan 2000 04:30:25 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] createdb -D xxxx not working" }, { "msg_contents": "> Darn, now I already rewrote the thing to where I considered it working. We\n> had several on-and-off discussions (mostly Tom and I) about it during the\n> last two months at least, including users complaining.\n\nafaik I've been on the hackers list most of the time, and don't recall\nsignificant discussion.\n\n> The sort of scheme I came up with scraps the environment variable stuff\n> (since it's not quite safe either and has several bugs in the code that\n> create a bunch of problems along the way) and lets the following\n> happen (example paths):\n> \n> CREATE DATABASE foo; --> /usr/local/pgsql/data/base/foo/pg_class\n> CREATE DATABASE foo WITH LOCATION 'bar';\n> --> /usr/local/pgsql/data/base/bar/pg_class\n> CREATE DATABASE foo WITH LOCATION '/some/where';\n> --> /some/where/pg_class\n> CREATE DATABASE foo WITH LOCATION 'foo/bar';\n> is disabled. I suppose I could stick the\n> environment variable deal back in so that users don't have to remember any\n> complicated paths, but this is the dbadmin's job anyway, so he should be\n> able to remember it.\n\nHuh? The \"environment variable deal\" is intended to provide security\nand easier operation. The examples above which allow specifying\nabsolute paths (e.g. \"/some/where/pg_class\") are specifically\ndisallowed by default, for a reason. If there were bugs in that, let's\ntalk about it, but I'm *very* uncomfortable making wholesale changes\nwithout a discussion. And what exactly were \"several bugs in the\ncode\"?? Better be specific; that's my code we're talking about :/\n\nIt sounds like you and Tom Lane have had discussions on this, but I'm\nnot certain it was with a clear understanding of what the environment\nvariables provided. Before killing the capability, I'd like to make\nsure that we understand what it did.\n\notoh, you have a new proposal (the first I've heard of it afaik)...\n\n> Anyway, in order for a path to be allowed it has to be listed in\n> PG_ALTLOC, which is a colon-delimited, marginally wildcard enabled list of\n> allowed locations. In some future life this could be included in a\n> configuration file.\n\nThat sounds like a good capability. There is no reason (yet) why the\nenvironment variable mechanism can not use this too; e.g.\n\n setenv PG_ALTLOC PGDATA2:/home/peter/unsafe/unprotected/open/path\n\n(just had to put that last one in :)))\n\n> > initlocation is used to create the directory structure *with correct\n> > permissions* for an alternate location. It takes an environment\n> I think one of the problems was actually that the path initlocation\n> assumed and the one createdb worked with were different.\n\nI'm not sure to what you are referring. To the fact that there is an\nimplicit \"/base\" in the actual path? e.g. \n\ninitlocation PGDATA2\ncreatedb -D PGDATA2 test\n\ngives an actual path $PGDATA2/base/pg_class ? 
That's a\nsecurity/integrity benefit since:\n\n1) there is not likely to be a random environment variable which\nhappens to point to a valid directory which *also* has a subdirectory\ncalled \"base\".\n\n2) it reduces the chance that a user/dbadmin will not use initlocation\nto create the database area, hence reducing the chance that a\nuser/dbadmin has not set the permissions correctly.\n\n> > variable because usually the backend should have that same environment\n> > variable defined to ensure consistancy and as a security measure. The\n> > environment variable can be expanded or not; initlocation does the\n> > right thing in either case.\n> It's false security though, since for complete security the dbadmin would\n> have to delete all other environment variables altogether.\n\nTheoretically true (though low-risk per above discussion), but this\nwould be addressed by your PG_ALTLOC augmentation, which I like btw.\n\n> > > I'd vote for taking\n> > > out the auto-expansion, so that the correct invocation becomes\n> > > initlocation $PGDATA2\n> > > which is what an average user would expect.\n> > But the average user does not necessarily have access to the PGDATA2\n> > environment variable! This is usually a sysadmin function.\n> The average user doesn't even have access to the machine the database is\n> running on, so he can't do any initlocation either way. Then again the\n> average user doesn't create databases either. But there is no real point\n> for initlocation since a mere mkdir() call in createdb() will do the job.\n\nHmm. Until I see what you are actually doing, I can't comment on\nwhether this is indeed adequate. But decoupling the location creation\nfrom database creation give the dbadmin control over how permissions\nare set and where databases can be created. I'm not sure that is the\ncase if it is all done within the backend at database creation time.\n\n> Now I just got done with that coding 10 minutes ago but now that someone\n> actually spoke up in defence of this mechanism I'm going to wait and see\n> what you think about the revised (or any) scheme.\n\nPG_ALTLOC seems to be a great feature. But afaict the environment\nvariable feature is useful, safe (as much or more so than absolute\npaths, anyway), can coexist, and is a convenience. \n\nI'd like to continue the discussion until I'm convinced, or least\nbeaten into submission ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 12 Jan 2000 16:26:05 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] createdb -D xxxx not working" }, { "msg_contents": "On 2000-01-12, Thomas Lockhart mentioned:\n\n> Huh? The \"environment variable deal\" is intended to provide security\n> and easier operation. The examples above which allow specifying\n> absolute paths (e.g. \"/some/where/pg_class\") are specifically\n> disallowed by default, for a reason. If there were bugs in that, let's\n> talk about it, but I'm *very* uncomfortable making wholesale changes\n> without a discussion. And what exactly were \"several bugs in the\n> code\"?? Better be specific; that's my code we're talking about :/\n\nThe starting point for my investigation was actually the issue of\narbitrary characters in database names, which lead me to fix up the\ncreatedb and dropdb code to make it more error proof, which led me to all\nkinds of areas of the code that looked ages old, with comments not\nmatching the code, dead code, etc. 
The location issue was just one of\nthose. (Especially those /* this is work arround only !!! */ sections from\nmore than 5 years ago concern me a little ...) It's my idea of a learning\nexperience.\n\n\n> > > initlocation is used to create the directory structure *with correct\n> > > permissions* for an alternate location. It takes an environment\n\nIf I create a database with a normal name, the code makes a directory\n/usr/local/pgsql/data/base/testdb with the proper permissions. There's no\nreason why it wouldn't be able to do the same somewhere else. It's\nredundant.\n\n> 1) there is not likely to be a random environment variable which\n> happens to point to a valid directory which *also* has a subdirectory\n> called \"base\".\n\nForgive me, but \"not likely\" means zero security to me. If you want to\nmake it secure, be for real. Perhaps I'm a little envvar-phobic in\ngeneral. I have my reasons, but that shouldn't stand in others' ways.\n\n> 2) it reduces the chance that a user/dbadmin will not use initlocation\n> to create the database area, hence reducing the chance that a\n> user/dbadmin has not set the permissions correctly.\n\nThe prototype code I have lying around insists on creating the directory\nitself, so the chances of using an insecure directory are zero. But there\nare also other ways to ensure this besides adding another utility.\n\n\nOkay, now to the \"formal\" proposal:\n\n* A database name can contain any character (up to NAMEDATALEN)\n\n* By default, the database name is appended to DataDir/base/ to form the\nfile system location of the database.\n\n* There are certain characters that will not be allowed in database paths\nbecause they are potentially \"shell'ishly dangerous\". In my current layout\nthis includes everything from ASCII 1 to 31 as well as single-quote\n(') and dot (.). If you choose to name a database this way, you need to\noverride the path to something else.\n\n* If you override the path to a name not containing a slash, the name will\nin the same fashion be appended to DataDir/base/. Any future attempts to\nuse the same path will fail.\n\n* If you override the path to a name including a slash but not at the\nstart, the part up to the first slash will be interpreted as an\nenvironment variable. The database directory will be the immediate\ndirectory specified and will be created by createdb() (which must have\npermission to do so). If it already exists, it will attempt to fix the\npermissions.\n\n* If you specify an absolute path then it will use that very path and it\nwill create the directory as above.\n\n* Either way, in order to use a path outside DataDir, it must be listed in\nsome configuration option (such as PG_ALTLOC). Environment variable based\npaths can be included in the natural way, e.g., to allow a path\n'PGDATA2/foo' write PG_ALTLOC=$PGDATA2/foo; to allow anything under\nPGDATA2, write PG_ALTLOC=$PGDATA2/*.\n\n\nIn practice that would mean:\n\n1. If you don't use this feature, nothing changes.\n\n2. If you want to use this feature to \"recode\" funny characters, you\nmay. (e.g., CREATE DATABASE \"bbb'''...\" WITH LOCATION = 'bbb______';)\n\n3. If you want to stick one particular database somewhere else, create the\ndirectory (or at least the directory above it), include it in PG_ALTLOC\nand go ahead.\n\n4. If you want to provide users the option of storing databases at several\nalternative locations, set up mnemonic environment variables, create the\ndirectories corresponding to them, and put something like\nPGDATA2/*:PGDATA3/*:... 
in PG_ALTLOC.\n\nAre there other circumstances where this would be used?\n\n\nWhat could be discussed is the exact interpretation of a path such as\nLOCATION = '/mnt/somewhere/foo': whether 'base' should be\nappended to it, whether the name of the database should be appended to it,\nand how one could otherwise do the name \"recoding\" in that\ncircumstance.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Thu, 13 Jan 2000 00:29:25 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "proposal -- Re: [HACKERS] Re: [SQL] createdb -D xxxx not working" } ]
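To tie the two schemes together, here is a toy model of the expansion rule they share: a location containing a slash but not starting with one has its leading component expanded as a server-side environment variable, while absolute paths are refused unless explicitly enabled. This is only a sketch of the rule as described above; expand_dbpath() is an invented name, and no backend code is reproduced.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns a malloc'd path, or NULL if the spec is rejected. */
static char *
expand_dbpath(const char *spec, int allow_absolute)
{
	const char *slash = strchr(spec, '/');
	const char *base;
	char		var[64];
	char	   *result;

	if (spec[0] == '/')			/* absolute: disallowed by default */
		return allow_absolute ? strdup(spec) : NULL;
	if (slash == NULL)			/* plain name: caller prefixes DataDir/base/ */
		return strdup(spec);
	if ((size_t) (slash - spec) >= sizeof(var))
		return NULL;
	memcpy(var, spec, slash - spec);
	var[slash - spec] = '\0';
	if ((base = getenv(var)) == NULL)	/* the "missing environment
										 * variable on the server" case */
		return NULL;
	result = malloc(strlen(base) + strlen(slash) + 1);
	sprintf(result, "%s%s", base, slash);
	return result;
}

int
main(void)
{
	char	   *p = expand_dbpath("PGDATA2/test", 0);

	printf("%s\n", p ? p : "(rejected)");
	free(p);
	return 0;
}

Under either proposal the result would additionally be checked against the allowed list (PG_ALTLOC) before any directory is created.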
[ { "msg_contents": "\nHrmmm...if I'm reading this right, its more costly to create an index then\nto leave it as a sequential scan, but it returns more rows? Yet, it\nreturns, if I do the query with a count() around the return value, 288\nrows, not 334 or 1154...\n\nudmsearch=> explain select next_index_time from url where next_index_time <= 947317073;\nNOTICE: QUERY PLAN:\n\nSeq Scan on url (cost=43.00 rows=334 width=4)\n\nEXPLAIN\nudmsearch=> create index url_next_index_time on url using btree ( next_index_time);\nCREATE\nudmsearch=> explain select next_index_time from url where next_index_time <= 947317073;\nNOTICE: QUERY PLAN:\n\nIndex Scan using url_next_index_time on url (cost=271.68 rows=1154 width=4)\n\nEXPLAIN\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 8 Jan 2000 02:58:56 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Costs: Index vs Non-Index" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Hrmmm...if I'm reading this right, its more costly to create an index then\n> to leave it as a sequential scan, but it returns more rows? Yet, it\n> returns, if I do the query with a count() around the return value, 288\n> rows, not 334 or 1154...\n\nThis doesn't have anything to do with index vs sequential scan, but it\ndoes have to do with whether you've done a VACUUM ANALYZE lately.\nYou haven't ;-)\n\n> udmsearch=> explain select next_index_time from url where next_index_time <= 947317073;\n> NOTICE: QUERY PLAN:\n> Seq Scan on url (cost=43.00 rows=334 width=4)\n\nIIRC, rows=334 is the default estimate of result rows you will get for\nthis query in the absence of any information whatever. (Default table\nsize guess is 1000 rows, and default selectivity guess for <= is 1/3,\nso...) If you have not vacuumed, it's sheer coincidence that this is\neven within hailing distance of the correct figure of 288.\n\n> udmsearch=> create index url_next_index_time on url using btree ( next_index_time);\n> CREATE\n> udmsearch=> explain select next_index_time from url where next_index_time <= 947317073;\n> NOTICE: QUERY PLAN:\n> Index Scan using url_next_index_time on url (cost=271.68 rows=1154 width=4)\n\nI believe that a side-effect of CREATE INDEX is to update the\nnumber-of-pages-and-rows statistics in pg_class for the target table.\nSo after you do that, the optimizer has a correct idea of the table's\nsize, but still no more info about the selectivity of the WHERE clause.\n(I infer that your table has size 1154*3 rows.) If you now drop the\nindex and repeat EXPLAIN, it'll go back to a seq scan, but it will now\nsay 1154 rows --- and the cost estimate will be higher, too.\n\nIf you do VACUUM ANALYZE, then the optimizer will also know the min and\nmax values of next_index_time, and will have some shot at making a\ncorrect estimate of the output row count. 
I'd be interested to know\nwhat it predicts then...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 11:15:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Costs: Index vs Non-Index " }, { "msg_contents": "\nOkay, I had remembered to VACUUM, but I always forget to VACUUM ANALYZE :(\nresults come out much better now:\n\nudmsearch=> explain select next_index_time from url where next_index_time <= 947317073;\nNOTICE: QUERY PLAN:\n\nSeq Scan on url (cost=3368.58 rows=12623 width=4)\n\nEXPLAIN\nudmsearch=> select (next_index_time) from url where next_index_time <= 947317073; \nnext_index_time\n---------------\n(0 rows)\n\nudmsearch=> create index url_next_index_time on url using btree ( next_index_time);\nCREATE\nudmsearch=> explain select next_index_time from url where next_index_time <= 947317073;\nNOTICE: QUERY PLAN:\n\nIndex Scan using url_next_index_time on url (cost=1364.10 rows=12623 width=4)\n\nEXPLAIN\n\n\n\nOn Sat, 8 Jan 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Hrmmm...if I'm reading this right, its more costly to create an index then\n> > to leave it as a sequential scan, but it returns more rows? Yet, it\n> > returns, if I do the query with a count() around the return value, 288\n> > rows, not 334 or 1154...\n> \n> This doesn't have anything to do with index vs sequential scan, but it\n> does have to do with whether you've done a VACUUM ANALYZE lately.\n> You haven't ;-)\n> \n> > udmsearch=> explain select next_index_time from url where next_index_time <= 947317073;\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on url (cost=43.00 rows=334 width=4)\n> \n> IIRC, rows=334 is the default estimate of result rows you will get for\n> this query in the absence of any information whatever. (Default table\n> size guess is 1000 rows, and default selectivity guess for <= is 1/3,\n> so...) If you have not vacuumed, it's sheer coincidence that this is\n> even within hailing distance of the correct figure of 288.\n> \n> > udmsearch=> create index url_next_index_time on url using btree ( next_index_time);\n> > CREATE\n> > udmsearch=> explain select next_index_time from url where next_index_time <= 947317073;\n> > NOTICE: QUERY PLAN:\n> > Index Scan using url_next_index_time on url (cost=271.68 rows=1154 width=4)\n> \n> I believe that a side-effect of CREATE INDEX is to update the\n> number-of-pages-and-rows statistics in pg_class for the target table.\n> So after you do that, the optimizer has a correct idea of the table's\n> size, but still no more info about the selectivity of the WHERE clause.\n> (I infer that your table has size 1154*3 rows.) If you now drop the\n> index and repeat EXPLAIN, it'll go back to a seq scan, but it will now\n> say 1154 rows --- and the cost estimate will be higher, too.\n> \n> If you do VACUUM ANALYZE, then the optimizer will also know the min and\n> max values of next_index_time, and will have some shot at making a\n> correct estimate of the output row count. I'd be interested to know\n> what it predicts then...\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 8 Jan 2000 14:58:54 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Costs: Index vs Non-Index " } ]
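The arithmetic behind the rows=334 guess is just the defaults Tom quotes: a 1000-row table size times the 1/3 selectivity assumed for "<=" is about 334. The refresh-then-explain cycle used in this thread can be driven from a few lines of libpq; this is an editor's sketch only, reusing the database and query from the session above, and relying on the fact that in this era the plan arrives as a NOTICE, which libpq's default notice processor prints to stderr.

#include <stdio.h>
#include "libpq-fe.h"

/* run one statement; EXPLAIN's plan comes back as a NOTICE, which
 * the default notice processor already prints to stderr */
static void
run(PGconn *conn, const char *sql)
{
	PGresult   *res = PQexec(conn, sql);

	if (PQresultStatus(res) == PGRES_FATAL_ERROR)
		fprintf(stderr, "%s", PQerrorMessage(conn));
	PQclear(res);
}

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=udmsearch");

	if (PQstatus(conn) == CONNECTION_BAD)
		return 1;
	run(conn, "VACUUM ANALYZE url");
	run(conn, "EXPLAIN SELECT next_index_time FROM url "
			  "WHERE next_index_time <= 947317073");
	PQfinish(conn);
	return 0;
}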
[ { "msg_contents": "This problem was posted 12/24/1999 to GENERAL with no\nanswer...hoping one of you know the easy answer...\n\nI am seeing the following error during a DB rebuild. It is\noccuring during the execution of a PL/pgSQL procedure which is\ncalled from a trigger procedure on an AFTER INSERT trigger...\n\n ERROR: out of free buffers: time to abort !\n\nThe insert fails. This is under pgsql 6.5.2, redhat 6.1, built\nfrom tgz, running under \"postmaster -i -N 15 -o -F -S 4096\"...\n\nAny ideas?\n\n\nCheers,\nEd Loehr\n\n\n", "msg_date": "Sat, 08 Jan 2000 01:04:38 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: out of free buffers: time to abort !" }, { "msg_contents": "> I am seeing the following error during a DB rebuild. It is\n> occuring during the execution of a PL/pgSQL procedure which is\n> called from a trigger procedure on an AFTER INSERT trigger...\n>\n> ERROR: out of free buffers: time to abort !\n>\n> The insert fails. This is under pgsql 6.5.2, redhat 6.1, built\n> from tgz, running under \"postmaster -i -N 15 -o -F -S 4096\"...\n>\n> Any ideas?\n\nThis problem disappears when I up the number of shared mem buffers\nwith the -B flag from default of 64 to 256.\n\nCheers,\nEd Loehr\n\n", "msg_date": "Sat, 08 Jan 2000 01:14:52 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ERROR: out of free buffers: time to abort !" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n>> I am seeing the following error during a DB rebuild. It is\n>> occuring during the execution of a PL/pgSQL procedure which is\n>> called from a trigger procedure on an AFTER INSERT trigger...\n>> \n>> ERROR: out of free buffers: time to abort !\n>> \n>> The insert fails. This is under pgsql 6.5.2, redhat 6.1, built\n>> from tgz, running under \"postmaster -i -N 15 -o -F -S 4096\"...\n\n> This problem disappears when I up the number of shared mem buffers\n> with the -B flag from default of 64 to 256.\n\nThat's the message you get if all the disk buffers are marked as\n\"in use\" (ref count > 0) so that there is noplace to read in another\ndatabase page. I fixed several nasty buffer-ref-count-leakage bugs\na couple of months ago, so I think this problem may be gone in current\nsources. (I'd appreciate it if you'd try this test case as soon as\nwe are ready for 7.0 beta...)\n\nIn the meantime, upping the number of buffers will at least postpone the\nproblem. But I'm worried that it may not solve it completely --- you\nmay still find that the error occurs after you've been running long\nenough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 11:03:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: ERROR: out of free buffers: time to abort ! " }, { "msg_contents": "Tom Lane wrote:\n\n> >> I am seeing the following error during a DB rebuild.\n\n> >> ERROR: out of free buffers: time to abort !\n> >>\n> > This problem disappears when I up the number of shared mem buffers\n> > with the -B flag from default of 64 to 256.\n>\n> That's the message you get if all the disk buffers are marked as\n> \"in use\" (ref count > 0) so that there is noplace to read in another\n> database page. I fixed several nasty buffer-ref-count-leakage bugs\n> a couple of months ago, so I think this problem may be gone in current\n> sources. (I'd appreciate it if you'd try this test case as soon as\n> we are ready for 7.0 beta...)\n\nGreat. 
Thanks again, Tom.\n\n> In the meantime, upping the number of buffers will at least postpone the\n> problem. But I'm worried that it may not solve it completely --- you\n> may still find that the error occurs after you've been running long\n> enough.\n\nCan I postpone/workaround the problem by periodic server restarts to reset\nthe counts? Or is this a persistent thing across server restarts?\n\nCheers,\nEd Loehr\n\n", "msg_date": "Sat, 08 Jan 2000 12:29:40 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: ERROR: out of free buffers: time to abort !" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n>> In the meantime, upping the number of buffers will at least postpone the\n>> problem. But I'm worried that it may not solve it completely --- you\n>> may still find that the error occurs after you've been running long\n>> enough.\n\n> Can I postpone/workaround the problem by periodic server restarts to reset\n> the counts? Or is this a persistent thing across server restarts?\n\nYes, a postmaster restart would clean up the buffer reference counts.\nI think there were also some less drastic code paths that would clean\nthem up --- you might try something as simple as deliberately inducing\nan SQL error now and then, so that error cleanup runs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 15:07:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: ERROR: out of free buffers: time to abort ! " }, { "msg_contents": "Tom Lane wrote:\n\n> > Can I postpone/workaround the problem by periodic server restarts to reset\n> > the counts? Or is this a persistent thing across server restarts?\n>\n> Yes, a postmaster restart would clean up the buffer reference counts.\n> I think there were also some less drastic code paths that would clean\n> them up --- you might try something as simple as deliberately inducing\n> an SQL error now and then, so that error cleanup runs.\n\nWhat *kind* of SQL error would trigger the cleanup? I've certainly had\nnumerous SQL errors prior to this problem showing up (parse errors, misnamed\nattributes, ...), but that didn't apparently fix the problem system wide.\n\nAlso, are these buffer counts per backend or per postmaster? In other words,\ndoes a particular kind of SQL error need to occur on each backend?\n\nCheers,\nEd Loehr\n\n", "msg_date": "Sat, 08 Jan 2000 16:27:53 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: ERROR: out of free buffers: time to abort !" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n> Tom Lane wrote:\n>> Yes, a postmaster restart would clean up the buffer reference counts.\n>> I think there were also some less drastic code paths that would clean\n>> them up --- you might try something as simple as deliberately inducing\n>> an SQL error now and then, so that error cleanup runs.\n\n> What *kind* of SQL error would trigger the cleanup?\n\nActually, on looking at the code it doesn't seem that error recovery\nwill fix things --- nothing short of a postmaster restart will do it.\n\nInstead of hacking up your application code to work around this problem,\nwhy don't you try applying the following patch to the 6.5.3 sources.\nYou may get some \"Buffer Leak\" notice messages, but it ought to work\nbetter than it does now. (I think --- this is off-the-cuff and not\ntested ... 
but the complete changes that I put into current sources are\nmuch too large to risk back-patching.)\n\nKeep us posted.\n\n\t\t\tregards, tom lane\n\n*** src/backend/storage/buffer/bufmgr.c~\tSat Jan 8 17:44:58 2000\n--- src/backend/storage/buffer/bufmgr.c\tSat Jan 8 17:49:15 2000\n***************\n*** 1202,1213 ****\n \tfor (i = 1; i <= NBuffers; i++)\n \t{\n \t\tCommitInfoNeedsSave[i - 1] = 0;\n \t\tif (BufferIsValid(i))\n \t\t{\n \t\t\twhile (PrivateRefCount[i - 1] > 0)\n \t\t\t\tReleaseBuffer(i);\n \t\t}\n- \t\tLastRefCount[i - 1] = 0;\n \t}\n \n \tResetLocalBufferPool();\n--- 1202,1218 ----\n \tfor (i = 1; i <= NBuffers; i++)\n \t{\n \t\tCommitInfoNeedsSave[i - 1] = 0;\n+ \t\t/*\n+ \t\t * quick hack: any refcount still being held in LastRefCount\n+ \t\t * needs to be released.\n+ \t\t */\n+ \t\tPrivateRefCount[i - 1] += LastRefCount[i - 1];\n+ \t\tLastRefCount[i - 1] = 0;\n \t\tif (BufferIsValid(i))\n \t\t{\n \t\t\twhile (PrivateRefCount[i - 1] > 0)\n \t\t\t\tReleaseBuffer(i);\n \t\t}\n \t}\n \n \tResetLocalBufferPool();\n***************\n*** 1228,1233 ****\n--- 1233,1244 ----\n \n \tfor (i = 1; i <= NBuffers; i++)\n \t{\n+ \t\t/*\n+ \t\t * quick hack: any refcount still being held in LastRefCount\n+ \t\t * needs to be released.\n+ \t\t */\n+ \t\tPrivateRefCount[i - 1] += LastRefCount[i - 1];\n+ \t\tLastRefCount[i - 1] = 0;\n \t\tif (BufferIsValid(i))\n \t\t{\n \t\t\tBufferDesc *buf = &(BufferDescriptors[i - 1]);\n", "msg_date": "Sat, 08 Jan 2000 17:57:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: ERROR: out of free buffers: time to abort ! " }, { "msg_contents": "Tom Lane wrote:\n\n> Instead of hacking up your application code to work around this problem,\n> why don't you try applying the following patch to the 6.5.3 sources.\n\nI am running 6.5.2. Were there any other pertinent changes from 6.5.2 to 6.5.3\nthat would make you uncomfortable about applying that patch to 6.5.2?\n\nCheers,\nEd Loehr\n\n\n", "msg_date": "Sat, 08 Jan 2000 17:48:56 -0600", "msg_from": "Ed Loehr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: ERROR: out of free buffers: time to abort !" }, { "msg_contents": "Ed Loehr <[email protected]> writes:\n> Tom Lane wrote:\n>> Instead of hacking up your application code to work around this problem,\n>> why don't you try applying the following patch to the 6.5.3 sources.\n\n> I am running 6.5.2. Were there any other pertinent changes from 6.5.2 to 6.5.3\n> that would make you uncomfortable about applying that patch to 6.5.2?\n\nNo, but I would recommend trying it in a playpen installation, in any\ncase, not straight into production servers ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 21:46:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: ERROR: out of free buffers: time to abort ! " } ]
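What the back-patch above does can be seen in miniature: any reference count still parked in LastRefCount is folded back into PrivateRefCount before the release loop runs, so no pin can survive cleanup. The standalone toy below models just that bookkeeping; NBUFFERS and release_buffer() are stand-ins invented for the sketch, not server code.

#include <stdio.h>

#define NBUFFERS 8

static int	PrivateRefCount[NBUFFERS];
static int	LastRefCount[NBUFFERS];

static void
release_buffer(int i)
{
	PrivateRefCount[i]--;		/* the real code also updates shared state */
}

static void
reset_buffer_pool(void)
{
	int			i;

	for (i = 0; i < NBUFFERS; i++)
	{
		/* fold back any count hiding in LastRefCount -- the pins that
		 * used to leak -- then drop every remaining reference */
		PrivateRefCount[i] += LastRefCount[i];
		LastRefCount[i] = 0;
		while (PrivateRefCount[i] > 0)
			release_buffer(i);
	}
}

int
main(void)
{
	PrivateRefCount[3] = 2;
	LastRefCount[5] = 1;		/* the kind of pin the patch rescues */
	reset_buffer_pool();
	printf("buf 3: %d, buf 5: %d\n", PrivateRefCount[3], PrivateRefCount[5]);
	return 0;
}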
[ { "msg_contents": "\nQuery is:\n\nSELECT url.status,url2.url,url.url \n FROM url,url url2 \n WHERE url.referrer=url2.rec_id;\n\nThere is an index on rec_id and one on referrer ... shouldn't one of the\nbe used? Like, I can see it having to go through every url2.rec_id, but\nshouldn't the url.referrer= be abe to make use of an index? I thought\nabout changing the above to something like:\n\nexplain SELECT url.status,url2.url,url.url\n FROM url,url url2\n WHERE url.referrer IN ( SELECT rec_id FROM url );\n\nbut that didn't win me anything else :) \n\n ======\n\nudmsearch=> create index url_rec_id on url using btree ( rec_id );\nCREATE\nudmsearch=> create index url_referrer on url using btree ( referrer ); \nCREATE\nudmsearch=> explain SELECT url.status,url2.url,url.url FROM url,url url2 WHERE\nudmsearch-> url.referrer=url2.rec_id;\nNOTICE: QUERY PLAN:\n\nHash Join (cost=2045.81 rows=4544 width=36)\n -> Seq Scan on url (cost=863.95 rows=4544 width=20)\n -> Hash (cost=863.95 rows=4544 width=16)\n -> Seq Scan on url url2 (cost=863.95 rows=4544 width=16)\n\nEXPLAIN\nudmsearch=> \\d url\nTable = url\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| rec_id | int4 not null default nextval ( | 4 |\n| status | int4 not null default 0 | 4 |\n| url | varchar() not null | 128 |\n| content_type | varchar() not null default '' | 32 |\n| last_modified | varchar() not null default '' | 32 |\n| title | varchar() not null default '' | 128 |\n| txt | varchar() not null default '' | 255 |\n| docsize | int4 not null default 0 | 4 |\n| last_index_time | int4 not null | 4 |\n| next_index_time | int4 not null | 4 |\n| referrer | int4 not null default 0 | 4 |\n| tag | int4 not null default 0 | 4 |\n| hops | int4 not null default 0 | 4 |\n| keywords | varchar() not null default '' | 255 |\n| description | varchar() not null default '' | 100 |\n| crc | varchar() not null default '' | 33 |\n+----------------------------------+----------------------------------+-------+\nIndices: url_crc\n url_pkey\n url_rec_id\n url_referrer\n url_url\n\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 8 Jan 2000 03:17:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Another index \"buglet\"?" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> SELECT url.status,url2.url,url.url \n> FROM url,url url2 \n> WHERE url.referrer=url2.rec_id;\n\n> There is an index on rec_id and one on referrer ... shouldn't one of the\n> be used?\n\nNot necessarily --- hash join is a perfectly respectable alternative\nchoice. I'd expect to see either a hash or a merge join here (the\nmerge *would* use both indexes).\n\nNow it could be that the optimizer is misestimating the relative costs\nof merge and hash join. If you're interested in checking that, do\nthis (*after* running VACUUM ANALYZE, ahem):\n\n1. Start psql with environment variable PGOPTIONS=\"-fh\" (forbid hash).\n Do the EXPLAIN --- it'll probably give a mergejoin plan now. Note\n the estimated total cost. Run the query itself, and note the runtime.\n\n2. 
Start psql with environment variable PGOPTIONS=\"-fm\" (forbid merge),\n and repeat the experiment to get the estimated cost and actual time\n for the hash join.\n\nI'd be interested to know what you find out. I'm in the middle of\nrejiggering the optimizer's cost estimates right now, so more data\npoints would be helpful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 11:23:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another index \"buglet\"? " }, { "msg_contents": "\nAfter the VACUUM ANALYZE:\n\nStraight start up:\nHash Join (cost=9994.31 rows=2740488 width=36)\n -> Seq Scan on url (cost=3368.58 rows=37866 width=20)\n -> Hash (cost=3368.58 rows=37866 width=16)\n -> Seq Scan on url url2 (cost=3368.58 rows=37866 width=16)\n\n788u 0.327s 0:03.89 28.2% 104+14868k 0+179io 0pf+0w\n\n\nForbid merge:\nHash Join (cost=9994.31 rows=2740488 width=36)\n -> Seq Scan on url (cost=3368.58 rows=37866 width=20)\n -> Hash (cost=3368.58 rows=37866 width=16)\n -> Seq Scan on url url2 (cost=3368.58 rows=37866 width=16)\n\n0.900u 0.217s 0:04.19 26.4% 103+14638k 0+175io 0pf+0w\n\n\nForbid Hash:\nMerge Join (cost=11188.76 rows=2740488 width=36)\n -> Index Scan using url_pkey on url url2 (cost=4347.30 rows=37866 width=16)\n -> Index Scan using url_referrer on url (cost=4342.30 rows=37866 width=20)\n\n0.897u 0.210s 0:03.19 34.4% 106+15120k 0+179io 0pf+0w\n\nOn Sat, 8 Jan 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > SELECT url.status,url2.url,url.url \n> > FROM url,url url2 \n> > WHERE url.referrer=url2.rec_id;\n> \n> > There is an index on rec_id and one on referrer ... shouldn't one of the\n> > be used?\n> \n> Not necessarily --- hash join is a perfectly respectable alternative\n> choice. I'd expect to see either a hash or a merge join here (the\n> merge *would* use both indexes).\n> \n> Now it could be that the optimizer is misestimating the relative costs\n> of merge and hash join. If you're interested in checking that, do\n> this (*after* running VACUUM ANALYZE, ahem):\n> \n> 1. Start psql with environment variable PGOPTIONS=\"-fh\" (forbid hash).\n> Do the EXPLAIN --- it'll probably give a mergejoin plan now. Note\n> the estimated total cost. Run the query itself, and note the runtime.\n> \n> 2. Start psql with environment variable PGOPTIONS=\"-fm\" (forbid merge),\n> and repeat the experiment to get the estimated cost and actual time\n> for the hash join.\n> \n> I'd be interested to know what you find out. I'm in the middle of\n> rejiggering the optimizer's cost estimates right now, so more data\n> points would be helpful.\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 8 Jan 2000 15:05:12 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Another index \"buglet\"? 
" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Forbid merge:\n> Hash Join (cost=9994.31 rows=2740488 width=36)\n> -> Seq Scan on url (cost=3368.58 rows=37866 width=20)\n> -> Hash (cost=3368.58 rows=37866 width=16)\n> -> Seq Scan on url url2 (cost=3368.58 rows=37866 width=16)\n\n> 0.900u 0.217s 0:04.19 26.4% 103+14638k 0+175io 0pf+0w\n\n> Forbid Hash:\n> Merge Join (cost=11188.76 rows=2740488 width=36)\n> -> Index Scan using url_pkey on url url2 (cost=4347.30 rows=37866 width=16)\n> -> Index Scan using url_referrer on url (cost=4342.30 rows=37866 width=20)\n\n> 0.897u 0.210s 0:03.19 34.4% 106+15120k 0+179io 0pf+0w\n\nThanks, but I'm confused about what I'm looking at here. Are those\ntime outputs for the backend, or for psql?\n\nAlso, how large are these two tables, and how many rows do you actually\nget from the query?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 15:13:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another index \"buglet\"? " }, { "msg_contents": "On Sat, 8 Jan 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Forbid merge:\n> > Hash Join (cost=9994.31 rows=2740488 width=36)\n> > -> Seq Scan on url (cost=3368.58 rows=37866 width=20)\n> > -> Hash (cost=3368.58 rows=37866 width=16)\n> > -> Seq Scan on url url2 (cost=3368.58 rows=37866 width=16)\n> \n> > 0.900u 0.217s 0:04.19 26.4% 103+14638k 0+175io 0pf+0w\n> \n> > Forbid Hash:\n> > Merge Join (cost=11188.76 rows=2740488 width=36)\n> > -> Index Scan using url_pkey on url url2 (cost=4347.30 rows=37866 width=16)\n> > -> Index Scan using url_referrer on url (cost=4342.30 rows=37866 width=20)\n> \n> > 0.897u 0.210s 0:03.19 34.4% 106+15120k 0+179io 0pf+0w\n> \n> Thanks, but I'm confused about what I'm looking at here. Are those\n> time outputs for the backend, or for psql?\n\njust from psql ...\n\n> Also, how large are these two tables, and how many rows do you actually\n> get from the query?\n\npgsql> grep http query.out | wc -l\n 37825\n\nCan't give you a count on the tables though, since I've since had to\nrebuiild them :( Or, rather, the two tables are the same table...\n\n\n", "msg_date": "Sat, 8 Jan 2000 16:45:23 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Another index \"buglet\"? " } ]
[ { "msg_contents": "I just noticed that this patch added an attribute 'pertinent' to the _defines\nstructure. However, I cannot find a reference to this attribute anywhere\nelse. Since this happened before I'm afraid I removed some part of the patch\nby committing my own changes I'd like to know what this is supposed to do.\n\nAnd since I do not know who send the patch I just ask here.\n\nThanks anyway for this patch. \n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Sat, 8 Jan 2000 13:56:16 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "ECPG patch for exec sql ifdef etc." }, { "msg_contents": "> I just noticed that this patch added an attribute 'pertinent' to the _defines\n> structure. However, I cannot find a reference to this attribute anywhere\n> else. Since this happened before I'm afraid I removed some part of the patch\n> by committing my own changes I'd like to know what this is supposed to do.\n> \n> And since I do not know who send the patch I just ask here.\n> \n> Thanks anyway for this patch. \n\nOK, 1.19 version of type.h shows this, and the diff is attached too.\n\n---------------------------------------------------------------------------\n\nrevision 1.19\ndate: 1999/12/21 17:42:16; author: momjian; state: Exp; lines: +1 -0\nThe first fix is to allow an input file with a relative path and without\na \".pgc \" extension. The second patch fixes a coredump when there is\nmore than one input file (in that case, cur and types were not set to\nNULL before processing the second f ile)\n\nThe patch below modifies the accepted grammar of ecpg to accept\n\n FETCH [direction] [amount] cursor name\n\ni.e. the IN|FROM clause becomes optional (as in Oracle and Informix).\nThis removes the incompatibility mentioned in section \"Porting From\nOther RDBMS Packages\" p169, PostgreSQL Programmer's Guide. The grammar\nis modified in such a way as to avoid shift/reduce conflicts. It does\nnot accept the statement \"EXEC SQL FETCH;\" anymore, as the old grammar\ndid (this seems to be a bug of the old grammar anyway).\n\nThis patch cleans up the handling of space characters in the scanner;\nsome patte rns require \\n to be in {space}, some do not. A second fix is\nthe handling of cpp continuati on lines; the old pattern did not match\nthese. The parser is patched to fix an off-by-one error in the #line\ndirectives. The pa rser is also enhanced to report the correct location\nof errors in declarations in the \"E XEC SQL DECLARE SECTION\". Finally,\nsome right recursions in the parser were replaced by left-recursions.\n\n\nThis patch adds preprocessor directives to ecpg; in particular\n\nEXEC SQL IFDEF, EXEC SQL IFNDEF, EXEC SQL ELSE, EXEC SQL ELIF and EXEC SQL ENDIF\n\n\"EXEC SQL IFDEF\" is used with defines made with \"EXEC SQL DEFINE\" and\ndefines, specified on the command line with -D. Defines, specified on\nthe command line are persistent across multiple input files. Defines can\nbe nested up to a maximum level of 128 (see patch). There is a fair\namount of error checking to make sure directives are matched properly. I\nneed preprocessor directives for porting code, that is written for an\nInformix database, to a PostgreSQL database, while maintaining\ncompatibility with the original code. I decided not to extend the\nalready large ecpg grammar. 
Everything is done in the scanner by adding\nsome states, e.g. to skip all input except newlines and directives. The\npreprocessor commands are compatible with Informix. Oracle uses a cpp\nreplacement.\n\nRene Hogendoorn\n\n\n---------------------------------------------------------------------------\n\nand the diff shows:\n\n$ pgcvs diff -c -r 1.18 -r 1.19 type.h|less\nIndex: type.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/interfaces/ecpg/preproc/type.h,v\nretrieving revision 1.18\nretrieving revision 1.19\ndiff -c -r1.18 -r1.19\n*** type.h 1999/05/25 16:15:04 1.18\n--- type.h 1999/12/21 17:42:16 1.19\n***************\n*** 119,124 ****\n--- 119,125 ----\n {\n char *old;\n char *new;\n+ int pertinent;\n struct _defines *next;\n };\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 8 Jan 2000 11:10:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ECPG patch for exec sql ifdef etc." } ]
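A short embedded-SQL sketch of the directives the log above describes; the INFORMIX_MODE symbol and the messages are invented for illustration, and the build line shows one plausible use of the -D option the log mentions.

/* sketch.pgc -- preprocess with:  ecpg -D INFORMIX_MODE sketch.pgc */
#include <stdio.h>

EXEC SQL DEFINE LOCAL_BUILD;

int
main(void)
{
EXEC SQL IFDEF INFORMIX_MODE;
	/* kept only when the symbol was defined, here on the command line */
	printf("Informix-compatible build\n");
EXEC SQL ELSE;
	printf("plain PostgreSQL build\n");
EXEC SQL ENDIF;

EXEC SQL IFNDEF LOCAL_BUILD;
	printf("never compiled in: LOCAL_BUILD is defined above\n");
EXEC SQL ENDIF;
	return 0;
}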
[ { "msg_contents": "Get Paid to Surf the Web\nDid I tell you about AllAdvantage.com?\nhttp://www.alladvantage.com/go.asp?refid=bil601\n\nEvery day, AllAdvantage.com's revolutionary Viewbar? service generates cash\nfor our members. As of today, we're paying members in the US, UK, Canada,\nFrance, Germany, Australia, New Zealand, and various US Territories to surf\nthe Web with the Viewbar? service, and we'll keep adding more and more\ncountries. AllAdvantage.com has made getting paid to surf the Web an\ninternational phenomenon!\n\n\n Maybe I haven't told you yet, but I get paid to surf the Web.\nReally!\n\n I recently joined AllAdvantage.com, a new company that pays\nits members to surf the Web - and they've been paying out hundreds of\nthousands of dollars\nto members since July.\n\n What's the catch? There is no catch. Membership is totally\nfree and private. To earn money you agree to download a small message\nwindow -- called a\nViewbar - on your desktop. The Viewbar delivers information about products\nand\nservices available online.\n\n AllAdvantage.com is for real:\n\n Their Web site was the 12th most-visited property on the\nWeb in October. Last month, more than 30 AllAdvantage.com members earned\nwell over US$1,000 EACH and the top earner pulled in over US$4,400!\n The company has more than 3 million members worldwide,\nbut there are still 75 million active online users (in the US and Canada\nalone) who are\nstill waiting to hear about AllAdvantage.com and become members. Be sure\nyou're the\nfirst to tell them about it!\n\n The sooner you join, the sooner you'll get paid. Please use my\nreferral ID number (bil601), because I get paid when you sign up and surf.\nBe\nsure to tell all your riends who use the Internet -- the more referrals we\nget, the more\nmoney we can earn. You can sign up with AllAdvantage.com right away at\n http://www.alladvantage.com/go.asp?refid=bil601\n\n This is a really great deal with no strings attached!\n\n\n Member ID# BIL601\n\n\n\n\n\n\n\n\n", "msg_date": "Sat, 8 Jan 2000 16:17:03 +0100", "msg_from": "\"HEALTH&MONEY$!!!\" <[email protected]>", "msg_from_op": true, "msg_subject": "MAKE MONEY AT HOME" } ]
[ { "msg_contents": "\nDoes anyone have anything against me applying this to the current source\ntree? \n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n---------- Forwarded message ----------\nDate: Fri, 17 Dec 1999 13:51:50 -0800 (PST)\nFrom: Alfred Perlstein <[email protected]>\nTo: The Hermit Hacker <[email protected]>\nSubject: Re: pctrackd updates and such\n\nOn Fri, 17 Dec 1999, The Hermit Hacker wrote:\n\n> \n> Okay, first thing...can you redo these as context diffs? We generally\n> refuse *any* patches that aren't context...\n\nsure.\n\n> \n> Second, are these against a reasonably current snapshot of PostgreSQL\n> (aka. the upcoming v7), or v6.5.3 release? If v6.5.3, we're gonna need to\n> get these v7.x ready before we can commit them...\n\nthey are against a checked out cvs copy as of a couple days ago,\nand should apply cleanly to what's in the current repo.\n\n> Once both of the above conditions are in place, and after I get back from\n> BC, I'll work on getting these into the v7.0 release...or, at least,\n> talked/commented about if there are any objections...\n> \n> I'm outta here for 10 days...Happy Holidays and talk with ya when I get\n> back...\n\nok, cool see you soon. :)\n\n-Alfred\n\ndon't forget the problem with sending queries that may occur:\n\ni'm not sure if handlesendfailure() can cope with only sending\na 'Q' to the backend, we may have to work out reservations or\nsomething for space, another idea would be to implement a \npqWritev() of some sort that would take an array of pointers\nand lengths to send to the backend and only allow any data to\ngo into the backend if the entire string can fit.\n\nthen again, handlesendfailure may work, but doing reservations\nfor the send buffer seems cleaner...\n\ndiff's contexted against pgsql-'current':\n\n\nIndex: fe-connect.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.108\ndiff -u -c -r1.108 fe-connect.c\ncvs diff: conflicting specifications of output style\n*** fe-connect.c\t1999/12/02 00:26:15\t1.108\n--- fe-connect.c\t1999/12/14 09:42:24\n***************\n*** 595,625 ****\n \treturn 0;\n }\n \n- \n- /* ----------\n- * connectMakeNonblocking -\n- * Make a connection non-blocking.\n- * Returns 1 if successful, 0 if not.\n- * ----------\n- */\n- static int\n- connectMakeNonblocking(PGconn *conn)\n- {\n- #ifndef WIN32\n- \tif (fcntl(conn->sock, F_SETFL, O_NONBLOCK) < 0)\n- #else\n- \tif (ioctlsocket(conn->sock, FIONBIO, &on) != 0)\n- #endif\n- \t{\n- \t\tprintfPQExpBuffer(&conn->errorMessage,\n- \t\t\t\t\t\t \"connectMakeNonblocking -- fcntl() failed: errno=%d\\n%s\\n\",\n- \t\t\t\t\t\t errno, strerror(errno));\n- \t\treturn 0;\n- \t}\n- \n- \treturn 1;\n- }\n- \n /* ----------\n * connectNoDelay -\n * Sets the TCP_NODELAY socket option.\n--- 595,600 ----\n***************\n*** 792,798 ****\n \t * Ewan Mellor <[email protected]>.\n \t * ---------- */\n #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n! \tif (!connectMakeNonblocking(conn))\n \t\tgoto connect_errReturn;\n #endif\t\n \n--- 767,773 ----\n \t * Ewan Mellor <[email protected]>.\n \t * ---------- */\n #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n! 
\tif (PQsetnonblocking(conn, TRUE) != 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n***************\n*** 904,910 ****\n \t/* This makes the connection non-blocking, for all those cases which forced us\n \t not to do it above. */\n #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n! \tif (!connectMakeNonblocking(conn))\n \t\tgoto connect_errReturn;\n #endif\t\n \n--- 879,885 ----\n \t/* This makes the connection non-blocking, for all those cases which forced us\n \t not to do it above. */\n #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n! \tif (PQsetnonblocking(conn, TRUE) != 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n***************\n*** 1702,1707 ****\n--- 1677,1683 ----\n \tconn->inBuffer = (char *) malloc(conn->inBufSize);\n \tconn->outBufSize = 8 * 1024;\n \tconn->outBuffer = (char *) malloc(conn->outBufSize);\n+ \tconn->nonblocking = FALSE;\n \tinitPQExpBuffer(&conn->errorMessage);\n \tinitPQExpBuffer(&conn->workBuffer);\n \tif (conn->inBuffer == NULL ||\n***************\n*** 1811,1816 ****\n--- 1787,1793 ----\n \tconn->lobjfuncs = NULL;\n \tconn->inStart = conn->inCursor = conn->inEnd = 0;\n \tconn->outCount = 0;\n+ \tconn->nonblocking = FALSE;\n \n }\n \nIndex: fe-exec.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.86\ndiff -u -c -r1.86 fe-exec.c\ncvs diff: conflicting specifications of output style\n*** fe-exec.c\t1999/11/11 00:10:14\t1.86\n--- fe-exec.c\t1999/12/14 05:55:11\n***************\n*** 13,18 ****\n--- 13,19 ----\n */\n #include <errno.h>\n #include <ctype.h>\n+ #include <fcntl.h>\n \n #include \"postgres.h\"\n #include \"libpq-fe.h\"\n***************\n*** 24,30 ****\n #include <unistd.h>\n #endif\n \n- \n /* keep this in same order as ExecStatusType in libpq-fe.h */\n const char *const pgresStatus[] = {\n \t\"PGRES_EMPTY_QUERY\",\n--- 25,30 ----\n***************\n*** 574,580 ****\n--- 574,588 ----\n \t * we will NOT block waiting for more input.\n \t */\n \tif (pqReadData(conn) < 0)\n+ \t{\n+ \t\t/*\n+ \t\t * try to flush the send-queue otherwise we may never get a \n+ \t\t * resonce for something that may not have already been sent\n+ \t\t * because it's in our write buffer!\n+ \t\t */\n+ \t\tpqFlush(conn);\n \t\treturn 0;\n+ \t}\n \t/* Parsing of the data waits till later. */\n \treturn 1;\n }\n***************\n*** 1088,1095 ****\n--- 1096,1112 ----\n {\n \tPGresult *result;\n \tPGresult *lastResult;\n+ \tbool\tsavedblocking;\n \n \t/*\n+ \t * we assume anyone calling PQexec wants blocking behaviour,\n+ \t * we force the blocking status of the connection to blocking\n+ \t * for the duration of this function and restore it on return\n+ \t */\n+ \tsavedblocking = PQisnonblocking(conn);\n+ \tPQsetnonblocking(conn, FALSE);\n+ \n+ \t/*\n \t * Silently discard any prior query result that application didn't\n \t * eat. This is probably poor design, but it's here for backward\n \t * compatibility.\n***************\n*** 1102,1115 ****\n \t\t\tPQclear(result);\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n! \t\t\treturn NULL;\n \t\t}\n \t\tPQclear(result);\n \t}\n \n \t/* OK to send the message */\n \tif (!PQsendQuery(conn, query))\n! 
\t\treturn NULL;\n \n \t/*\n \t * For backwards compatibility, return the last result if there are\n--- 1119,1133 ----\n \t\t\tPQclear(result);\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n! \t\t\t/* restore blocking status */\n! \t\t\tgoto errout;\n \t\t}\n \t\tPQclear(result);\n \t}\n \n \t/* OK to send the message */\n \tif (!PQsendQuery(conn, query))\n! \t\tgoto errout;\n \n \t/*\n \t * For backwards compatibility, return the last result if there are\n***************\n*** 1142,1148 ****\n--- 1160,1172 ----\n \t\t\tresult->resultStatus == PGRES_COPY_OUT)\n \t\t\tbreak;\n \t}\n+ \n+ \tPQsetnonblocking(conn, savedblocking);\n \treturn lastResult;\n+ \n+ errout:\n+ \tPQsetnonblocking(conn, savedblocking);\n+ \treturn NULL;\n }\n \n \n***************\n*** 1431,1438 ****\n \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n \t\treturn 1;\n \t}\n \n! \t(void) pqFlush(conn);\t\t/* make sure no data is waiting to be sent */\n \n \t/* Return to active duty */\n \tconn->asyncStatus = PGASYNC_BUSY;\n--- 1455,1468 ----\n \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n \t\treturn 1;\n \t}\n+ \n+ \t/* make sure no data is waiting to be sent */\n+ \tif (pqFlush(conn))\n+ \t\treturn (1);\n \n! \t/* non blocking connections may have to abort at this point. */\n! \tif (PQisnonblocking(conn) && PQisBusy(conn))\n! \t\treturn (1);\n \n \t/* Return to active duty */\n \tconn->asyncStatus = PGASYNC_BUSY;\n***************\n*** 2025,2028 ****\n--- 2055,2126 ----\n \t\treturn 1;\n \telse\n \t\treturn 0;\n+ }\n+ \n+ /* PQsetnonblocking:\n+ \t sets the PGconn's database connection non-blocking if the arg is TRUE\n+ \t or makes it non-blocking if the arg is FALSE, this will not protect\n+ \t you from PQexec(), you'll only be safe when using the non-blocking\n+ \t API\n+ \t Needs to be called only on a connected database connection.\n+ */\n+ \n+ int\n+ PQsetnonblocking(PGconn *conn, int arg)\n+ {\n+ \tint\tfcntlarg;\n+ \n+ \targ = (arg == TRUE) ? 1 : 0;\n+ \tif (arg == conn->nonblocking)\n+ \t\treturn (0);\n+ \n+ #ifdef USE_SSL\n+ \tif (conn->ssl)\n+ \t{\n+ \t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\"PQsetnonblocking() -- not supported when using SSL\\n\");\n+ \t\treturn (-1);\n+ \t}\n+ #endif /* USE_SSL */\n+ \n+ #ifndef WIN32\n+ \tfcntlarg = fcntl(conn->sock, F_GETFL, 0);\n+ \tif (fcntlarg == -1)\n+ \t\treturn (-1);\n+ \n+ \tif ((arg == TRUE && \n+ \t\tfcntl(conn->sock, F_SETFL, fcntlarg | O_NONBLOCK) == -1) ||\n+ \t\t(arg == FALSE &&\n+ \t\tfcntl(conn->sock, F_SETFL, fcntlarg & ~O_NONBLOCK) == -1)) \n+ #else\n+ \tfcntlarg = arg;\n+ \tif (ioctlsocket(conn->sock, FIONBIO, &fcntlarg) != 0)\n+ #endif\n+ \t{\n+ \t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\"PQsetblocking() -- unable to set nonblocking status to %s\\n\",\n+ \t\t\targ == TRUE ? 
\"TRUE\" : \"FALSE\");\n+ \t\treturn (-1);\n+ \t}\n+ \n+ \tconn->nonblocking = arg;\n+ \treturn (0);\n+ }\n+ \n+ /* return the blocking status of the database connection, TRUE == nonblocking,\n+ \t FALSE == blocking\n+ */\n+ int\n+ PQisnonblocking(PGconn *conn)\n+ {\n+ \n+ \treturn (conn->nonblocking);\n+ }\n+ \n+ /* try to force data out, really only useful for non-blocking users */\n+ int\n+ PQflush(PGconn *conn)\n+ {\n+ \n+ \treturn (pqFlush(conn));\n }\nIndex: fe-misc.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-misc.c,v\nretrieving revision 1.33\ndiff -u -c -r1.33 fe-misc.c\ncvs diff: conflicting specifications of output style\n*** fe-misc.c\t1999/11/30 03:08:19\t1.33\n--- fe-misc.c\t1999/12/14 08:21:09\n***************\n*** 86,91 ****\n--- 86,119 ----\n {\n \tsize_t avail = Max(conn->outBufSize - conn->outCount, 0);\n \n+ \t/*\n+ \t * if we are non-blocking and the send queue is too full to buffer this\n+ \t * request then try to flush some and return an error \n+ \t */\n+ \tif (PQisnonblocking(conn) && nbytes > avail && pqFlush(conn))\n+ \t{\n+ \t\t/* \n+ \t\t * even if the flush failed we may still have written some\n+ \t\t * data, recalculate the size of the send-queue relative\n+ \t\t * to the amount we have to send, we may be able to queue it\n+ \t\t * afterall even though it's not sent to the database it's\n+ \t\t * ok, any routines that check the data coming from the\n+ \t\t * database better call pqFlush() anyway.\n+ \t\t */\n+ \t\tif (nbytes > Max(conn->outBufSize - conn->outCount, 0))\n+ \t\t{\n+ \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\t\"pqPutBytes -- pqFlush couldn't flush enough\"\n+ \t\t\t\t\" data: space available: %d, space needed %d\\n\",\n+ \t\t\t\tMax(conn->outBufSize - conn->outCount, 0), nbytes);\n+ \t\t\treturn EOF;\n+ \t\t}\n+ \t}\n+ \n+ \t/* \n+ \t * the non-blocking code above makes sure that this isn't true,\n+ \t * essentially this is no-op\n+ \t */\n \twhile (nbytes > avail)\n \t{\n \t\tmemcpy(conn->outBuffer + conn->outCount, s, avail);\n***************\n*** 548,553 ****\n--- 576,589 ----\n \t\treturn EOF;\n \t}\n \n+ \t/* \n+ \t * don't try to send zero data, allows us to use this function\n+ \t * without too much worry about overhead\n+ \t */\n+ \tif (len == 0)\n+ \t\treturn (0);\n+ \n+ \t/* while there's still data to send */\n \twhile (len > 0)\n \t{\n \t\t/* Prevent being SIGPIPEd if backend has closed the connection. */\n***************\n*** 556,561 ****\n--- 592,598 ----\n #endif\n \n \t\tint sent;\n+ \n #ifdef USE_SSL\n \t\tif (conn->ssl) \n \t\t sent = SSL_write(conn->ssl, ptr, len);\n***************\n*** 585,590 ****\n--- 622,629 ----\n \t\t\t\tcase EWOULDBLOCK:\n \t\t\t\t\tbreak;\n #endif\n+ \t\t\t\tcase EINTR:\n+ \t\t\t\t\tcontinue;\n \n \t\t\t\tcase EPIPE:\n #ifdef ECONNRESET\n***************\n*** 616,628 ****\n \t\t\tptr += sent;\n \t\t\tlen -= sent;\n \t\t}\n \t\tif (len > 0)\n \t\t{\n \t\t\t/* We didn't send it all, wait till we can send more */\n \n- \t\t\t/* At first glance this looks as though it should block. I think\n- \t\t\t * that it will be OK though, as long as the socket is\n- \t\t\t * non-blocking. 
*/\n \t\t\tif (pqWait(FALSE, TRUE, conn))\n \t\t\t\treturn EOF;\n \t\t}\n--- 655,685 ----\n \t\t\tptr += sent;\n \t\t\tlen -= sent;\n \t\t}\n+ \n \t\tif (len > 0)\n \t\t{\n \t\t\t/* We didn't send it all, wait till we can send more */\n+ \n+ \t\t\t/* \n+ \t\t\t * if the socket is in non-blocking mode we may need\n+ \t\t\t * to abort here \n+ \t\t\t */\n+ #ifdef USE_SSL\n+ \t\t\t/* can't do anything for our SSL users yet */\n+ \t\t\tif (conn->ssl == NULL)\n+ \t\t\t{\n+ #endif\n+ \t\t\t\tif (PQisnonblocking(conn))\n+ \t\t\t\t{\n+ \t\t\t\t\t/* shift the contents of the buffer */\n+ \t\t\t\t\tmemmove(conn->outBuffer, ptr, len);\n+ \t\t\t\t\tconn->outCount = len;\n+ \t\t\t\t\treturn EOF;\n+ \t\t\t\t}\n+ #ifdef USE_SSL\n+ \t\t\t}\n+ #endif\n \n \t\t\tif (pqWait(FALSE, TRUE, conn))\n \t\t\t\treturn EOF;\n \t\t}\nIndex: libpq-fe.h\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-fe.h,v\nretrieving revision 1.53\ndiff -u -c -r1.53 libpq-fe.h\ncvs diff: conflicting specifications of output style\n*** libpq-fe.h\t1999/11/30 03:08:19\t1.53\n--- libpq-fe.h\t1999/12/14 01:30:01\n***************\n*** 269,274 ****\n--- 269,281 ----\n \textern int\tPQputnbytes(PGconn *conn, const char *buffer, int nbytes);\n \textern int\tPQendcopy(PGconn *conn);\n \n+ \t/* Set blocking/nonblocking connection to the backend */\n+ \textern int\tPQsetnonblocking(PGconn *conn, int arg);\n+ \textern int\tPQisnonblocking(PGconn *conn);\n+ \n+ \t/* Force the write buffer to be written (or at least try) */\n+ \textern int\tPQflush(PGconn *conn);\n+ \n \t/*\n \t * \"Fast path\" interface --- not really recommended for application\n \t * use\nIndex: libpq-int.h\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-int.h,v\nretrieving revision 1.14\ndiff -u -c -r1.14 libpq-int.h\ncvs diff: conflicting specifications of output style\n*** libpq-int.h\t1999/11/30 03:08:19\t1.14\n--- libpq-int.h\t1999/12/14 01:30:01\n***************\n*** 215,220 ****\n--- 215,223 ----\n \tint\t\t\tinEnd;\t\t\t/* offset to first position after avail\n \t\t\t\t\t\t\t\t * data */\n \n+ \tint\t\t\tnonblocking;\t/* whether this connection is using a blocking\n+ \t\t\t\t\t\t\t\t * socket to the backend or not */\n+ \n \t/* Buffer for data not yet sent to backend */\n \tchar\t *outBuffer;\t\t/* currently allocated buffer */\n \tint\t\t\toutBufSize;\t\t/* allocated size of buffer */\n\n\n\n\n", "msg_date": "Sat, 8 Jan 2000 17:15:57 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "LIBPQ patches ..." }, { "msg_contents": "Looks fine. I have talked to someone about doing no-blocking\nconnections in the past. Maybe this the same person.\n\nI will let someone else comment on whether the protocol changes are\ncorrect.\n\n\n> \n> Does anyone have anything against me applying this to the current source\n> tree? \n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> ---------- Forwarded message ----------\n> Date: Fri, 17 Dec 1999 13:51:50 -0800 (PST)\n> From: Alfred Perlstein <[email protected]>\n> To: The Hermit Hacker <[email protected]>\n> Subject: Re: pctrackd updates and such\n> \n> On Fri, 17 Dec 1999, The Hermit Hacker wrote:\n> \n> > \n> > Okay, first thing...can you redo these as context diffs? 
We generally\n
> > refuse *any* patches that aren't context...\n
> \n
> sure.\n
> \n
> > \n
> > Second, are these against a reasonably current snapshot of PostgreSQL\n
> > (aka. the upcoming v7), or v6.5.3 release? If v6.5.3, we're gonna need to\n
> > get these v7.x ready before we can commit them...\n
> \n
> they are against a checked out cvs copy as of a couple days ago,\n
> and should apply cleanly to what's in the current repo.\n
> \n
> > Once both of the above conditions are in place, and after I get back from\n
> > BC, I'll work on getting these into the v7.0 release...or, at least,\n
> > talked/commented about if there are any objections...\n
> > \n
> > I'm outta here for 10 days...Happy Holidays and talk with ya when I get\n
> > back...\n
> \n
> ok, cool see you soon. :)\n
> \n
> -Alfred\n
\n
-- \n
 Bruce Momjian | http://www.op.net/~candle\n
 [email protected] | (610) 853-3000\n
 + If your life is a hard drive, | 830 Blythe Avenue\n
 + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n
", "msg_date": "Sat, 8 Jan 2000 16:27:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIBPQ patches ..." }, { "msg_contents": "On Sat, 8 Jan 2000, Bruce Momjian wrote:\n
\n
> Looks fine. I have talked to someone about doing no-blocking\n
> connections in the past. 
Maybe this the same person.\n
> \n
> I will let someone else comment on whether the protocol changes are\n
> correct.\n
\n
Okay, if I haven't heard anything major by Sunday, I'm going to include\n
these, which still gives us a month before beta (well, not quite, but\n
close) and then the beta period in order to clean it up...\n
\n
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n
Systems Administrator @ hub.org \n
primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n
\n
", "msg_date": "Sat, 8 Jan 2000 17:53:39 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] LIBPQ patches ..." 
}, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Does anyone have anything against me applying this to the current source\n> tree? \n\nI'm not particularly comfortable with it --- it looks like the semantics\nneed more careful thought, particularly concerning when the output buffer\ngets flushed and what happens if we can't send data right away. The\ninsertion of a pqFlush into PQconsumeInput, in particular, looks like\nan ill-thought-out hack that could break some applications.\n\nI also object strongly to the lack of documentation. Patches that\nchange public APIs and come without doco updates should be rejected\nout of hand, IMNSHO. Keeping the documentation up to date should\nnot be considered optional --- especially not when you're talking\nabout something that makes subtle and pervasive changes to library\nbehavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 17:27:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIBPQ patches ... " }, { "msg_contents": "On Sat, 8 Jan 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Does anyone have anything against me applying this to the current source\n> > tree? \n> \n> I'm not particularly comfortable with it --- it looks like the semantics\n> need more careful thought, particularly concerning when the output buffer\n> gets flushed and what happens if we can't send data right away. The\n> insertion of a pqFlush into PQconsumeInput, in particular, looks like\n> an ill-thought-out hack that could break some applications.\n\nWell, at least we have more discussion on this then the previous two posts\nabout it, so it should give something for Alfred to address :) Is there\nanyone workign with libpq that could comment on possible a better way of\nit being implemented? \n\n> I also object strongly to the lack of documentation. Patches that\n> change public APIs and come without doco updates should be rejected\n> out of hand, IMNSHO. Keeping the documentation up to date should\n> not be considered optional --- especially not when you're talking\n> about something that makes subtle and pervasive changes to library\n> behavior.\n\nAgreed here...Alfred and I talked about that on the phone tonight...I\nposted the patches tonight so that he could get some feedback on them...if\nwe could figure out what needs to be fixed/improved, and he has an\nindication that he's working in the right direction, then documentation is\nforthcoming...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 8 Jan 2000 18:38:16 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] LIBPQ patches ... " }, { "msg_contents": "* Tom Lane <[email protected]> [000108 14:56] wrote:\n> The Hermit Hacker <[email protected]> writes:\n> > Does anyone have anything against me applying this to the current source\n> > tree? \n> \n> I'm not particularly comfortable with it --- it looks like the semantics\n> need more careful thought, particularly concerning when the output buffer\n> gets flushed and what happens if we can't send data right away. \n\nCould you be more specific? 
My patches address the fact that\n
although there is work to make libpq non-blocking, you can easily\n
block while sending large queries, especially because of the select()\n
that is done in pqFlush().\n
\n
The problem is that libpq doesn't reserve space in the send buffer\n
and will just block if waiting for the socket to the backend to\n
drain. This needs to be fixed if libpq is truly going to offer\n
non-blocking behavior.\n
\n
Unless you reserve the space in the buffer you have to block;\n
otherwise, if you abort (so as not to block), then libpq may have\n
sent a partial query down the pipe or just buffered part of some\n
data you've sent to the backend. At this point you will be out of sync\n
with the backend.\n
\n
If you are in 'normal mode' (blocking) then the behavior shouldn't\n
be any different; if you are non-blocking, then if you attempt to\n
send data and it's not possible you'll get an error without\n
potentially sending a partial line to the backend.\n
\n
> The\n
> insertion of a pqFlush into PQconsumeInput, in particular, looks like\n
> an ill-thought-out hack that could break some applications.\n
\n
I think I agree; the code I was using would attempt a PQconsumeInput()\n
before doing a PQendcopy(), and there could be data in the send buffer\n
that would make PQconsumeInput() never succeed, hence the need for a\n
flush.\n
\n
I'm going to try it without the PQconsumeInput() before the PQendcopy();\n
my modifications for PQendcopy() should make it non-blocking safe.\n
But in the meanwhile here's my (probably wrong) reasoning behind this\n
'hack': \n
\n
\tNo, IMHO it's needed; the problem is that there may be data\n
\tin the send buffer that hasn't been sent yet, and it could be\n
\tpart of a request to the backend that you are explicitly\n
\twaiting for a result from.\n
\n
\tThis can happen when doing a COPY into the database.\n
\n
\tWhat happens is that you send data, then when you send the\n
\t'end copy' it can get buffered, then you loop forever\n
\tattempting to read a result for a query that was never\n
\tsent.\n
\n
\tIn regards to it breaking applications, the send buffer\n
\tshould be opaque to the libpq user; libpq never has offered\n
\ta truly non-blocking api, and even when using non-blocking\n
\tthe flush will fail if it can't be done and PQconsumeInput()\n
\twill error out accordingly.\n
\n
\tOld applications can be snagged by the flush since in theory\n
\tPQconsumeInput shouldn't block; however, I'm not sure if\n
\tthere's a real workaround for this except\n
\n
\t1.. saving the blocking status of the connection, \n
\t2.. setting it non-blocking and attempting a flush, and then\n
\t3.. restoring the blocking status.\n
\n
\tIt seems that old applications can break (looping on an\n
\tunsent line) regardless because of the not-flushed-query\n
\tproblem.\n
\n
If you can figure an occasion where this might actually happen\n
(with the exception of my accidental abuse of libpq) then it\n
may need to be revisited.\n
\n
I'll get back to you guys on the PQendcopy before PQconsumeInput\n
tests.\n
\n
> I also object strongly to the lack of documentation. Patches that\n
> change public APIs and come without doco updates should be rejected\n
> out of hand, IMNSHO. 
Keeping the documentation up to date should\n> not be considered optional --- especially not when you're talking\n> about something that makes subtle and pervasive changes to library\n> behavior.\n\nI agree with you about the documentation issues, I will try to add\nsome documentation to the patches.\n\nI think I can also take out the visibility of the PQflush() function\nas normal applications really shouldn't need it.\n\nHow do you feel about the explicit PQsetnonblocking and PQisnonblocking\nfunctions that I added as well as the additional field 'nonblocking' \nadded to PGconn? IMO the user shouldn't set the socket non-blocking\nwithout informing the library about it otherwise it gets really ugly\nbecause we have to constantly poll the socket's flags to make sure we\nDTRT.\n\nI also apologize for my not indented patches, is there a way to indent\naccording to postgresql standards on a FreeBSD system? The patches\nfor pgindent are a bit out of date and I get floating point exceptions\nwhen attempting to pgindent.\n\nThanks for the feedback.\n-Alfred\n", "msg_date": "Sat, 8 Jan 2000 16:01:48 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIBPQ patches ..." }, { "msg_contents": "* Alfred Perlstein <[email protected]> [000108 16:08] wrote:\n> * Tom Lane <[email protected]> [000108 14:56] wrote:\n> \n> > The\n> > insertion of a pqFlush into PQconsumeInput, in particular, looks like\n> > an ill-thought-out hack that could break some applications.\n> \n> I think I agree, the code I was using would attempt an PQconsumeInput()\n> before doing a PQendcopy(), there could be data in the send buffer\n> that would make PQconsumeInput() never succeed hence the need for a\n> flush.\n> \n> I'm going to try it without the PQconsumeInput() before the PQendcopy()\n> my modifications for PQendcopy() should make it non-blocking safe.\n> but in the meanwhile here's my (probably wrong) reasoning behind this\n> 'hack': \n> \n> \tNo, IMHO it's needed, the problem is that there may be data\n> \tin the send buffer that hasn't been sent yet, it could be\n> \tpart of a request to the backend that you are explicitly\n> \twaiting for a result from.\n> \n> \tThis can happen when doing a COPY into the database.\n> \n> \tWhat happens is that you send data, then when you send the\n> \t'end copy' it can get buffered, then you loop forever\n> \tattempting to read a result for a query that was never\n> \tsent.\n> \n> \tIn regards to it breaking applications, the send buffer\n> \tshould be opaque to the libpq user, libpq never has offered\n> \ta truly non-blocking api, and even when using non-blocking\n> \tthe flush will fail if it can't be done and PQconsumeInput()\n> \twill error out accordingly.\n> \n> \tOld applications can be snagged by the Flush since in theory\n> \tPQconsumeInput shouldn't block, however I'm not sure if\n> \tthere's a real workaround for this except\n> \n> \t1.. saving the blocking status of the connection, \n> \t2.. setting it non-blocking and attempting a flush and then\n> \t3.. 
\n\nThis 'hack' would be to allow the last flush to fail and always call\npqFlush in PQconsumeInput when the connection is non-blocking, because\nwith the new code it shouldn't block; PQconsumeInput will function to\ndrive data to the backend in that situation. We'll only do the\nflush in PQconsumeInput() for non-blocking connections because\nthe conditional in PQsendQuery shouldn't fail for blocking connections\nunless something is seriously wrong.\n\nI hate to say it, but I like the hack approach; it is somewhat weird\nbut looks like it would work quite well, as any error returned from\nthe backend would be delayed until PQconsumeInput and a broken connection\nwould still be returned immediately from PQsendQuery.\n\nYour opinion?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\nWintelcom systems administrator and programmer\n - http://www.wintelcom.net/ [[email protected]]\n", "msg_date": "Sat, 8 Jan 2000 16:34:52 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIBPQ patches ..." }, { "msg_contents": "\nHello all. I wanted to throw up an idea for an\nXML interface to PostgreSQL. \n\nThe underlying information model for SGML documents\nis called the \"groves\" model. The model's primary\npurpose is to handle multiple content hierarchies\nwithin a single network, as this is the primary \nproblem with multimedia hypertext markup (HyTime).\nAnyway, what is interesting is that the core of the\ngroves model seems to be the following seemingly\nsimple production:\n\n node := character | map( string, list(node) )\n\nIt is an alternation between a map (unordered \nset) and a list (ordered bag). Anyway, the discovery\nmade me sit back in my chair and go hmmmmmm.
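\n\nOne (purely hypothetical) way to spell that production as a C type,\njust to make the character/map/list alternation concrete:\n\n typedef struct Node Node;\n\n typedef struct MapEntry {\n char *name; /* the string key */\n Node **list; /* the ordered list(node) */\n int len;\n } MapEntry;\n\n struct Node {\n enum { N_CHAR, N_MAP } kind;\n union {\n char ch; /* node := character ... */\n struct { /* ... | map( string, list(node) ) */\n MapEntry *entries;\n int n;\n } map;\n } u;\n };\n\n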
As it \nseems to me, this model is the essence of a multi-level \nresult set, and could point towards a rather natural\nmapping of a multi-hierarchy result set onto a tag\nbased markup language like XML.\n\nFor starters, here is a row...\n\n row-node := map (string, list(characters))\n\nFor example, here is an order, and two order-line rows:\n\n order-row-node := { id = '345'\n , buyer = 'Clark Evans'\n }\n\n order-line-row := { order-id = '345'\n , product = 'Bag of rice' \n , quantity = '1'\n }\n\n order-line-row := { order-id = '345'\n , product = 'Tofu'\n , quantity = '3'\n }\n\nAnd here is a table:\n\n relation-node := map('relation-name',list(row-node))\n\n line-relation := { order-line = \n [\n { order-id = '345'\n , product = 'Bag of rice'\n , quantity = '1'\n }\n ,\n ...\n ,\n { order-id = '345'\n , product = 'Tofu'\n , quantity = '3'\n }\n ] \n }\n\nHere is the original production again:\n\n node := character | map( string, list(node) ) \n\nIt could then be used to return a nested\nresult set like:\n\n SELECT * \n FROM ORDER \n WHERE ID = '345'\n \n with this sub-query nested...\n\n SELECT PRODUCT, QUANTITY \n FROM ORDER_LINE \n WHERE ORDER_ID = ID;\n\n my-order := { id = '345'\n , buyer = 'Clark Evans'\n , lines =\n [\n { order-id = '345'\n , product = 'Bag of rice'\n , quantity = '1'\n }\n ,\n { order-id = '345'\n , product = 'Tofu'\n , quantity = '3'\n }\n ] \n }\n\nHere is a mapping to a simple markup language \n(http://www.egroups.com/list/sml-dev/info.html)\n\n<order>\n <id>345</id>\n <buyer>Clark Evans</buyer>\n <order-line-list>\n <order-line>\n <product>Bag of rice</product>\n <quantity>3</quantity>\n </order-line>\n <order-line>\n <product>Tofu</product>\n <quantity>3</quantity>\n </order-line>\n </order-line-list>\n</order>\n\nSo, if you notice, the even levels are maps, \nand the odd levels are lists. Kinda cool.\n\nOf course, you can shorten it with attribute notation...\n\n<order id=\"345\" buyer=\"Clark Evans\">\n <order-line product=\"Bag of rice\" quantity=\"1\" />\n <order-line product=\"Tofu\" quantity=\"3\" />\n</order>\n\n (The syntax is more brief, but it only \n allows for one list per map)\n\nAnyway, this could allow for some nice automated\nconversions between a database and SML/XML/SGML.\n\nIndeed, I was thinking that a binary version\nof the former syntax would make for a great \ninterface into/out of the database. Add a simple\ntext/binary adapter and PostgreSQL would be\nXML ready... without the need for any particular\nmapping tool.\n\nBest,\n\nClark\n\n", "msg_date": "Sat, 8 Jan 2000 22:57:34 -0500 (EST)", "msg_from": "\"Clark C. Evans\" <[email protected]>", "msg_from_op": false, "msg_subject": "A Markup Database Interface; Borrowing from Groves" }, { "msg_contents": "At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n\n>I also object strongly to the lack of documentation. Patches that\n>change public APIs and come without doco updates should be rejected\n>out of hand, IMNSHO. Keeping the documentation up to date should\n>not be considered optional --- especially not when you're talking\n>about something that makes subtle and pervasive changes to library\n>behavior.\n\nBoy, Tom's really laid it out in excellent style. If the author of\nsuch changes doesn't document them, chances are that the documentation\nwon't get done. That's very bad. \n\nThe automatic rejection of undocumented patches that change the API\nor other user-visible behavior shouldn't be controversial.
I know\nthere are some folks who aren't native-English speakers, so perhaps\nyou don't want to require that the implementor of such patches provide\nthe final documentation wording. But the information should be there\nand spelled out in a form that can be very easily moved to the docs.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Sun, 09 Jan 2000 07:01:02 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIBPQ patches ... " }, { "msg_contents": "Don Baccus <[email protected]> writes:\n> At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n>> I also object strongly to the lack of documentation.\n\n> ... I know there are some folks who aren't native-English speakers, so\n> perhaps you don't want to require that the implementor of such patches\n> provide the final documentation wording. But the information should\n> be there and spelled out in a form that can be very easily moved to\n> the docs.\n\nOh, absolutely. Thomas, our master of the docs, has always had the\npolicy of \"give me some words, I'll take care of formatting and\nediting...\"\n\nI was probably too harsh on Alfred last night, since in fact his code\nwas fairly well commented, and some minimal doco could have been\nextracted from the routine headers. But on a change like this, I think\nsome paragraphs of coherent high-level explanation are needed: what it\ndoes, when and why you'd use it. I didn't see that anywhere...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jan 2000 10:50:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIBPQ patches ... " }, { "msg_contents": "> At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n> \n> >I also object strongly to the lack of documentation. Patches that\n> >change public APIs and come without doco updates should be rejected\n> >out of hand, IMNSHO. Keeping the documentation up to date should\n> >not be considered optional --- especially not when you're talking\n> >about something that makes subtle and pervasive changes to library\n> >behavior.\n> \n> Boy, Tom's really laid it out in excellent style. If the author of\n> such changes doesn't document them, chances are that the documentation\n> won't get done. That's very bad. \n> \n> The automatic rejection of undocumented patches that change the API\n> or other user-visible behavior shouldn't be controversial. I know\n> there are some folks who aren't native-English speakers, so perhaps\n> you don't want to require that the implementor of such patches provide\n> the final documentation wording. But the information should be there\n> and spelled out in a form that can be very easily moved to the docs.\n\nIf it is missing, we get back to them before final release and ask for\ndoc patches. They get in there one way or another.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Jan 2000 13:03:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIBPQ patches ..." }, { "msg_contents": "On Sun, 9 Jan 2000, Don Baccus wrote:\n\n> At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n> \n> >I also object strongly to the lack of documentation.
Patches that\n> >change public APIs and come without doco updates should be rejected\n> >out of hand, IMNSHO. Keeping the documentation up to date should\n> >not be considered optional --- especially not when you're talking\n> >about something that makes subtle and pervasive changes to library\n> >behavior.\n> \n> Boy, Tom's really laid it out in excellent style. If the author of\n> such changes doesn't document them, chances are that the documentation\n> won't get done. That's very bad. \n> \n> The automatic rejection of undocumented patches that change the API\n> or other user-visible behavior shouldn't be controversial. I know\n> there are some folks who aren't native-english speakers, so perhaps\n> you don't want to require that the implementor of such patches provide\n> the final documentation wording. But the information should be there\n> and spelled out in a form that can be very easily moved to the docs.\n\nThese patches were originally submited before Xmas by Alfred, asking for\nfeedback on them and possibly pointing out errors in implementation...he\nwanted to get a feel whether or not it was *worth* him putting further\nwork into them. They fell on silent ears.\n\nPersonally, I wouldn't waste my time on documenting something that, in the\nend, I'd be the only one using...I'd get feedback on the usefulness first,\nand then deal with building up the documentation after I've found out that\nits worth it...\n\nAlfred didn't ask \"do I need to add documentation?\" ... he knew that ...\nhe asked whether or not the implementation was appropriate, and was worth\nhis time to continue working ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Sun, 9 Jan 2000 15:32:21 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] LIBPQ patches ... " }, { "msg_contents": "The FreeBSD Documentation Project (FDP) has excellent references to get a\ngeneral idea on building sgml docs. First, you can install the\ntextproc/docproj port or, if you're not running freebsd, refer to the\nwebsite to see which programs you need. Second, you can read the FDP\nPrimer which details how everything comes together:\nhttp://www.freebsd.org/tutorials/docproj-primer/\n\nFurthermore, again if you happen to be running FreeBSD, you can grab the\ndoc src using cvsup. The proper reference is also documented somewhere in\nthe Primer or in the Synchronisation chapter in the Handbook.\n\nKeep at it, sgml and the docbook stylesheets are really worthwhile when\nyou start getting the hang of it.\nMarc\n\n> * Tom Lane <[email protected]> [000109 08:18] wrote: \n> > Don Baccus <[email protected]> writes:\n> > > At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n> > >> I also object strongly to the lack of documentation.\n> > \n> > > ... I know there are some folks who aren't native-english speakers, so\n> > > perhaps you don't want to require that the implementor of such patches\n> > > provide the final documentation wording. But the information should\n> > > be there and spelled out in a form that can be very easily moved to\n> > > the docs.\n> > \n> > Oh, absolutely. 
Thomas, our master of the docs, has always had the\n> > policy of \"give me some words, I'll take care of formatting and\n> > editing...\"\n> > \n> > I was probably too harsh on Alfred last night, since in fact his code\n> > was fairly well commented, and some minimal doco could have been\n> > extracted from the routine headers. But on a change like this, I think\n> > some paragraphs of coherent high-level explanation are needed: what it\n> > does, when and why you'd use it. I didn't see that anywhere...\n> \n> I've actually been trying to work on the sgml and failing miserably,\n> I have no clue how this stuff works (sgml compilation) are you asking\n> for a couple of paragraphs that describe the proposed changes?\n> \n> If so I hope this suffices, if not some help on building the sgml\n> would be much appreciated:\n> \n> --------\n> \n> Summary:\n> \n> First and foremost, great pains have been taken so that there \n> are _no_ compatibility issues.\n> \n> If a 6.5.3 libpq program should not behave any differently\n> with this patches in place, all they do is offer a step closer\n> to a truly non-blocking connection to the backend and address\n> some issues with non-blocking connections.\n> \n> ----\n> \n> Added functions:\n> \n> int PQisnonblocking(static PGconn *conn);\n> \n> returns whether or not the socket is in blocking mode, however...\n> it doesn't actually check the socket flags, it relies on the user\n> to call 'PQsetnonblocking()' to keep the internal state of libpq\n> sane. users should no longer use 'PQsocket()' to retrieve the\n> socket and 'manually' ioctl/fcntl it to non-blocking\n> \n> returns TRUE if the socket has been set to blocking more, FALSE\n> if the socket is blocking\n> \n> int PQflush(PGconn *conn);\n> \n> flush the send-queue to the backend, just make this visible to the\n> user for convience, as the internal function works, 0 for success,\n> EOF for any failure.\n> \n> int PQsetnonblocking(PGconn *conn, int arg);\n> \n> actually set the connection to the backend to blocking or\n> non-blocking arg should be set to TRUE to set the connection to\n> non-blocking or FALSE to set it blocking.\n> \n> there's an implied blocking flush of the send-queue which is\n> really ok as the user is either 'going into' or 'returning from'\n> a blocking state\n> \n> returns 0 for success, -1 for failure\n> \n> ---\n> \n> New functionality:\n> \n> PQsetblocking() allows libpq to know what behavior the user really\n> wants, the user will not block sending data to the backend,\n> potentially if i had a constant stream of data and was doing a\n> COPYIN it'd never finish because unless the backend lost the\n> connection I would block while sending until the backend can take\n> more data.\n> \n> ---\n> \n> Implementation changes:\n> \n> none should be visible to programs based on 6.5.3's libpq.\n> \n> programs based on later versions of libpq will notice that\n> the non-blocking connection functions will set the state of\n> the connection to non-blocking automatically.\n> \n> when the connection is set non-blocking pqFlush() will not block\n> if the sendqueue would be filled by new data inserted into the\n> the queue.\n> \n> functions that poll for data from the backend implicitly _try_\n> flush the send queue if set to non-blocking. 
This allows the\n> polling to act as a context for pushing queued data to the backend.\n> \n> ---\n> \n> Problems:\n> \n> We need some sort of send-queue commit reservations so that\n> there's no chance of us sending a partial queury down the pipe\n> to the backend, right now this is hacked around by potentially\n> blocking in non-blocking mode if something 'goes terribly wrong'\n> I plan to fix this.\n> \n> ---\n> \n> Quirks:\n> \n> PQexec() assumes the caller wants blocking behavior and will set the\n> connection to blocking for the duration of the PQexec() call, it will\n> then restore it\n> \n> ---\n> \n> Internal changes:\n> \n> new field in PGconn 'int nonblocking' set to 1 if the connection is \n> nonblocking, 0 if blocking (default)\n> \n> macro pqIsnonblocking(PGconn) to avoid a function call to check blocking\n> status (only visible if libpq-int.h is included)\n> \n> the internal function connectMakeNonblocking() has been replaced with\n> PQsetblocking()\n> \n> restart a send if EINTR is reported during a flush.\n> \n> ---\n> \n> Lastly:\n> \n> This is work in progress, I will be working towards making libpq\n> better at not blocking.\n> \n> here are the diffs:\n> \n> Index: fe-connect.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.109\n> diff -u -c -IHeader -I$Id: -r1.109 fe-connect.c\n> cvs diff: conflicting specifications of output style\n> *** fe-connect.c\t2000/01/14 05:33:15\t1.109\n> --- fe-connect.c\t2000/01/14 18:36:54\n> ***************\n> *** 594,624 ****\n> \treturn 0;\n> }\n> \n> - \n> - /* ----------\n> - * connectMakeNonblocking -\n> - * Make a connection non-blocking.\n> - * Returns 1 if successful, 0 if not.\n> - * ----------\n> - */\n> - static int\n> - connectMakeNonblocking(PGconn *conn)\n> - {\n> - #ifndef WIN32\n> - \tif (fcntl(conn->sock, F_SETFL, O_NONBLOCK) < 0)\n> - #else\n> - \tif (ioctlsocket(conn->sock, FIONBIO, &on) != 0)\n> - #endif\n> - \t{\n> - \t\tprintfPQExpBuffer(&conn->errorMessage,\n> - \t\t\t\t\t\t \"connectMakeNonblocking -- fcntl() failed: errno=%d\\n%s\\n\",\n> - \t\t\t\t\t\t errno, strerror(errno));\n> - \t\treturn 0;\n> - \t}\n> - \n> - \treturn 1;\n> - }\n> - \n> /* ----------\n> * connectNoDelay -\n> * Sets the TCP_NODELAY socket option.\n> --- 594,599 ----\n> ***************\n> *** 789,795 ****\n> \t * Ewan Mellor <[email protected]>.\n> \t * ---------- */\n> #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n> ! \tif (!connectMakeNonblocking(conn))\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> --- 764,770 ----\n> \t * Ewan Mellor <[email protected]>.\n> \t * ---------- */\n> #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n> ! \tif (PQsetnonblocking(conn, TRUE) != 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> ***************\n> *** 898,904 ****\n> \t/* This makes the connection non-blocking, for all those cases which forced us\n> \t not to do it above. */\n> #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n> ! \tif (!connectMakeNonblocking(conn))\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> --- 873,879 ----\n> \t/* This makes the connection non-blocking, for all those cases which forced us\n> \t not to do it above. */\n> #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n> ! 
\tif (PQsetnonblocking(conn, TRUE) != 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> ***************\n> *** 1702,1707 ****\n> --- 1677,1683 ----\n> \tconn->inBuffer = (char *) malloc(conn->inBufSize);\n> \tconn->outBufSize = 8 * 1024;\n> \tconn->outBuffer = (char *) malloc(conn->outBufSize);\n> + \tconn->nonblocking = FALSE;\n> \tinitPQExpBuffer(&conn->errorMessage);\n> \tinitPQExpBuffer(&conn->workBuffer);\n> \tif (conn->inBuffer == NULL ||\n> ***************\n> *** 1812,1817 ****\n> --- 1788,1794 ----\n> \tconn->lobjfuncs = NULL;\n> \tconn->inStart = conn->inCursor = conn->inEnd = 0;\n> \tconn->outCount = 0;\n> + \tconn->nonblocking = FALSE;\n> \n> }\n> \n> Index: fe-exec.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-exec.c,v\n> retrieving revision 1.86\n> diff -u -c -IHeader -I$Id: -r1.86 fe-exec.c\n> cvs diff: conflicting specifications of output style\n> *** fe-exec.c\t1999/11/11 00:10:14\t1.86\n> --- fe-exec.c\t2000/01/14 22:47:07\n> ***************\n> *** 13,18 ****\n> --- 13,19 ----\n> */\n> #include <errno.h>\n> #include <ctype.h>\n> + #include <fcntl.h>\n> \n> #include \"postgres.h\"\n> #include \"libpq-fe.h\"\n> ***************\n> *** 24,30 ****\n> #include <unistd.h>\n> #endif\n> \n> - \n> /* keep this in same order as ExecStatusType in libpq-fe.h */\n> const char *const pgresStatus[] = {\n> \t\"PGRES_EMPTY_QUERY\",\n> --- 25,30 ----\n> ***************\n> *** 514,526 ****\n> \tconn->curTuple = NULL;\n> \n> \t/* send the query to the backend; */\n> ! \t/* the frontend-backend protocol uses 'Q' to designate queries */\n> ! \tif (pqPutnchar(\"Q\", 1, conn) ||\n> ! \t\tpqPuts(query, conn) ||\n> ! \t\tpqFlush(conn))\n> \t{\n> ! \t\thandleSendFailure(conn);\n> ! \t\treturn 0;\n> \t}\n> \n> \t/* OK, it's launched! */\n> --- 514,566 ----\n> \tconn->curTuple = NULL;\n> \n> \t/* send the query to the backend; */\n> ! \n> ! \t/*\n> ! \t * in order to guarantee that we don't send a partial query \n> ! \t * where we would become out of sync with the backend and/or\n> ! \t * block during a non-blocking connection we must first flush\n> ! \t * the send buffer before sending more data\n> ! \t *\n> ! \t * an alternative is to implement 'queue reservations' where\n> ! \t * we are able to roll up a transaction \n> ! \t * (the 'Q' along with our query) and make sure we have\n> ! \t * enough space for it all in the send buffer.\n> ! \t */\n> ! \tif (pqIsnonblocking(conn))\n> \t{\n> ! \t\t/*\n> ! \t\t * the buffer must have emptied completely before we allow\n> ! \t\t * a new query to be buffered\n> ! \t\t */\n> ! \t\tif (pqFlush(conn))\n> ! \t\t\treturn 0;\n> ! \t\t/* 'Q' == queries */\n> ! \t\t/* XXX: if we fail here we really ought to not block */\n> ! \t\tif (pqPutnchar(\"Q\", 1, conn) ||\n> ! \t\t\tpqPuts(query, conn))\n> ! \t\t{\n> ! \t\t\thandleSendFailure(conn);\t\n> ! \t\t\treturn 0;\n> ! \t\t}\n> ! \t\t/*\n> ! \t\t * give the data a push, ignore the return value as\n> ! \t\t * ConsumeInput() will do any aditional flushing if needed\n> ! \t\t */\n> ! \t\t(void) pqFlush(conn);\t\n> ! \t}\n> ! \telse\n> ! \t{\n> ! \t\t/* \n> ! \t\t * the frontend-backend protocol uses 'Q' to \n> ! \t\t * designate queries \n> ! \t\t */\n> ! \t\tif (pqPutnchar(\"Q\", 1, conn) ||\n> ! \t\t\tpqPuts(query, conn) ||\n> ! \t\t\tpqFlush(conn))\n> ! \t\t{\n> ! \t\t\thandleSendFailure(conn);\n> ! \t\t\treturn 0;\n> ! \t\t}\n> \t}\n> \n> \t/* OK, it's launched! 
*/\n> ***************\n> *** 574,580 ****\n> --- 614,630 ----\n> \t * we will NOT block waiting for more input.\n> \t */\n> \tif (pqReadData(conn) < 0)\n> + \t{\n> + \t\t/*\n> + \t\t * for non-blocking connections\n> + \t\t * try to flush the send-queue otherwise we may never get a \n> + \t\t * responce for something that may not have already been sent\n> + \t\t * because it's in our write buffer!\n> + \t\t */\n> + \t\tif (pqIsnonblocking(conn))\n> + \t\t\t(void) pqFlush(conn);\n> \t\treturn 0;\n> + \t}\n> \t/* Parsing of the data waits till later. */\n> \treturn 1;\n> }\n> ***************\n> *** 1088,1093 ****\n> --- 1138,1153 ----\n> {\n> \tPGresult *result;\n> \tPGresult *lastResult;\n> + \tbool\tsavedblocking;\n> + \n> + \t/*\n> + \t * we assume anyone calling PQexec wants blocking behaviour,\n> + \t * we force the blocking status of the connection to blocking\n> + \t * for the duration of this function and restore it on return\n> + \t */\n> + \tsavedblocking = pqIsnonblocking(conn);\n> + \tif (PQsetnonblocking(conn, FALSE) == -1)\n> + \t\treturn NULL;\n> \n> \t/*\n> \t * Silently discard any prior query result that application didn't\n> ***************\n> *** 1102,1115 ****\n> \t\t\tPQclear(result);\n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n> ! \t\t\treturn NULL;\n> \t\t}\n> \t\tPQclear(result);\n> \t}\n> \n> \t/* OK to send the message */\n> \tif (!PQsendQuery(conn, query))\n> ! \t\treturn NULL;\n> \n> \t/*\n> \t * For backwards compatibility, return the last result if there are\n> --- 1162,1176 ----\n> \t\t\tPQclear(result);\n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n> ! \t\t\t/* restore blocking status */\n> ! \t\t\tgoto errout;\n> \t\t}\n> \t\tPQclear(result);\n> \t}\n> \n> \t/* OK to send the message */\n> \tif (!PQsendQuery(conn, query))\n> ! \t\tgoto errout;\t/* restore blocking status */\n> \n> \t/*\n> \t * For backwards compatibility, return the last result if there are\n> ***************\n> *** 1142,1148 ****\n> --- 1203,1217 ----\n> \t\t\tresult->resultStatus == PGRES_COPY_OUT)\n> \t\t\tbreak;\n> \t}\n> + \n> + \tif (PQsetnonblocking(conn, savedblocking) == -1)\n> + \t\treturn NULL;\n> \treturn lastResult;\n> + \n> + errout:\n> + \tif (PQsetnonblocking(conn, savedblocking) == -1)\n> + \t\treturn NULL;\n> + \treturn NULL;\n> }\n> \n> \n> ***************\n> *** 1431,1438 ****\n> \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n> \t\treturn 1;\n> \t}\n> \n> ! \t(void) pqFlush(conn);\t\t/* make sure no data is waiting to be sent */\n> \n> \t/* Return to active duty */\n> \tconn->asyncStatus = PGASYNC_BUSY;\n> --- 1500,1516 ----\n> \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n> \t\treturn 1;\n> \t}\n> + \n> + \t/*\n> + \t * make sure no data is waiting to be sent, \n> + \t * abort if we are non-blocking and the flush fails\n> + \t */\n> + \tif (pqFlush(conn) && pqIsnonblocking(conn))\n> + \t\treturn (1);\n> \n> ! \t/* non blocking connections may have to abort at this point. */\n> ! \tif (pqIsnonblocking(conn) && PQisBusy(conn))\n> ! 
\t\treturn (1);\n> \n> \t/* Return to active duty */\n> \tconn->asyncStatus = PGASYNC_BUSY;\n> ***************\n> *** 2025,2028 ****\n> --- 2103,2192 ----\n> \t\treturn 1;\n> \telse\n> \t\treturn 0;\n> + }\n> + \n> + /* PQsetnonblocking:\n> + \t sets the PGconn's database connection non-blocking if the arg is TRUE\n> + \t or makes it non-blocking if the arg is FALSE, this will not protect\n> + \t you from PQexec(), you'll only be safe when using the non-blocking\n> + \t API\n> + \t Needs to be called only on a connected database connection.\n> + */\n> + \n> + int\n> + PQsetnonblocking(PGconn *conn, int arg)\n> + {\n> + \tint\tfcntlarg;\n> + \n> + \targ = (arg == TRUE) ? 1 : 0;\n> + \t/* early out if the socket is already in the state requested */\n> + \tif (arg == conn->nonblocking)\n> + \t\treturn (0);\n> + \n> + \t/*\n> + \t * to guarantee constancy for flushing/query/result-polling behavior\n> + \t * we need to flush the send queue at this point in order to guarantee\n> + \t * proper behavior.\n> + \t * this is ok because either they are making a transition\n> + \t * _from_ or _to_ blocking mode, either way we can block them.\n> + \t */\n> + \t/* if we are going from blocking to non-blocking flush here */\n> + \tif (!pqIsnonblocking(conn) && pqFlush(conn))\n> + \t\treturn (-1);\n> + \n> + \n> + #ifdef USE_SSL\n> + \tif (conn->ssl)\n> + \t{\n> + \t\tprintfPQExpBuffer(&conn->errorMessage,\n> + \t\t\t\"PQsetnonblocking() -- not supported when using SSL\\n\");\n> + \t\treturn (-1);\n> + \t}\n> + #endif /* USE_SSL */\n> + \n> + #ifndef WIN32\n> + \tfcntlarg = fcntl(conn->sock, F_GETFL, 0);\n> + \tif (fcntlarg == -1)\n> + \t\treturn (-1);\n> + \n> + \tif ((arg == TRUE && \n> + \t\tfcntl(conn->sock, F_SETFL, fcntlarg | O_NONBLOCK) == -1) ||\n> + \t\t(arg == FALSE &&\n> + \t\tfcntl(conn->sock, F_SETFL, fcntlarg & ~O_NONBLOCK) == -1)) \n> + #else\n> + \tfcntlarg = arg;\n> + \tif (ioctlsocket(conn->sock, FIONBIO, &fcntlarg) != 0)\n> + #endif\n> + \t{\n> + \t\tprintfPQExpBuffer(&conn->errorMessage,\n> + \t\t\t\"PQsetblocking() -- unable to set nonblocking status to %s\\n\",\n> + \t\t\targ == TRUE ? 
\"TRUE\" : \"FALSE\");\n> + \t\treturn (-1);\n> + \t}\n> + \n> + \tconn->nonblocking = arg;\n> + \n> + \t/* if we are going from non-blocking to blocking flush here */\n> + \tif (pqIsnonblocking(conn) && pqFlush(conn))\n> + \t\treturn (-1);\n> + \n> + \treturn (0);\n> + }\n> + \n> + /* return the blocking status of the database connection, TRUE == nonblocking,\n> + \t FALSE == blocking\n> + */\n> + int\n> + PQisnonblocking(const PGconn *conn)\n> + {\n> + \n> + \treturn (pqIsnonblocking(conn));\n> + }\n> + \n> + /* try to force data out, really only useful for non-blocking users */\n> + int\n> + PQflush(PGconn *conn)\n> + {\n> + \n> + \treturn (pqFlush(conn));\n> }\n> Index: fe-misc.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.33\n> diff -u -c -IHeader -I$Id: -r1.33 fe-misc.c\n> cvs diff: conflicting specifications of output style\n> *** fe-misc.c\t1999/11/30 03:08:19\t1.33\n> --- fe-misc.c\t2000/01/12 03:12:14\n> ***************\n> *** 86,91 ****\n> --- 86,122 ----\n> {\n> \tsize_t avail = Max(conn->outBufSize - conn->outCount, 0);\n> \n> + \t/*\n> + \t * if we are non-blocking and the send queue is too full to buffer this\n> + \t * request then try to flush some and return an error \n> + \t */\n> + \tif (pqIsnonblocking(conn) && nbytes > avail && pqFlush(conn))\n> + \t{\n> + \t\t/* \n> + \t\t * even if the flush failed we may still have written some\n> + \t\t * data, recalculate the size of the send-queue relative\n> + \t\t * to the amount we have to send, we may be able to queue it\n> + \t\t * afterall even though it's not sent to the database it's\n> + \t\t * ok, any routines that check the data coming from the\n> + \t\t * database better call pqFlush() anyway.\n> + \t\t */\n> + \t\tif (nbytes > Max(conn->outBufSize - conn->outCount, 0))\n> + \t\t{\n> + \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> + \t\t\t\t\"pqPutBytes -- pqFlush couldn't flush enough\"\n> + \t\t\t\t\" data: space available: %d, space needed %d\\n\",\n> + \t\t\t\tMax(conn->outBufSize - conn->outCount, 0), nbytes);\n> + \t\t\treturn EOF;\n> + \t\t}\n> + \t}\n> + \n> + \t/* \n> + \t * is the amount of data to be sent is larger than the size of the\n> + \t * output buffer then we must flush it to make more room.\n> + \t *\n> + \t * the code above will make sure the loop conditional is never \n> + \t * true for non-blocking connections\n> + \t */\n> \twhile (nbytes > avail)\n> \t{\n> \t\tmemcpy(conn->outBuffer + conn->outCount, s, avail);\n> ***************\n> *** 548,553 ****\n> --- 579,592 ----\n> \t\treturn EOF;\n> \t}\n> \n> + \t/* \n> + \t * don't try to send zero data, allows us to use this function\n> + \t * without too much worry about overhead\n> + \t */\n> + \tif (len == 0)\n> + \t\treturn (0);\n> + \n> + \t/* while there's still data to send */\n> \twhile (len > 0)\n> \t{\n> \t\t/* Prevent being SIGPIPEd if backend has closed the connection. 
*/\n> ***************\n> *** 556,561 ****\n> --- 595,601 ----\n> #endif\n> \n> \t\tint sent;\n> + \n> #ifdef USE_SSL\n> \t\tif (conn->ssl) \n> \t\t sent = SSL_write(conn->ssl, ptr, len);\n> ***************\n> *** 585,590 ****\n> --- 625,632 ----\n> \t\t\t\tcase EWOULDBLOCK:\n> \t\t\t\t\tbreak;\n> #endif\n> + \t\t\t\tcase EINTR:\n> + \t\t\t\t\tcontinue;\n> \n> \t\t\t\tcase EPIPE:\n> #ifdef ECONNRESET\n> ***************\n> *** 616,628 ****\n> \t\t\tptr += sent;\n> \t\t\tlen -= sent;\n> \t\t}\n> \t\tif (len > 0)\n> \t\t{\n> \t\t\t/* We didn't send it all, wait till we can send more */\n> \n> - \t\t\t/* At first glance this looks as though it should block. I think\n> - \t\t\t * that it will be OK though, as long as the socket is\n> - \t\t\t * non-blocking. */\n> \t\t\tif (pqWait(FALSE, TRUE, conn))\n> \t\t\t\treturn EOF;\n> \t\t}\n> --- 658,688 ----\n> \t\t\tptr += sent;\n> \t\t\tlen -= sent;\n> \t\t}\n> + \n> \t\tif (len > 0)\n> \t\t{\n> \t\t\t/* We didn't send it all, wait till we can send more */\n> + \n> + \t\t\t/* \n> + \t\t\t * if the socket is in non-blocking mode we may need\n> + \t\t\t * to abort here \n> + \t\t\t */\n> + #ifdef USE_SSL\n> + \t\t\t/* can't do anything for our SSL users yet */\n> + \t\t\tif (conn->ssl == NULL)\n> + \t\t\t{\n> + #endif\n> + \t\t\t\tif (pqIsnonblocking(conn))\n> + \t\t\t\t{\n> + \t\t\t\t\t/* shift the contents of the buffer */\n> + \t\t\t\t\tmemmove(conn->outBuffer, ptr, len);\n> + \t\t\t\t\tconn->outCount = len;\n> + \t\t\t\t\treturn EOF;\n> + \t\t\t\t}\n> + #ifdef USE_SSL\n> + \t\t\t}\n> + #endif\n> \n> \t\t\tif (pqWait(FALSE, TRUE, conn))\n> \t\t\t\treturn EOF;\n> \t\t}\n> Index: libpq-fe.h\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-fe.h,v\n> retrieving revision 1.54\n> diff -u -c -IHeader -I$Id: -r1.54 libpq-fe.h\n> cvs diff: conflicting specifications of output style\n> *** libpq-fe.h\t2000/01/14 05:33:15\t1.54\n> --- libpq-fe.h\t2000/01/14 22:45:33\n> ***************\n> *** 261,266 ****\n> --- 261,273 ----\n> \textern int\tPQgetlineAsync(PGconn *conn, char *buffer, int bufsize);\n> \textern int\tPQputnbytes(PGconn *conn, const char *buffer, int nbytes);\n> \textern int\tPQendcopy(PGconn *conn);\n> + \n> + \t/* Set blocking/nonblocking connection to the backend */\n> + \textern int\tPQsetnonblocking(PGconn *conn, int arg);\n> + \textern int\tPQisnonblocking(const PGconn *conn);\n> + \n> + \t/* Force the write buffer to be written (or at least try) */\n> + \textern int\tPQflush(PGconn *conn);\n> \n> \t/*\n> \t * \"Fast path\" interface --- not really recommended for application\n> Index: libpq-int.h\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-int.h,v\n> retrieving revision 1.15\n> diff -u -c -IHeader -I$Id: -r1.15 libpq-int.h\n> cvs diff: conflicting specifications of output style\n> *** libpq-int.h\t2000/01/14 05:33:15\t1.15\n> --- libpq-int.h\t2000/01/14 18:32:51\n> ***************\n> *** 214,219 ****\n> --- 214,222 ----\n> \tint\t\t\tinEnd;\t\t\t/* offset to first position after avail\n> \t\t\t\t\t\t\t\t * data */\n> \n> + \tint\t\t\tnonblocking;\t/* whether this connection is using a blocking\n> + \t\t\t\t\t\t\t\t * socket to the backend or not */\n> + \n> \t/* Buffer for data not yet sent to backend */\n> \tchar\t *outBuffer;\t\t/* currently allocated buffer */\n> \tint\t\t\toutBufSize;\t\t/* allocated size of buffer */\n> ***************\n> *** 297,301 ****\n> --- 300,310 
----\n> #define strerror(A) (sys_errlist[(A)])\n> #endif\t /* sunos4 */\n> #endif\t /* !strerror */\n> + \n> + /* \n> + * this is so that we can check is a connection is non-blocking internally\n> + * without the overhead of a function call\n> + */\n> + #define pqIsnonblocking(conn)\t(conn->nonblocking)\n> \n> #endif\t /* LIBPQ_INT_H */\n> \n> \n> on a side note miscadmin.h causes problems on FreeBSD because it uses\n> pid_t without having included sys/types.h\n> \n> thanks!\n> \n> -- \n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \n> ************\n> \n\n", "msg_date": "Fri, 14 Jan 2000 14:11:52 +0000 (GMT)", "msg_from": "admin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised nonblocking patches + quasi docs" }, { "msg_contents": "* Tom Lane <[email protected]> [000109 08:18] wrote:\n> Don Baccus <[email protected]> writes:\n> > At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n> >> I also object strongly to the lack of documentation.\n> \n> > ... I know there are some folks who aren't native-English speakers, so\n> > perhaps you don't want to require that the implementor of such patches\n> > provide the final documentation wording. But the information should\n> > be there and spelled out in a form that can be very easily moved to\n> > the docs.\n> \n> Oh, absolutely. Thomas, our master of the docs, has always had the\n> policy of \"give me some words, I'll take care of formatting and\n> editing...\"\n> \n> I was probably too harsh on Alfred last night, since in fact his code\n> was fairly well commented, and some minimal doco could have been\n> extracted from the routine headers. But on a change like this, I think\n> some paragraphs of coherent high-level explanation are needed: what it\n> does, when and why you'd use it. I didn't see that anywhere...\n\nI've actually been trying to work on the sgml and failing miserably;\nI have no clue how this stuff works (sgml compilation). Are you asking\nfor a couple of paragraphs that describe the proposed changes?\n\nIf so, I hope this suffices; if not, some help on building the sgml\nwould be much appreciated:\n\n--------\n\nSummary:\n\nFirst and foremost, great pains have been taken so that there \nare _no_ compatibility issues.\n\nA 6.5.3 libpq program should not behave any differently\nwith these patches in place; all they do is offer a step closer\nto a truly non-blocking connection to the backend and address\nsome issues with non-blocking connections.\n\n----\n\nAdded functions:\n\nint PQisnonblocking(const PGconn *conn);\n\n returns whether or not the socket is in non-blocking mode, however...\n it doesn't actually check the socket flags, it relies on the user\n to call 'PQsetnonblocking()' to keep the internal state of libpq\n sane. Users should no longer use 'PQsocket()' to retrieve the\n socket and 'manually' ioctl/fcntl it to non-blocking\n\n returns TRUE if the socket has been set to non-blocking mode, FALSE\n if the socket is blocking\n\nint PQflush(PGconn *conn);\n\n flush the send-queue to the backend; this just makes the internal\n function visible to the user for convenience, and it works the same\n way: 0 for success, EOF for any failure.\n\nint PQsetnonblocking(PGconn *conn, int arg);\n\n actually set the connection to the backend to blocking or\n non-blocking; arg should be set to TRUE to set the connection to\n non-blocking or FALSE to set it blocking.\n\n there's an implied blocking flush of the send-queue which is\n really ok as the user is either 'going into' or 'returning from'\n a blocking state\n\n returns 0 for success, -1 for failure
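\n\nA rough usage sketch (my own illustration, not part of the patch;\nthe select() wait is just one way an application might drive this):\n\n if (PQsetnonblocking(conn, TRUE) != 0)\n fprintf(stderr, \"%s\", PQerrorMessage(conn));\n else if (PQsendQuery(conn, query))\n {\n /* PQflush() returns EOF while data remains unsent (or on a\n hard error, which a real program would distinguish) */\n while (PQflush(conn) != 0)\n {\n fd_set wfds;\n int sock = PQsocket(conn);\n\n FD_ZERO(&wfds);\n FD_SET(sock, &wfds);\n select(sock + 1, NULL, &wfds, NULL, NULL); /* wait until writable */\n }\n }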
\n\n---\n\nNew functionality:\n\n PQsetnonblocking() allows libpq to know what behavior the user really\n wants; the user will not block sending data to the backend.\n Potentially, if I had a constant stream of data and was doing a\n COPYIN it'd never finish, because unless the backend lost the\n connection I would block while sending until the backend could take\n more data.\n\n---\n\nImplementation changes:\n\n none should be visible to programs based on 6.5.3's libpq.\n\n programs based on later versions of libpq will notice that\n the non-blocking connection functions will set the state of\n the connection to non-blocking automatically.\n\n when the connection is set non-blocking pqFlush() will not block\n if the sendqueue would be filled by new data inserted into\n the queue.\n\n functions that poll for data from the backend implicitly _try_ to\n flush the send queue if set to non-blocking. This allows the\n polling to act as a context for pushing queued data to the backend.\n\n---\n\nProblems:\n\n We need some sort of send-queue commit reservations so that\n there's no chance of us sending a partial query down the pipe\n to the backend; right now this is hacked around by potentially\n blocking in non-blocking mode if something 'goes terribly wrong'.\n I plan to fix this.\n\n---\n\nQuirks:\n\n PQexec() assumes the caller wants blocking behavior and will set the\n connection to blocking for the duration of the PQexec() call, it will\n then restore it\n\n---\n\nInternal changes:\n\n new field in PGconn 'int nonblocking' set to 1 if the connection is \n nonblocking, 0 if blocking (default)\n\n macro pqIsnonblocking(PGconn) to avoid a function call to check blocking\n status (only visible if libpq-int.h is included)\n\n the internal function connectMakeNonblocking() has been replaced with\n PQsetnonblocking()\n\n restart a send if EINTR is reported during a flush.\n\n---\n\nLastly:\n\n This is work in progress, I will be working towards making libpq\n better at not blocking.\n\nhere are the diffs:\n\nIndex: fe-connect.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.109\ndiff -u -c -IHeader -I$Id: -r1.109 fe-connect.c\ncvs diff: conflicting specifications of output style\n*** fe-connect.c\t2000/01/14 05:33:15\t1.109\n--- fe-connect.c\t2000/01/14 18:36:54\n***************\n*** 594,624 ****\n \treturn 0;\n }\n \n- \n- /* ----------\n- * connectMakeNonblocking -\n- * Make a connection non-blocking.\n- * Returns 1 if successful, 0 if not.\n- * ----------\n- */\n- static int\n- connectMakeNonblocking(PGconn *conn)\n- {\n- #ifndef WIN32\n- \tif (fcntl(conn->sock, F_SETFL, O_NONBLOCK) < 0)\n-
#else\n- \tif (ioctlsocket(conn->sock, FIONBIO, &on) != 0)\n- #endif\n- \t{\n- \t\tprintfPQExpBuffer(&conn->errorMessage,\n- \t\t\t\t\t\t \"connectMakeNonblocking -- fcntl() failed: errno=%d\\n%s\\n\",\n- \t\t\t\t\t\t errno, strerror(errno));\n- \t\treturn 0;\n- \t}\n- \n- \treturn 1;\n- }\n- \n /* ----------\n * connectNoDelay -\n * Sets the TCP_NODELAY socket option.\n--- 594,599 ----\n***************\n*** 789,795 ****\n \t * Ewan Mellor <[email protected]>.\n \t * ---------- */\n #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n! \tif (!connectMakeNonblocking(conn))\n \t\tgoto connect_errReturn;\n #endif\t\n \n--- 764,770 ----\n \t * Ewan Mellor <[email protected]>.\n \t * ---------- */\n #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n! \tif (PQsetnonblocking(conn, TRUE) != 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n***************\n*** 898,904 ****\n \t/* This makes the connection non-blocking, for all those cases which forced us\n \t not to do it above. */\n #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n! \tif (!connectMakeNonblocking(conn))\n \t\tgoto connect_errReturn;\n #endif\t\n \n--- 873,879 ----\n \t/* This makes the connection non-blocking, for all those cases which forced us\n \t not to do it above. */\n #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n! \tif (PQsetnonblocking(conn, TRUE) != 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n***************\n*** 1702,1707 ****\n--- 1677,1683 ----\n \tconn->inBuffer = (char *) malloc(conn->inBufSize);\n \tconn->outBufSize = 8 * 1024;\n \tconn->outBuffer = (char *) malloc(conn->outBufSize);\n+ \tconn->nonblocking = FALSE;\n \tinitPQExpBuffer(&conn->errorMessage);\n \tinitPQExpBuffer(&conn->workBuffer);\n \tif (conn->inBuffer == NULL ||\n***************\n*** 1812,1817 ****\n--- 1788,1794 ----\n \tconn->lobjfuncs = NULL;\n \tconn->inStart = conn->inCursor = conn->inEnd = 0;\n \tconn->outCount = 0;\n+ \tconn->nonblocking = FALSE;\n \n }\n \nIndex: fe-exec.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.86\ndiff -u -c -IHeader -I$Id: -r1.86 fe-exec.c\ncvs diff: conflicting specifications of output style\n*** fe-exec.c\t1999/11/11 00:10:14\t1.86\n--- fe-exec.c\t2000/01/14 22:47:07\n***************\n*** 13,18 ****\n--- 13,19 ----\n */\n #include <errno.h>\n #include <ctype.h>\n+ #include <fcntl.h>\n \n #include \"postgres.h\"\n #include \"libpq-fe.h\"\n***************\n*** 24,30 ****\n #include <unistd.h>\n #endif\n \n- \n /* keep this in same order as ExecStatusType in libpq-fe.h */\n const char *const pgresStatus[] = {\n \t\"PGRES_EMPTY_QUERY\",\n--- 25,30 ----\n***************\n*** 514,526 ****\n \tconn->curTuple = NULL;\n \n \t/* send the query to the backend; */\n! \t/* the frontend-backend protocol uses 'Q' to designate queries */\n! \tif (pqPutnchar(\"Q\", 1, conn) ||\n! \t\tpqPuts(query, conn) ||\n! \t\tpqFlush(conn))\n \t{\n! \t\thandleSendFailure(conn);\n! \t\treturn 0;\n \t}\n \n \t/* OK, it's launched! */\n--- 514,566 ----\n \tconn->curTuple = NULL;\n \n \t/* send the query to the backend; */\n! \n! \t/*\n! \t * in order to guarantee that we don't send a partial query \n! \t * where we would become out of sync with the backend and/or\n! \t * block during a non-blocking connection we must first flush\n! \t * the send buffer before sending more data\n! \t *\n! 
\t * an alternative is to implement 'queue reservations' where\n! \t * we are able to roll up a transaction \n! \t * (the 'Q' along with our query) and make sure we have\n! \t * enough space for it all in the send buffer.\n! \t */\n! \tif (pqIsnonblocking(conn))\n \t{\n! \t\t/*\n! \t\t * the buffer must have emptied completely before we allow\n! \t\t * a new query to be buffered\n! \t\t */\n! \t\tif (pqFlush(conn))\n! \t\t\treturn 0;\n! \t\t/* 'Q' == queries */\n! \t\t/* XXX: if we fail here we really ought to not block */\n! \t\tif (pqPutnchar(\"Q\", 1, conn) ||\n! \t\t\tpqPuts(query, conn))\n! \t\t{\n! \t\t\thandleSendFailure(conn);\t\n! \t\t\treturn 0;\n! \t\t}\n! \t\t/*\n! \t\t * give the data a push, ignore the return value as\n! \t\t * ConsumeInput() will do any aditional flushing if needed\n! \t\t */\n! \t\t(void) pqFlush(conn);\t\n! \t}\n! \telse\n! \t{\n! \t\t/* \n! \t\t * the frontend-backend protocol uses 'Q' to \n! \t\t * designate queries \n! \t\t */\n! \t\tif (pqPutnchar(\"Q\", 1, conn) ||\n! \t\t\tpqPuts(query, conn) ||\n! \t\t\tpqFlush(conn))\n! \t\t{\n! \t\t\thandleSendFailure(conn);\n! \t\t\treturn 0;\n! \t\t}\n \t}\n \n \t/* OK, it's launched! */\n***************\n*** 574,580 ****\n--- 614,630 ----\n \t * we will NOT block waiting for more input.\n \t */\n \tif (pqReadData(conn) < 0)\n+ \t{\n+ \t\t/*\n+ \t\t * for non-blocking connections\n+ \t\t * try to flush the send-queue otherwise we may never get a \n+ \t\t * responce for something that may not have already been sent\n+ \t\t * because it's in our write buffer!\n+ \t\t */\n+ \t\tif (pqIsnonblocking(conn))\n+ \t\t\t(void) pqFlush(conn);\n \t\treturn 0;\n+ \t}\n \t/* Parsing of the data waits till later. */\n \treturn 1;\n }\n***************\n*** 1088,1093 ****\n--- 1138,1153 ----\n {\n \tPGresult *result;\n \tPGresult *lastResult;\n+ \tbool\tsavedblocking;\n+ \n+ \t/*\n+ \t * we assume anyone calling PQexec wants blocking behaviour,\n+ \t * we force the blocking status of the connection to blocking\n+ \t * for the duration of this function and restore it on return\n+ \t */\n+ \tsavedblocking = pqIsnonblocking(conn);\n+ \tif (PQsetnonblocking(conn, FALSE) == -1)\n+ \t\treturn NULL;\n \n \t/*\n \t * Silently discard any prior query result that application didn't\n***************\n*** 1102,1115 ****\n \t\t\tPQclear(result);\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n! \t\t\treturn NULL;\n \t\t}\n \t\tPQclear(result);\n \t}\n \n \t/* OK to send the message */\n \tif (!PQsendQuery(conn, query))\n! \t\treturn NULL;\n \n \t/*\n \t * For backwards compatibility, return the last result if there are\n--- 1162,1176 ----\n \t\t\tPQclear(result);\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n! \t\t\t/* restore blocking status */\n! \t\t\tgoto errout;\n \t\t}\n \t\tPQclear(result);\n \t}\n \n \t/* OK to send the message */\n \tif (!PQsendQuery(conn, query))\n! 
\t\tgoto errout;\t/* restore blocking status */\n \n \t/*\n \t * For backwards compatibility, return the last result if there are\n***************\n*** 1142,1148 ****\n--- 1203,1217 ----\n \t\t\tresult->resultStatus == PGRES_COPY_OUT)\n \t\t\tbreak;\n \t}\n+ \n+ \tif (PQsetnonblocking(conn, savedblocking) == -1)\n+ \t\treturn NULL;\n \treturn lastResult;\n+ \n+ errout:\n+ \tif (PQsetnonblocking(conn, savedblocking) == -1)\n+ \t\treturn NULL;\n+ \treturn NULL;\n }\n \n \n***************\n*** 1431,1438 ****\n \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n \t\treturn 1;\n \t}\n \n! \t(void) pqFlush(conn);\t\t/* make sure no data is waiting to be sent */\n \n \t/* Return to active duty */\n \tconn->asyncStatus = PGASYNC_BUSY;\n--- 1500,1516 ----\n \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n \t\treturn 1;\n \t}\n+ \n+ \t/*\n+ \t * make sure no data is waiting to be sent, \n+ \t * abort if we are non-blocking and the flush fails\n+ \t */\n+ \tif (pqFlush(conn) && pqIsnonblocking(conn))\n+ \t\treturn (1);\n \n! \t/* non blocking connections may have to abort at this point. */\n! \tif (pqIsnonblocking(conn) && PQisBusy(conn))\n! \t\treturn (1);\n \n \t/* Return to active duty */\n \tconn->asyncStatus = PGASYNC_BUSY;\n***************\n*** 2025,2028 ****\n--- 2103,2192 ----\n \t\treturn 1;\n \telse\n \t\treturn 0;\n+ }\n+ \n+ /* PQsetnonblocking:\n+ \t sets the PGconn's database connection non-blocking if the arg is TRUE\n+ \t or makes it non-blocking if the arg is FALSE, this will not protect\n+ \t you from PQexec(), you'll only be safe when using the non-blocking\n+ \t API\n+ \t Needs to be called only on a connected database connection.\n+ */\n+ \n+ int\n+ PQsetnonblocking(PGconn *conn, int arg)\n+ {\n+ \tint\tfcntlarg;\n+ \n+ \targ = (arg == TRUE) ? 1 : 0;\n+ \t/* early out if the socket is already in the state requested */\n+ \tif (arg == conn->nonblocking)\n+ \t\treturn (0);\n+ \n+ \t/*\n+ \t * to guarantee constancy for flushing/query/result-polling behavior\n+ \t * we need to flush the send queue at this point in order to guarantee\n+ \t * proper behavior.\n+ \t * this is ok because either they are making a transition\n+ \t * _from_ or _to_ blocking mode, either way we can block them.\n+ \t */\n+ \t/* if we are going from blocking to non-blocking flush here */\n+ \tif (!pqIsnonblocking(conn) && pqFlush(conn))\n+ \t\treturn (-1);\n+ \n+ \n+ #ifdef USE_SSL\n+ \tif (conn->ssl)\n+ \t{\n+ \t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\"PQsetnonblocking() -- not supported when using SSL\\n\");\n+ \t\treturn (-1);\n+ \t}\n+ #endif /* USE_SSL */\n+ \n+ #ifndef WIN32\n+ \tfcntlarg = fcntl(conn->sock, F_GETFL, 0);\n+ \tif (fcntlarg == -1)\n+ \t\treturn (-1);\n+ \n+ \tif ((arg == TRUE && \n+ \t\tfcntl(conn->sock, F_SETFL, fcntlarg | O_NONBLOCK) == -1) ||\n+ \t\t(arg == FALSE &&\n+ \t\tfcntl(conn->sock, F_SETFL, fcntlarg & ~O_NONBLOCK) == -1)) \n+ #else\n+ \tfcntlarg = arg;\n+ \tif (ioctlsocket(conn->sock, FIONBIO, &fcntlarg) != 0)\n+ #endif\n+ \t{\n+ \t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\"PQsetblocking() -- unable to set nonblocking status to %s\\n\",\n+ \t\t\targ == TRUE ? 
\"TRUE\" : \"FALSE\");\n+ \t\treturn (-1);\n+ \t}\n+ \n+ \tconn->nonblocking = arg;\n+ \n+ \t/* if we are going from non-blocking to blocking flush here */\n+ \tif (pqIsnonblocking(conn) && pqFlush(conn))\n+ \t\treturn (-1);\n+ \n+ \treturn (0);\n+ }\n+ \n+ /* return the blocking status of the database connection, TRUE == nonblocking,\n+ \t FALSE == blocking\n+ */\n+ int\n+ PQisnonblocking(const PGconn *conn)\n+ {\n+ \n+ \treturn (pqIsnonblocking(conn));\n+ }\n+ \n+ /* try to force data out, really only useful for non-blocking users */\n+ int\n+ PQflush(PGconn *conn)\n+ {\n+ \n+ \treturn (pqFlush(conn));\n }\nIndex: fe-misc.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-misc.c,v\nretrieving revision 1.33\ndiff -u -c -IHeader -I$Id: -r1.33 fe-misc.c\ncvs diff: conflicting specifications of output style\n*** fe-misc.c\t1999/11/30 03:08:19\t1.33\n--- fe-misc.c\t2000/01/12 03:12:14\n***************\n*** 86,91 ****\n--- 86,122 ----\n {\n \tsize_t avail = Max(conn->outBufSize - conn->outCount, 0);\n \n+ \t/*\n+ \t * if we are non-blocking and the send queue is too full to buffer this\n+ \t * request then try to flush some and return an error \n+ \t */\n+ \tif (pqIsnonblocking(conn) && nbytes > avail && pqFlush(conn))\n+ \t{\n+ \t\t/* \n+ \t\t * even if the flush failed we may still have written some\n+ \t\t * data, recalculate the size of the send-queue relative\n+ \t\t * to the amount we have to send, we may be able to queue it\n+ \t\t * afterall even though it's not sent to the database it's\n+ \t\t * ok, any routines that check the data coming from the\n+ \t\t * database better call pqFlush() anyway.\n+ \t\t */\n+ \t\tif (nbytes > Max(conn->outBufSize - conn->outCount, 0))\n+ \t\t{\n+ \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\t\"pqPutBytes -- pqFlush couldn't flush enough\"\n+ \t\t\t\t\" data: space available: %d, space needed %d\\n\",\n+ \t\t\t\tMax(conn->outBufSize - conn->outCount, 0), nbytes);\n+ \t\t\treturn EOF;\n+ \t\t}\n+ \t}\n+ \n+ \t/* \n+ \t * is the amount of data to be sent is larger than the size of the\n+ \t * output buffer then we must flush it to make more room.\n+ \t *\n+ \t * the code above will make sure the loop conditional is never \n+ \t * true for non-blocking connections\n+ \t */\n \twhile (nbytes > avail)\n \t{\n \t\tmemcpy(conn->outBuffer + conn->outCount, s, avail);\n***************\n*** 548,553 ****\n--- 579,592 ----\n \t\treturn EOF;\n \t}\n \n+ \t/* \n+ \t * don't try to send zero data, allows us to use this function\n+ \t * without too much worry about overhead\n+ \t */\n+ \tif (len == 0)\n+ \t\treturn (0);\n+ \n+ \t/* while there's still data to send */\n \twhile (len > 0)\n \t{\n \t\t/* Prevent being SIGPIPEd if backend has closed the connection. */\n***************\n*** 556,561 ****\n--- 595,601 ----\n #endif\n \n \t\tint sent;\n+ \n #ifdef USE_SSL\n \t\tif (conn->ssl) \n \t\t sent = SSL_write(conn->ssl, ptr, len);\n***************\n*** 585,590 ****\n--- 625,632 ----\n \t\t\t\tcase EWOULDBLOCK:\n \t\t\t\t\tbreak;\n #endif\n+ \t\t\t\tcase EINTR:\n+ \t\t\t\t\tcontinue;\n \n \t\t\t\tcase EPIPE:\n #ifdef ECONNRESET\n***************\n*** 616,628 ****\n \t\t\tptr += sent;\n \t\t\tlen -= sent;\n \t\t}\n \t\tif (len > 0)\n \t\t{\n \t\t\t/* We didn't send it all, wait till we can send more */\n \n- \t\t\t/* At first glance this looks as though it should block. I think\n- \t\t\t * that it will be OK though, as long as the socket is\n- \t\t\t * non-blocking. 
*/\n \t\t\tif (pqWait(FALSE, TRUE, conn))\n \t\t\t\treturn EOF;\n \t\t}\n--- 658,688 ----\n \t\t\tptr += sent;\n \t\t\tlen -= sent;\n \t\t}\n+ \n \t\tif (len > 0)\n \t\t{\n \t\t\t/* We didn't send it all, wait till we can send more */\n+ \n+ \t\t\t/* \n+ \t\t\t * if the socket is in non-blocking mode we may need\n+ \t\t\t * to abort here \n+ \t\t\t */\n+ #ifdef USE_SSL\n+ \t\t\t/* can't do anything for our SSL users yet */\n+ \t\t\tif (conn->ssl == NULL)\n+ \t\t\t{\n+ #endif\n+ \t\t\t\tif (pqIsnonblocking(conn))\n+ \t\t\t\t{\n+ \t\t\t\t\t/* shift the contents of the buffer */\n+ \t\t\t\t\tmemmove(conn->outBuffer, ptr, len);\n+ \t\t\t\t\tconn->outCount = len;\n+ \t\t\t\t\treturn EOF;\n+ \t\t\t\t}\n+ #ifdef USE_SSL\n+ \t\t\t}\n+ #endif\n \n \t\t\tif (pqWait(FALSE, TRUE, conn))\n \t\t\t\treturn EOF;\n \t\t}\nIndex: libpq-fe.h\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-fe.h,v\nretrieving revision 1.54\ndiff -u -c -IHeader -I$Id: -r1.54 libpq-fe.h\ncvs diff: conflicting specifications of output style\n*** libpq-fe.h\t2000/01/14 05:33:15\t1.54\n--- libpq-fe.h\t2000/01/14 22:45:33\n***************\n*** 261,266 ****\n--- 261,273 ----\n \textern int\tPQgetlineAsync(PGconn *conn, char *buffer, int bufsize);\n \textern int\tPQputnbytes(PGconn *conn, const char *buffer, int nbytes);\n \textern int\tPQendcopy(PGconn *conn);\n+ \n+ \t/* Set blocking/nonblocking connection to the backend */\n+ \textern int\tPQsetnonblocking(PGconn *conn, int arg);\n+ \textern int\tPQisnonblocking(const PGconn *conn);\n+ \n+ \t/* Force the write buffer to be written (or at least try) */\n+ \textern int\tPQflush(PGconn *conn);\n \n \t/*\n \t * \"Fast path\" interface --- not really recommended for application\nIndex: libpq-int.h\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-int.h,v\nretrieving revision 1.15\ndiff -u -c -IHeader -I$Id: -r1.15 libpq-int.h\ncvs diff: conflicting specifications of output style\n*** libpq-int.h\t2000/01/14 05:33:15\t1.15\n--- libpq-int.h\t2000/01/14 18:32:51\n***************\n*** 214,219 ****\n--- 214,222 ----\n \tint\t\t\tinEnd;\t\t\t/* offset to first position after avail\n \t\t\t\t\t\t\t\t * data */\n \n+ \tint\t\t\tnonblocking;\t/* whether this connection is using a blocking\n+ \t\t\t\t\t\t\t\t * socket to the backend or not */\n+ \n \t/* Buffer for data not yet sent to backend */\n \tchar\t *outBuffer;\t\t/* currently allocated buffer */\n \tint\t\t\toutBufSize;\t\t/* allocated size of buffer */\n***************\n*** 297,301 ****\n--- 300,310 ----\n #define strerror(A) (sys_errlist[(A)])\n #endif\t /* sunos4 */\n #endif\t /* !strerror */\n+ \n+ /* \n+ * this is so that we can check is a connection is non-blocking internally\n+ * without the overhead of a function call\n+ */\n+ #define pqIsnonblocking(conn)\t(conn->nonblocking)\n \n #endif\t /* LIBPQ_INT_H */\n\n\non a side note miscadmin.h causes problems on FreeBSD because it uses\npid_t without having included sys/types.h\n\nthanks!\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n", "msg_date": "Fri, 14 Jan 2000 11:14:30 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Revised nonblocking patches + quasi docs" }, { "msg_contents": "On Fri, 14 Jan 2000, Alfred Perlstein wrote:\n\n> \n> If so then I'll be glad to update the docs myself, otherwise I'd\n> also be happy to provide coupious amounts of plaintext docs 
and\n> comments in my code like I have been so far.\n\nThat's all you need to do...as long as we have documentation that can be\nincluded, it will be included ... if in sgml, all the better, but\nplaintext works also...\n\n\n", "msg_date": "Fri, 14 Jan 2000 15:58:36 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Revised nonblocking patches + quasi docs" }, { "msg_contents": "* admin <[email protected]> [000114 11:35] wrote:\n> Alfred wrote:\n> > * Tom Lane <[email protected]> [000109 08:18] wrote: \n> > > Don Baccus <[email protected]> writes:\n> > > > At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n> > > >> I also object strongly to the lack of documentation.\n> > > \n> > > > ... I know there are some folks who aren't native-english speakers, so\n> > > > perhaps you don't want to require that the implementor of such patches\n> > > > provide the final documentation wording. But the information should\n> > > > be there and spelled out in a form that can be very easily moved to\n> > > > the docs.\n> > > \n> > > Oh, absolutely. Thomas, our master of the docs, has always had the\n> > > policy of \"give me some words, I'll take care of formatting and\n> > > editing...\"\n> > > \n> > > I was probably too harsh on Alfred last night, since in fact his code\n> > > was fairly well commented, and some minimal doco could have been\n> > > extracted from the routine headers. But on a change like this, I think\n> > > some paragraphs of coherent high-level explanation are needed: what it\n> > > does, when and why you'd use it. I didn't see that anywhere...\n> > \n> > I've actually been trying to work on the sgml and failing miserably,\n> > I have no clue how this stuff works (sgml compilation) are you asking\n> > for a couple of paragraphs that describe the proposed changes?\n> > \n> > If so I hope this suffices, if not some help on building the sgml\n> > would be much appreciated:\n> > \n> > --------\n> > \n> The FreeBSD Documentation Project (FDP) has excellent references to get a\n> general idea on building sgml docs. First, you can install the\n> textproc/docproj port or, if you're not running freebsd, refer to the\n> website to see which programs you need. Second, you can read the FDP\n> Primer which details how everything comes together:\n> http://www.freebsd.org/tutorials/docproj-primer/\n> \n> Furthermore, again if you happen to be running FreeBSD, you can grab the\n> doc src using cvsup. The proper reference is also documented somewhere in\n> the Primer or in the Synchronisation chapter in the Handbook.\n> \n> Keep at it, sgml and the docbook stylesheets are really worthwhile when\n> you start getting the hang of it.\n> Marc\n\n'course I run freebsd. 
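One quick way to see what the docproj port actually installed is to poke around under its default prefix; a sketch, assuming the usual /usr/local/share/sgml layout (your paths may differ):\n\n# hypothetical sanity check: locate the SGML catalogs and DocBook stylesheets\nfind /usr/local/share/sgml -name catalog\nfind /usr/local/share/sgml -name '*.dsl'\n\n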
:) I even have the docproj port installed,\nhowever it seems that there's some things missing here, (see the\nend of this message).\n\nI really have no problem with commenting my code nor do I have a\nproblem with producing documentation for these changes, however\nI'm _extremely_ pressed for time with this project, haven't slept\nin 2 days and I and don't have time to fight with building the sgml\nfiles to check that my changes/additions are valid, I'd much rather\nfocus on working on the rest of libpq for blocking issues and getting\nmy app into test mode.\n\nPerhaps someone can offer a step-by-step to building _postgresql's_\ndoc files, or maybe there's a machine out there where this will\nbuild properly and someone can give me an account on it?\n\nIf so then I'll be glad to update the docs myself, otherwise I'd\nalso be happy to provide coupious amounts of plaintext docs and\ncomments in my code like I have been so far.\n\nthanks,\n-Alfred Perlstein - [[email protected]|[email protected]]\n\n~/pgcvs/pgsql/doc/src % gmake \ngmake all\ngmake[1]: Entering directory `/home/bright/pgcvs/pgsql/doc/src'\ngmake -C sgml clean\ngmake[2]: Entering directory `/home/bright/pgcvs/pgsql/doc/src/sgml'\n(rm -rf HTML.manifest *.html *.htm *.1 *.l man1 manl manpage*)\ngmake[2]: Leaving directory `/home/bright/pgcvs/pgsql/doc/src/sgml'\ngmake -C sgml admin.html\ngmake[2]: Entering directory `/home/bright/pgcvs/pgsql/doc/src/sgml'\n(rm -rf *.htm)\njade -D ref -D ../graphics -V %use-id-as-filename% -d /home/users/t/thomas/db118.d/docbook/html/docbook.dsl -t sgml admin.sgml\n\n ^^^^^^^^^^^^---- huh?\n ~/pgcvs/pgsql/doc % find . -name \"*.dsl\"\n ~/pgcvs/pgsql/doc % \n\n continues...\n\njade:admin.sgml:26:59:W: cannot generate system identifier for public text \"-//Davenport//DTD DocBook V3.0//EN\"\njade:admin.sgml:51:0:E: reference to entity \"BOOK\" for which no system identifier could be generated\njade:admin.sgml:26:0: entity was defined here\njade:admin.sgml:51:0:E: DTD did not contain element declaration for document type name\njade:admin.sgml:53:9:E: there is no attribute \"ID\"\njade:admin.sgml:53:16:E: element \"BOOK\" undefined\njade:admin.sgml:57:7:E: element \"TITLE\" undefined\njade:admin.sgml:58:10:E: element \"BOOKINFO\" undefined\njade:admin.sgml:59:14:E: element \"RELEASEINFO\" undefined\njade:admin.sgml:60:13:E: element \"BOOKBIBLIO\" undefined\njade:admin.sgml:61:15:E: element \"AUTHORGROUP\" undefined\njade:admin.sgml:62:15:E: element \"CORPAUTHOR\" undefined\njade:admin.sgml:67:10:E: element \"EDITOR\" undefined\njade:admin.sgml:68:14:E: element \"FIRSTNAME\" undefined\njade:admin.sgml:69:12:E: element \"SURNAME\" undefined\njade:admin.sgml:70:16:E: element \"AFFILIATION\" undefined\njade:admin.sgml:71:13:E: element \"ORGNAME\" undefined\njade:admin.sgml:82:8:E: element \"DATE\" undefined\njade:admin.sgml:85:14:E: element \"LEGALNOTICE\" undefined\njade:admin.sgml:86:8:E: element \"PARA\" undefined\njade:admin.sgml:87:16:E: element \"PRODUCTNAME\" undefined\njade:admin.sgml:87:56:E: general entity \"copy\" not defined and no default entity\njade:admin.sgml:107:13:E: there is no attribute \"ID\"\njade:admin.sgml:107:22:E: element \"PREFACE\" undefined\njade:admin.sgml:108:8:E: element \"TITLE\" undefined\njade:admin.sgml:110:7:E: element \"PARA\" undefined\njade:admin.sgml:111:15:E: element \"PRODUCTNAME\" undefined\njade:admin.sgml:117:15:E: element \"PRODUCTNAME\" undefined\njade:intro-ag.sgml:1:13:E: there is no attribute \"ID\"\njade:intro-ag.sgml:1:23:E: element \"CHAPTER\" 
undefined\njade:intro-ag.sgml:2:8:E: element \"TITLE\" undefined\njade:intro-ag.sgml:4:7:E: element \"PARA\" undefined\njade:intro-ag.sgml:6:14:E: there is no attribute \"URL\"\njade:intro-ag.sgml:6:38:E: element \"ULINK\" undefined\njade:intro-ag.sgml:6:51:E: element \"PRODUCTNAME\" undefined\njade:intro-ag.sgml:10:15:E: element \"PRODUCTNAME\" undefined\njade:intro-ag.sgml:11:74:E: element \"ULINK\" undefined\njade:intro-ag.sgml:12:16:E: element \"PRODUCTNAME\" undefined\njade:intro-ag.sgml:13:19:E: element \"PRODUCTNAME\" undefined\njade:intro-ag.sgml:15:55:E: element \"ACRONYM\" undefined\njade:intro-ag.sgml:16:33:E: element \"ACRONYM\" undefined\njade:intro-ag.sgml:17:23:E: element \"ACRONYM\" undefined\njade:info.sgml:1:6:E: element \"SECT1\" undefined\njade:info.sgml:2:7:E: element \"TITLE\" undefined\njade:info.sgml:4:6:E: element \"PARA\" undefined\njade:info.sgml:8:14:E: element \"VARIABLELIST\" undefined\njade:info.sgml:9:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:10:8:E: element \"TERM\" undefined\njade:info.sgml:11:12:E: element \"LISTITEM\" undefined\njade:info.sgml:12:9:E: element \"PARA\" undefined\njade:info.sgml:18:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:19:8:E: element \"TERM\" undefined\njade:info.sgml:20:12:E: element \"LISTITEM\" undefined\njade:info.sgml:21:9:E: element \"PARA\" undefined\njade:info.sgml:27:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:28:8:E: element \"TERM\" undefined\njade:info.sgml:29:12:E: element \"LISTITEM\" undefined\njade:info.sgml:30:9:E: element \"PARA\" undefined\njade:info.sgml:38:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:39:8:E: element \"TERM\" undefined\njade:info.sgml:40:12:E: element \"LISTITEM\" undefined\njade:info.sgml:41:9:E: element \"PARA\" undefined\njade:info.sgml:47:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:48:8:E: element \"TERM\" undefined\njade:info.sgml:49:12:E: element \"LISTITEM\" undefined\njade:info.sgml:50:9:E: element \"PARA\" undefined\njade:info.sgml:51:33:E: element \"PRODUCTNAME\" undefined\njade:info.sgml:53:17:E: element \"PRODUCTNAME\" undefined\njade:info.sgml:55:15:E: element \"CITETITLE\" undefined\njade:info.sgml:56:41:E: element \"CITETITLE\" undefined\njade:info.sgml:61:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:62:8:E: element \"TERM\" undefined\njade:info.sgml:63:12:E: element \"LISTITEM\" undefined\njade:info.sgml:64:9:E: element \"PARA\" undefined\njade:info.sgml:66:41:E: element \"CITETITLE\" undefined\njade:info.sgml:72:6:E: element \"PARA\" undefined\njade:info.sgml:74:14:E: element \"PRODUCTNAME\" undefined\njade:info.sgml:77:14:E: element \"VARIABLELIST\" undefined\njade:info.sgml:78:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:79:8:E: element \"TERM\" undefined\njade:info.sgml:80:12:E: element \"LISTITEM\" undefined\njade:info.sgml:81:9:E: element \"PARA\" undefined\njade:info.sgml:87:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:88:8:E: element \"TERM\" undefined\njade:info.sgml:89:12:E: element \"LISTITEM\" undefined\njade:info.sgml:90:9:E: element \"PARA\" undefined\njade:info.sgml:97:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:98:8:E: element \"TERM\" undefined\njade:info.sgml:99:12:E: element \"LISTITEM\" undefined\njade:info.sgml:100:9:E: element \"PARA\" undefined\njade:info.sgml:106:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:107:8:E: element \"TERM\" undefined\njade:info.sgml:108:12:E: element \"LISTITEM\" undefined\njade:info.sgml:109:9:E: element 
\"PARA\" undefined\njade:info.sgml:111:32:E: element \"ULINK\" undefined\njade:info.sgml:111:45:E: element \"PRODUCTNAME\" undefined\njade:info.sgml:113:28:E: element \"PRODUCTNAME\" undefined\njade:info.sgml:119:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:120:8:E: element \"TERM\" undefined\njade:info.sgml:121:12:E: element \"LISTITEM\" undefined\njade:info.sgml:122:9:E: element \"PARA\" undefined\njade:info.sgml:124:53:E: element \"ULINK\" undefined\njade:info.sgml:125:67:E: element \"ULINK\" undefined\njade:info.sgml:133:15:E: element \"VARLISTENTRY\" undefined\njade:info.sgml:134:8:E: element \"TERM\" undefined\njade:info.sgml:135:12:E: element \"LISTITEM\" undefined\njade:info.sgml:136:9:E: element \"PARA\" undefined\njade:info.sgml:137:17:E: element \"PRODUCTNAME\" undefined\njade:info.sgml:139:37:E: element \"PRODUCTNAME\" undefined\njade:info.sgml:147:9:E: element \"PARA\" undefined\njade:info.sgml:151:50:E: element \"ULINK\" undefined\njade:info.sgml:152:64:E: element \"ULINK\" undefined\njade:notation.sgml:1:10:E: there is no attribute \"ID\"\njade:notation.sgml:1:23:E: element \"SECT1\" undefined\njade:notation.sgml:2:7:E: element \"TITLE\" undefined\njade:notation.sgml:4:6:E: element \"PARA\" undefined\njade:notation.sgml:6:12:E: element \"FIRSTTERM\" undefined\njade:notation.sgml:8:14:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:10:14:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:13:14:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:16:6:E: element \"PARA\" undefined\njade:notation.sgml:18:14:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:18:48:E: element \"FIRSTTERM\" undefined\njade:notation.sgml:19:32:E: element \"REPLACEABLE\" undefined\njade:notation.sgml:20:27:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:24:31:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:26:28:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:27:11:E: element \"EMPHASIS\" undefined\njade:notation.sgml:28:73:E: element \"FIRSTTERM\" undefined\njade:notation.sgml:29:66:E: element \"FIRSTTERM\" undefined\njade:notation.sgml:33:6:E: element \"PARA\" undefined\njade:notation.sgml:35:12:E: element \"FIRSTTERM\" undefined\njade:notation.sgml:36:13:E: element \"ACRONYM\" undefined\njade:notation.sgml:37:14:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:41:14:E: element \"APPLICATION\" undefined\njade:notation.sgml:44:6:E: element \"PARA\" undefined\njade:notation.sgml:45:18:E: element \"APPLICATION\" undefined\njade:notation.sgml:47:21:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:48:51:E: element \"APPLICATION\" undefined\njade:notation.sgml:50:38:E: element \"APPLICATION\" undefined\njade:notation.sgml:56:6:E: element \"PARA\" undefined\njade:notation.sgml:57:18:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:58:45:E: element \"APPLICATION\" undefined\njade:notation.sgml:60:14:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:68:20:E: element \"SECT1\" undefined\njade:notation.sgml:69:7:E: element \"TITLE\" undefined\njade:notation.sgml:71:6:E: element \"PARA\" undefined\njade:notation.sgml:72:8:E: element \"QUOTE\" undefined\njade:notation.sgml:72:33:E: element \"FILENAME\" undefined\njade:notation.sgml:74:26:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:77:6:E: element \"PARA\" undefined\njade:notation.sgml:79:9:E: element \"QUOTE\" undefined\njade:notation.sgml:79:30:E: element \"QUOTE\" undefined\njade:notation.sgml:81:9:E: element \"QUOTE\" 
undefined\njade:notation.sgml:81:30:E: element \"QUOTE\" undefined\njade:notation.sgml:81:78:E: element \"QUOTE\" undefined\njade:notation.sgml:85:6:E: element \"PARA\" undefined\njade:notation.sgml:86:34:E: element \"QUOTE\" undefined\njade:notation.sgml:86:55:E: element \"QUOTE\" undefined\njade:notation.sgml:87:22:E: element \"QUOTE\" undefined\njade:notation.sgml:90:6:E: element \"PARA\" undefined\njade:notation.sgml:92:71:E: element \"QUOTE\" undefined\njade:notation.sgml:92:73:E: general entity \"gt\" not defined and no default entity\njade:notation.sgml:93:41:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:94:49:E: element \"QUOTE\" undefined\njade:notation.sgml:96:8:E: element \"QUOTE\" undefined\njade:notation.sgml:97:10:E: element \"ACRONYM\" undefined\njade:notation.sgml:97:63:E: element \"QUOTE\" undefined\njade:notation.sgml:101:6:E: element \"NOTE\" undefined\njade:notation.sgml:102:7:E: element \"PARA\" undefined\njade:notation.sgml:103:39:E: element \"PRODUCTNAME\" undefined\njade:notation.sgml:106:42:E: element \"ULINK\" undefined\njade:y2k.sgml:1:15:E: element \"SECT1\" undefined\njade:y2k.sgml:2:7:E: element \"TITLE\" undefined\njade:y2k.sgml:4:6:E: element \"NOTE\" undefined\njade:y2k.sgml:5:8:E: element \"TITLE\" undefined\njade:y2k.sgml:7:7:E: element \"PARA\" undefined\njade:y2k.sgml:9:50:E: element \"ULINK\" undefined\njade:y2k.sgml:14:6:E: element \"PARA\" undefined\njade:y2k.sgml:15:18:E: element \"PRODUCTNAME\" undefined\njade:y2k.sgml:16:18:E: element \"PRODUCTNAME\" undefined\njade:y2k.sgml:21:14:E: element \"ITEMIZEDLIST\" undefined\njade:y2k.sgml:22:11:E: element \"LISTITEM\" undefined\njade:y2k.sgml:23:8:E: element \"PARA\" undefined\njade:y2k.sgml:24:65:E: element \"PRODUCTNAME\" undefined\njade:y2k.sgml:26:36:E: element \"PRODUCTNAME\" undefined\njade:y2k.sgml:31:11:E: element \"LISTITEM\" undefined\njade:y2k.sgml:32:8:E: element \"PARA\" undefined\njade:y2k.sgml:36:19:E: element \"PRODUCTNAME\" undefined\njade:y2k.sgml:42:11:E: element \"LISTITEM\" undefined\njade:y2k.sgml:43:8:E: element \"PARA\" undefined\njade:y2k.sgml:47:65:E: element \"ULINK\" undefined\njade:y2k.sgml:50:15:E: element \"QUOTE\" undefined\njade:y2k.sgml:50:57:E: element \"QUOTE\" undefined\njade:y2k.sgml:51:18:E: element \"QUOTE\" undefined\njade:y2k.sgml:51:60:E: element \"QUOTE\" undefined\njade:y2k.sgml:55:11:E: element \"LISTITEM\" undefined\njade:y2k.sgml:56:8:E: element \"PARA\" undefined\njade:y2k.sgml:59:16:E: element \"PRODUCTNAME\" undefined\njade:y2k.sgml:64:6:E: element \"PARA\" undefined\njade:y2k.sgml:66:56:E: element \"ULINK\" undefined\njade:y2k.sgml:68:53:E: element \"ULINK\" undefined\njade:I: maximum number of errors (200) reached; change with -E option\njade:E: cannot open \"/home/users/t/thomas/db118.d/docbook/html/docbook.dsl\" (No such file or directory)\njade:E: specification document does not have the DSSSL architecture as a base architecture\n\n\n PostgreSQL Administrator's Guide\n Covering v6.5 for general release\n The PostgreSQL Development Team\n \n Thomas\n Lockhart\n Caltech/JPL\n \n \n \n\n (last updated 1999-06-01)\n \n\n PostgreSQL is Copyright 1996-9\n by the Postgres Global Development Group.\n \n \n\n \n\n\n\n Summary\n\n Postgres, \n developed originally in the UC Berkeley Computer Science Department,\n pioneered many of the object-relational concepts\n now becoming available in some commercial databases.\n It provides SQL92/SQL3 language support,\n transaction integrity, and type extensibility.\n PostgreSQL is an open-source 
descendant\n of this original Berkeley code.\n \n \n\n Introduction\n\n This document is the Administrator's Manual for the \n PostgreSQL\n database management system, originally developed at the University\n of California at Berkeley. \n\n PostgreSQL is based on\n Postgres release 4.2. \n The Postgres project, \n led by Professor Michael Stonebraker, was sponsored by the\n Defense Advanced Research Projects Agency (DARPA), the\n Army Research Office (ARO), the National Science \n Foundation (NSF), and ESL, Inc.\n \n\n Resources\n\n This manual set is organized into several parts:\n \n\n Tutorial\n An introduction for new users. Does not cover advanced features.\n \n \n \n\n User's Guide\n General information for users, including available commands and data types.\n \n \n \n\n Programmer's Guide\n Advanced information for application programmers. Topics include\n type and function extensibility, library interfaces,\n and application design issues.\n \n \n \n\n Administrator's Guide\n Installation and management information. List of supported machines.\n \n \n \n\n Developer's Guide\n Information for Postgres developers.\n This is intended for those who are contributing to the\n Postgres project;\n application development information should appear in the \n Programmer's Guide.\n Currently included in the Programmer's Guide.\n \n \n \n\n Reference Manual\n Detailed reference information on command syntax.\n Currently included in the User's Guide.\n \n \n \n \n\n In addition to this manual set, there are other resources to help you with\n Postgres installation and use:\n \n\n man pages\n The man pages have general information on command syntax.\n \n \n \n\n FAQs\n The Frequently Asked Questions (FAQ) documents address both general issues\n and some platform-specific issues.\n \n \n \n\n READMEs\n README files are available for some contributed packages.\n \n \n \n\n Web Site\n The\n Postgres\n web site might have some information not appearing in the distribution.\n There is a mhonarc catalog of mailing list traffic\n which is a rich resource for many topics.\n \n \n \n\n Mailing Lists\n The\n pgsql-general\n (archive)\n mailing list is a good place to have user questions answered.\n Other mailing lists are available; consult the Info Central section of the\n PostgreSQL web site for details.\n \n \n \n\n Yourself!\n Postgres is an open source product. \n As such, it depends on the user community for ongoing support.\n As you begin to use Postgres, \n you will rely on others for help, either through the\n documentation or through the mailing lists. \n Consider contributing your knowledge back. If you learn something\n which is not in the documentation, write it up and contribute it.\n If you add features to the code, contribute it.\n \n\n Even those without a lot of experience can provide corrections and\n minor changes in the documentation, and that is a good way to start.\n The \n pgsql-docs\n (archive)\n mailing list is the place to get going.\n \n \n \n \n\n\n\n Terminology\n\n In the following documentation,\n site\n may be interpreted as the host machine on which \n Postgres is installed.\n Since it is possible to install more than one set of \n Postgres\n databases on a single host, this term more precisely denotes any\n particular set of installed \n Postgres binaries and databases.\n \n\n The \n Postgres superuser\n is the user named postgres\n who owns the Postgres\n binaries and database files. 
As the database superuser, all\n protection mechanisms may be bypassed and any data accessed\n arbitrarily. \n In addition, the Postgres superuser is allowed to execute\n some support programs which are generally not available to all users.\n Note that the Postgres superuser is\n not\n the same as the Unix superuser (which will be referred to as root).\n The superuser should have a non-zero user identifier (UID)\n for security reasons.\n \n\n The\n database administrator\n or DBA, is the person who is responsible for installing \n Postgres with mechanisms to\n enforce a security policy for a site. The DBA can add new users by\n the method described below \n and maintain a set of template databases for use by\n createdb.\n \n\n The postmaster\n is the process that acts as a clearing-house for requests \n to the Postgres system.\n Frontend applications connect to the postmaster,\n which keeps tracks of any system errors and communication between the\n backend processes. The postmaster\n can take several command-line arguments to tune its behavior.\n However, supplying arguments is necessary only if you intend to run multiple\n sites or a non-default site.\n \n\n The Postgres backend\n (the actual executable program postgres) may be executed\n directly from the user shell by the \n Postgres super-user \n (with the database name as an argument). However,\n doing this bypasses the shared buffer pool and lock table associated\n with a postmaster/site, therefore this is not recommended in a multiuser\n site.\n \n\n Notation\n\n ... or /usr/local/pgsql/ \n at the front of a file name is used to represent the\n path to the Postgres superuser's home directory.\n \n\n In a command synopsis, brackets\n ([ and ]) indicate an optional phrase or keyword.\n Anything in braces\n ({ and }) and containing vertical bars (|)\n indicates that you must choose one.\n \n\n In examples, parentheses (( and )) are used to group boolean\n expressions. | is the boolean operator OR.\n \n\n Examples will show commands executed from various accounts and programs.\n Commands executed from the root account will be preceeded with .\n Commands executed from the Postgres\n superuser account will be preceeded with %, while commands\n executed from an unprivileged user's account will be preceeded with\n $.\n SQL commands will be preceeded with =\n or will have no leading prompt, depending on the context.\n \n\n At the time of writing (Postgres v6.5) the notation for\n flagging commands is not universally consistant throughout the documentation set.\n Please report problems to\n the Documentation Mailing List.\n \n \n\n\n\n Y2K Statement\n\n Author\n\n Written by \n Thomas Lockhart\n on 1998-10-22.\n \n \n\n The PostgreSQL Global Development Team provides\n the Postgres software code tree as a public service,\n without warranty and without liability for it's behavior or performance.\n However, at the time of writing:\n \n\n The author of this statement, a volunteer on the Postgres\n support team since November, 1996, is not aware of \n any problems in the Postgres code base related\n to time transitions around Jan 1, 2000 (Y2K).\n \n \n\n The author of this statement is not aware of any reports of Y2K problems \n uncovered in regression testing\n or in other field use of recent or current versions\n of Postgres. 
We might have expected\n to hear about problems if they existed, given the installed base and\n the active participation of users on the support mailing lists.\n \n \n\n To the best of the author's knowledge, the\n assumptions Postgres makes about dates specified with a two-digit year\n are documented in the current \n User's Guide\n in the chapter on data types.\n For two-digit years, the significant transition year is 1970, not 2000;\n e.g. 70-01-01 is interpreted as 1970-01-01,\n whereas 69-01-01 is interpreted as 2069-01-01.\n \n \n\n Any Y2K problems in the underlying OS related to obtaining \"the\n current time\" may propagate into apparent Y2K problems in\n Postgres.\n \n \n \n\n Refer to \n The Gnu Project\n and\n gmake[2]: *** [admin.html] Error 1\ngmake[2]: Leaving directory `/home/bright/pgcvs/pgsql/doc/src/sgml'\ngmake[1]: *** [admin.tar] Error 2\ngmake[1]: Leaving directory `/home/bright/pgcvs/pgsql/doc/src'\ngmake: *** [install] Error 2\n\n\noy!\n", "msg_date": "Fri, 14 Jan 2000 12:05:17 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised nonblocking patches + quasi docs" }, { "msg_contents": "> > > I've actually been trying to work on the sgml and failing miserably,\n> > > I have no clue how this stuff works (sgml compilation) are you asking\n> > > for a couple of paragraphs that describe the proposed changes?\n> > > If so I hope this suffices, if not some help on building the sgml\n> > > would be much appreciated:\n> 'course I run freebsd. :) I even have the docproj port installed,\n> however it seems that there's some things missing here, (see the\n> end of this message).\n> Perhaps someone can offer a step-by-step to building _postgresql's_\n> doc files, or maybe there's a machine out there where this will\n> build properly and someone can give me an account on it?\n\nYou are probably very near having something working. The Postgres docs\nhave an appendix on \"Documentation\", which contains some information\non getting jade built and running. The makefile in doc/src/sgml has\ncomments which indicate what probably needs to be changed, and how to\ndo it (the parameters are set so the stuff builds on postgresql.org,\nbut I have a couple of lines in my Makefile.custom to get things to\nwork at home).\n\nIn particular, the .dsl files are somewhere in your jade installation\ntree (it is in /usr/lib/sgml/stylesheets/nwalsh-modular/{print,html}\non my Linux box).\n\nAsk more specific questions and we'll help you through it, but only\nafter you get some sleep :)\n\nI'm out of town through the weekend, but will be on-list Monday night\nafaik.\n\n - Thomas\n\n> ~/pgcvs/pgsql/doc/src % gmake\n> gmake all\n> gmake[1]: Entering directory `/home/bright/pgcvs/pgsql/doc/src'\n> gmake -C sgml clean\n> gmake[2]: Entering directory `/home/bright/pgcvs/pgsql/doc/src/sgml'\n> (rm -rf HTML.manifest *.html *.htm *.1 *.l man1 manl manpage*)\n> gmake[2]: Leaving directory `/home/bright/pgcvs/pgsql/doc/src/sgml'\n> gmake -C sgml admin.html\n> gmake[2]: Entering directory `/home/bright/pgcvs/pgsql/doc/src/sgml'\n> (rm -rf *.htm)\n> jade -D ref -D ../graphics -V %use-id-as-filename% -d /home/users/t/thomas/db118.d/docbook/html/docbook.dsl -t sgml admin.sgml\n> \n> ^^^^^^^^^^^^---- huh?\n> ~/pgcvs/pgsql/doc % find . 
-name \"*.dsl\"\n> ~/pgcvs/pgsql/doc %\n> \n> continues...\n\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 15 Jan 2000 03:19:15 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revised nonblocking patches + quasi docs" }, { "msg_contents": "* Thomas Lockhart <[email protected]> [000114 19:42] wrote:\n> > > > I've actually been trying to work on the sgml and failing miserably,\n> > > > I have no clue how this stuff works (sgml compilation) are you asking\n> > > > for a couple of paragraphs that describe the proposed changes?\n> > > > If so I hope this suffices, if not some help on building the sgml\n> > > > would be much appreciated:\n> > 'course I run freebsd. :) I even have the docproj port installed,\n> > however it seems that there's some things missing here, (see the\n> > end of this message).\n> > Perhaps someone can offer a step-by-step to building _postgresql's_\n> > doc files, or maybe there's a machine out there where this will\n> > build properly and someone can give me an account on it?\n> \n> You are probably very near having something working. \n\nYes, I can feel it, I just ran to the store to pick up the goat blood\nand candles. :)\n\nWith the help of a friend Jeroen Ruigrok van der Werven, one of\nthe FreeBSD'doc folks I got it working here's what I needed to do:\n\ninstall the textproc/docproj and textproc/docbook from /usr/ports,\nmaybe more packages are needed (dsssl-docbook-modular, dtd-catalogs),\ni was installing everything hoping to get it to work...\n\nsetup my enviornment... (this ought to be mentioned in the docs)\n\nexport SMGL_ROOT=/usr/local/share/sgml\nSGML_CATALOG_FILES=/usr/local/share/sgml/jade/catalog\nSGML_CATALOG_FILES=/usr/local/share/sgml/html/catalog:$SGML_CATALOG_FILES\nSGML_CATALOG_FILES=/usr/local/share/sgml/iso8879/catalog:$SGML_CATALOG_FILES\nSGML_CATALOG_FILES=/usr/local/share/sgml/transpec/catalog:$SGML_CATALOG_FILES\nSGML_CATALOG_FILES=/usr/local/share/sgml/docbook/catalog:$SGML_CATALOG_FILES\nexport SGML_CATALOG_FILES\n\nthen in the pgsql/doc/src dir:\n\ngmake all \\\n\tHSTYLE=/usr/local/share/sgml/docbook/dsssl/modular/html/ \\\n\tPSTYLE=/usr/local/share/sgml/docbook/dsssl/modular/print/ \\\n\nwait a good long time...\n\nviola.\n\nOk, I should have the docs for my code along with some help for \nhapless FreeBSD users trying to work on this stuff in a bit.\n\nI guess this means that Marc can't weasel his way out of doing \ndocumentation anymore, or was that the point all along? :)\n\nBtw, does anyone have some fixes so gvim doesn't barf doing syntax\nhighlighting on these sgml files?\n\n-Alfred\n", "msg_date": "Sat, 15 Jan 2000 06:06:08 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "FreeBSD postgresql doc-HOWTO was: Re: [HACKERS] Revised nonblocking\n\tpatches + quasi docs" }, { "msg_contents": "* Tom Lane <[email protected]> [000109 08:18] wrote:\n> Don Baccus <[email protected]> writes:\n> > At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n> >> I also object strongly to the lack of documentation.\n> \n> > ... I know there are some folks who aren't native-english speakers, so\n> > perhaps you don't want to require that the implementor of such patches\n> > provide the final documentation wording. But the information should\n> > be there and spelled out in a form that can be very easily moved to\n> > the docs.\n> \n> Oh, absolutely. 
Thomas, our master of the docs, has always had the\n> policy of \"give me some words, I'll take care of formatting and\n> editing...\"\n> \n> I was probably too harsh on Alfred last night, since in fact his code\n> was fairly well commented, and some minimal doco could have been\n> extracted from the routine headers. But on a change like this, I think\n> some paragraphs of coherent high-level explanation are needed: what it\n> does, when and why you'd use it. I didn't see that anywhere...\n\nHere's the revised patch, it includes sgml docs and changes to\nensure that old style connections behave the way they are expected\nto:\n\nIndex: doc/src/sgml/libpq.sgml\n===================================================================\nRCS file: /home/pgcvs/pgsql/doc/src/sgml/libpq.sgml,v\nretrieving revision 1.25\ndiff -u -c -r1.25 libpq.sgml\n*** doc/src/sgml/libpq.sgml\t2000/01/14 05:33:13\t1.25\n--- doc/src/sgml/libpq.sgml\t2000/01/17 03:40:30\n***************\n*** 377,382 ****\n--- 377,386 ----\n changed in the future.\n </para>\n <para>\n+ These functions leave the socket in a non-blocking state as if \n+ <function>PQsetnonblocking</function> had been called.\n+ </para>\n+ <para>\n These functions are not thread-safe.\n </para>\n </listitem>\n***************\n*** 1168,1175 ****\n--- 1172,1229 ----\n Applications that do not like these limitations can instead use the\n underlying functions that <function>PQexec</function> is built from:\n <function>PQsendQuery</function> and <function>PQgetResult</function>.\n+ </para>\n+ <para>\n+ Older programs that used this functionality as well as \n+ <function>PQputline</function> and <function>PQputnbytes</function>\n+ could block waiting to send data to the backend, to\n+ address that issue, the function <function>PQsetnonblocking</function>\n+ was added.\n+ </para>\n+ <para>\n+ Old applications can neglect to use <function>PQsetnonblocking</function>\n+ and get the older potentially blocking behavior. Newer programs can use \n+ <function>PQsetnonblocking</function> to achieve a completely non-blocking\n+ connection to the backend.\n \n <itemizedlist>\n+ <listitem>\n+ <para>\n+ <function>PQsetnonblocking</function> Sets the state of the connection\n+ to non-blocking.\n+ <synopsis>\n+ int PQsetnonblocking(PGconn *conn)\n+ </synopsis>\n+ this function will ensure that calls to \n+ <function>PQputline</function>, <function>PQputnbytes</function>,\n+ <function>PQsendQuery</function> and <function>PQendcopy</function>\n+ will not block but instead return an error if they need to be called\n+ again.\n+ </para>\n+ <para>\n+ When a database connection has been set to non-blocking mode and\n+ <function>PQexec</function> is called, it will temporarily set the state\n+ of the connection to blocking until the <function>PQexec</function> \n+ completes. \n+ </para>\n+ <para>\n+ More of libpq is expected to be made safe for \n+ <function>PQsetnonblocking</function> functionality in the near future.\n+ </para>\n+ </listitem>\n+ \n+ <listitem>\n+ <para>\n+ <function>PQisnonblocking</function>\n+ Returns the blocking status of the database connection.\n+ <synopsis>\n+ int PQisnonblocking(const PGconn *conn)\n+ </synopsis>\n+ Returns TRUE if the connection is set to non-blocking mode,\n+ FALSE if blocking.\n+ </para>\n+ </listitem>\n+ \n <listitem>\n <para>\n <function>PQsendQuery</function>\n***************\n*** 1267,1286 ****\n \n <listitem>\n <para>\n <function>PQsocket</function>\n \t Obtain the file descriptor number for the backend connection socket.\n! 
\t A valid descriptor will be >= 0; a result of -1 indicates that\n \t no backend connection is currently open.\n <synopsis>\n int PQsocket(const PGconn *conn);\n </synopsis>\n <function>PQsocket</function> should be used to obtain the backend socket descriptor\n in preparation for executing <function>select</function>(2). This allows an\n! application to wait for either backend responses or other conditions.\n If the result of <function>select</function>(2) indicates that data can be read from\n the backend socket, then <function>PQconsumeInput</function> should be called to read the\n data; after which, <function>PQisBusy</function>, <function>PQgetResult</function>,\n and/or <function>PQnotifies</function> can be used to process the response.\n </para>\n </listitem>\n \n--- 1321,1363 ----\n \n <listitem>\n <para>\n+ <function>PQflush</function> Attempt to flush any data queued to the backend,\n+ returns 0 if successful (or if the send queue is empty) or EOF if it failed for\n+ some reason.\n+ <synopsis>\n+ int PQflush(PGconn *conn);\n+ </synopsis>\n+ <function>PQflush</function> needs to be called on a non-blocking connection \n+ before calling <function>select</function> to determine if a responce has\n+ arrived. If 0 is returned it ensures that there is no data queued to the \n+ backend that has not actually been sent. Only applications that have used\n+ <function>PQsetnonblocking</function> have a need for this.\n+ </para>\n+ </listitem>\n+ \n+ <listitem>\n+ <para>\n <function>PQsocket</function>\n \t Obtain the file descriptor number for the backend connection socket.\n! \t A valid descriptor will be &gt;= 0; a result of -1 indicates that\n \t no backend connection is currently open.\n <synopsis>\n int PQsocket(const PGconn *conn);\n </synopsis>\n <function>PQsocket</function> should be used to obtain the backend socket descriptor\n in preparation for executing <function>select</function>(2). This allows an\n! application using a blocking connection to wait for either backend responses or\n! 
other conditions.\n If the result of <function>select</function>(2) indicates that data can be read from\n the backend socket, then <function>PQconsumeInput</function> should be called to read the\n data; after which, <function>PQisBusy</function>, <function>PQgetResult</function>,\n and/or <function>PQnotifies</function> can be used to process the response.\n+ </para>\n+ <para>\n+ Non-blocking connections (that have used <function>PQsetnonblocking</function>)\n+ should not use <function>select</function> until <function>PQflush</function>\n+ has returned 0 indicating that there is no buffered data waiting to be sent\n+ to the backend.\n </para>\n </listitem>\n \nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.111\ndiff -u -c -r1.111 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t2000/01/16 21:18:52\t1.111\n--- src/interfaces/libpq/fe-connect.c\t2000/01/17 02:35:56\n***************\n*** 594,624 ****\n \treturn 0;\n }\n \n- \n- /* ----------\n- * connectMakeNonblocking -\n- * Make a connection non-blocking.\n- * Returns 1 if successful, 0 if not.\n- * ----------\n- */\n- static int\n- connectMakeNonblocking(PGconn *conn)\n- {\n- #ifndef WIN32\n- \tif (fcntl(conn->sock, F_SETFL, O_NONBLOCK) < 0)\n- #else\n- \tif (ioctlsocket(conn->sock, FIONBIO, &on) != 0)\n- #endif\n- \t{\n- \t\tprintfPQExpBuffer(&conn->errorMessage,\n- \t\t\t\t\t\t \"connectMakeNonblocking -- fcntl() failed: errno=%d\\n%s\\n\",\n- \t\t\t\t\t\t errno, strerror(errno));\n- \t\treturn 0;\n- \t}\n- \n- \treturn 1;\n- }\n- \n /* ----------\n * connectNoDelay -\n * Sets the TCP_NODELAY socket option.\n--- 594,599 ----\n***************\n*** 789,795 ****\n \t * Ewan Mellor <[email protected]>.\n \t * ---------- */\n #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n! \tif (!connectMakeNonblocking(conn))\n \t\tgoto connect_errReturn;\n #endif\t\n \n--- 764,770 ----\n \t * Ewan Mellor <[email protected]>.\n \t * ---------- */\n #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n! \tif (PQsetnonblocking(conn, TRUE) != 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n***************\n*** 898,904 ****\n \t/* This makes the connection non-blocking, for all those cases which forced us\n \t not to do it above. */\n #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n! \tif (!connectMakeNonblocking(conn))\n \t\tgoto connect_errReturn;\n #endif\t\n \n--- 873,879 ----\n \t/* This makes the connection non-blocking, for all those cases which forced us\n \t not to do it above. */\n #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n! 
\tif (PQsetnonblocking(conn, TRUE) != 0)\n \t\tgoto connect_errReturn;\n #endif\t\n \n***************\n*** 1720,1725 ****\n--- 1695,1701 ----\n \tconn->inBuffer = (char *) malloc(conn->inBufSize);\n \tconn->outBufSize = 8 * 1024;\n \tconn->outBuffer = (char *) malloc(conn->outBufSize);\n+ \tconn->nonblocking = FALSE;\n \tinitPQExpBuffer(&conn->errorMessage);\n \tinitPQExpBuffer(&conn->workBuffer);\n \tif (conn->inBuffer == NULL ||\n***************\n*** 1830,1835 ****\n--- 1806,1812 ----\n \tconn->lobjfuncs = NULL;\n \tconn->inStart = conn->inCursor = conn->inEnd = 0;\n \tconn->outCount = 0;\n+ \tconn->nonblocking = FALSE;\n \n }\n \nIndex: src/interfaces/libpq/fe-exec.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-exec.c,v\nretrieving revision 1.86\ndiff -u -c -r1.86 fe-exec.c\n*** src/interfaces/libpq/fe-exec.c\t1999/11/11 00:10:14\t1.86\n--- src/interfaces/libpq/fe-exec.c\t2000/01/14 22:47:07\n***************\n*** 13,18 ****\n--- 13,19 ----\n */\n #include <errno.h>\n #include <ctype.h>\n+ #include <fcntl.h>\n \n #include \"postgres.h\"\n #include \"libpq-fe.h\"\n***************\n*** 24,30 ****\n #include <unistd.h>\n #endif\n \n- \n /* keep this in same order as ExecStatusType in libpq-fe.h */\n const char *const pgresStatus[] = {\n \t\"PGRES_EMPTY_QUERY\",\n--- 25,30 ----\n***************\n*** 514,526 ****\n \tconn->curTuple = NULL;\n \n \t/* send the query to the backend; */\n! \t/* the frontend-backend protocol uses 'Q' to designate queries */\n! \tif (pqPutnchar(\"Q\", 1, conn) ||\n! \t\tpqPuts(query, conn) ||\n! \t\tpqFlush(conn))\n \t{\n! \t\thandleSendFailure(conn);\n! \t\treturn 0;\n \t}\n \n \t/* OK, it's launched! */\n--- 514,566 ----\n \tconn->curTuple = NULL;\n \n \t/* send the query to the backend; */\n! \n! \t/*\n! \t * in order to guarantee that we don't send a partial query \n! \t * where we would become out of sync with the backend and/or\n! \t * block during a non-blocking connection we must first flush\n! \t * the send buffer before sending more data\n! \t *\n! \t * an alternative is to implement 'queue reservations' where\n! \t * we are able to roll up a transaction \n! \t * (the 'Q' along with our query) and make sure we have\n! \t * enough space for it all in the send buffer.\n! \t */\n! \tif (pqIsnonblocking(conn))\n \t{\n! \t\t/*\n! \t\t * the buffer must have emptied completely before we allow\n! \t\t * a new query to be buffered\n! \t\t */\n! \t\tif (pqFlush(conn))\n! \t\t\treturn 0;\n! \t\t/* 'Q' == queries */\n! \t\t/* XXX: if we fail here we really ought to not block */\n! \t\tif (pqPutnchar(\"Q\", 1, conn) ||\n! \t\t\tpqPuts(query, conn))\n! \t\t{\n! \t\t\thandleSendFailure(conn);\t\n! \t\t\treturn 0;\n! \t\t}\n! \t\t/*\n! \t\t * give the data a push, ignore the return value as\n! \t\t * ConsumeInput() will do any aditional flushing if needed\n! \t\t */\n! \t\t(void) pqFlush(conn);\t\n! \t}\n! \telse\n! \t{\n! \t\t/* \n! \t\t * the frontend-backend protocol uses 'Q' to \n! \t\t * designate queries \n! \t\t */\n! \t\tif (pqPutnchar(\"Q\", 1, conn) ||\n! \t\t\tpqPuts(query, conn) ||\n! \t\t\tpqFlush(conn))\n! \t\t{\n! \t\t\thandleSendFailure(conn);\n! \t\t\treturn 0;\n! \t\t}\n \t}\n \n \t/* OK, it's launched! 
*/\n***************\n*** 574,580 ****\n--- 614,630 ----\n \t * we will NOT block waiting for more input.\n \t */\n \tif (pqReadData(conn) < 0)\n+ \t{\n+ \t\t/*\n+ \t\t * for non-blocking connections\n+ \t\t * try to flush the send-queue otherwise we may never get a \n+ \t\t * responce for something that may not have already been sent\n+ \t\t * because it's in our write buffer!\n+ \t\t */\n+ \t\tif (pqIsnonblocking(conn))\n+ \t\t\t(void) pqFlush(conn);\n \t\treturn 0;\n+ \t}\n \t/* Parsing of the data waits till later. */\n \treturn 1;\n }\n***************\n*** 1088,1093 ****\n--- 1138,1153 ----\n {\n \tPGresult *result;\n \tPGresult *lastResult;\n+ \tbool\tsavedblocking;\n+ \n+ \t/*\n+ \t * we assume anyone calling PQexec wants blocking behaviour,\n+ \t * we force the blocking status of the connection to blocking\n+ \t * for the duration of this function and restore it on return\n+ \t */\n+ \tsavedblocking = pqIsnonblocking(conn);\n+ \tif (PQsetnonblocking(conn, FALSE) == -1)\n+ \t\treturn NULL;\n \n \t/*\n \t * Silently discard any prior query result that application didn't\n***************\n*** 1102,1115 ****\n \t\t\tPQclear(result);\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n! \t\t\treturn NULL;\n \t\t}\n \t\tPQclear(result);\n \t}\n \n \t/* OK to send the message */\n \tif (!PQsendQuery(conn, query))\n! \t\treturn NULL;\n \n \t/*\n \t * For backwards compatibility, return the last result if there are\n--- 1162,1176 ----\n \t\t\tPQclear(result);\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n! \t\t\t/* restore blocking status */\n! \t\t\tgoto errout;\n \t\t}\n \t\tPQclear(result);\n \t}\n \n \t/* OK to send the message */\n \tif (!PQsendQuery(conn, query))\n! \t\tgoto errout;\t/* restore blocking status */\n \n \t/*\n \t * For backwards compatibility, return the last result if there are\n***************\n*** 1142,1148 ****\n--- 1203,1217 ----\n \t\t\tresult->resultStatus == PGRES_COPY_OUT)\n \t\t\tbreak;\n \t}\n+ \n+ \tif (PQsetnonblocking(conn, savedblocking) == -1)\n+ \t\treturn NULL;\n \treturn lastResult;\n+ \n+ errout:\n+ \tif (PQsetnonblocking(conn, savedblocking) == -1)\n+ \t\treturn NULL;\n+ \treturn NULL;\n }\n \n \n***************\n*** 1431,1438 ****\n \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n \t\treturn 1;\n \t}\n \n! \t(void) pqFlush(conn);\t\t/* make sure no data is waiting to be sent */\n \n \t/* Return to active duty */\n \tconn->asyncStatus = PGASYNC_BUSY;\n--- 1500,1516 ----\n \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n \t\treturn 1;\n \t}\n+ \n+ \t/*\n+ \t * make sure no data is waiting to be sent, \n+ \t * abort if we are non-blocking and the flush fails\n+ \t */\n+ \tif (pqFlush(conn) && pqIsnonblocking(conn))\n+ \t\treturn (1);\n \n! \t/* non blocking connections may have to abort at this point. */\n! \tif (pqIsnonblocking(conn) && PQisBusy(conn))\n! 
\t\treturn (1);\n \n \t/* Return to active duty */\n \tconn->asyncStatus = PGASYNC_BUSY;\n***************\n*** 2025,2028 ****\n--- 2103,2192 ----\n \t\treturn 1;\n \telse\n \t\treturn 0;\n+ }\n+ \n+ /* PQsetnonblocking:\n+ \t sets the PGconn's database connection non-blocking if the arg is TRUE\n+ \t or makes it non-blocking if the arg is FALSE, this will not protect\n+ \t you from PQexec(), you'll only be safe when using the non-blocking\n+ \t API\n+ \t Needs to be called only on a connected database connection.\n+ */\n+ \n+ int\n+ PQsetnonblocking(PGconn *conn, int arg)\n+ {\n+ \tint\tfcntlarg;\n+ \n+ \targ = (arg == TRUE) ? 1 : 0;\n+ \t/* early out if the socket is already in the state requested */\n+ \tif (arg == conn->nonblocking)\n+ \t\treturn (0);\n+ \n+ \t/*\n+ \t * to guarantee constancy for flushing/query/result-polling behavior\n+ \t * we need to flush the send queue at this point in order to guarantee\n+ \t * proper behavior.\n+ \t * this is ok because either they are making a transition\n+ \t * _from_ or _to_ blocking mode, either way we can block them.\n+ \t */\n+ \t/* if we are going from blocking to non-blocking flush here */\n+ \tif (!pqIsnonblocking(conn) && pqFlush(conn))\n+ \t\treturn (-1);\n+ \n+ \n+ #ifdef USE_SSL\n+ \tif (conn->ssl)\n+ \t{\n+ \t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\"PQsetnonblocking() -- not supported when using SSL\\n\");\n+ \t\treturn (-1);\n+ \t}\n+ #endif /* USE_SSL */\n+ \n+ #ifndef WIN32\n+ \tfcntlarg = fcntl(conn->sock, F_GETFL, 0);\n+ \tif (fcntlarg == -1)\n+ \t\treturn (-1);\n+ \n+ \tif ((arg == TRUE && \n+ \t\tfcntl(conn->sock, F_SETFL, fcntlarg | O_NONBLOCK) == -1) ||\n+ \t\t(arg == FALSE &&\n+ \t\tfcntl(conn->sock, F_SETFL, fcntlarg & ~O_NONBLOCK) == -1)) \n+ #else\n+ \tfcntlarg = arg;\n+ \tif (ioctlsocket(conn->sock, FIONBIO, &fcntlarg) != 0)\n+ #endif\n+ \t{\n+ \t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\"PQsetblocking() -- unable to set nonblocking status to %s\\n\",\n+ \t\t\targ == TRUE ? 
\"TRUE\" : \"FALSE\");\n+ \t\treturn (-1);\n+ \t}\n+ \n+ \tconn->nonblocking = arg;\n+ \n+ \t/* if we are going from non-blocking to blocking flush here */\n+ \tif (pqIsnonblocking(conn) && pqFlush(conn))\n+ \t\treturn (-1);\n+ \n+ \treturn (0);\n+ }\n+ \n+ /* return the blocking status of the database connection, TRUE == nonblocking,\n+ \t FALSE == blocking\n+ */\n+ int\n+ PQisnonblocking(const PGconn *conn)\n+ {\n+ \n+ \treturn (pqIsnonblocking(conn));\n+ }\n+ \n+ /* try to force data out, really only useful for non-blocking users */\n+ int\n+ PQflush(PGconn *conn)\n+ {\n+ \n+ \treturn (pqFlush(conn));\n }\nIndex: src/interfaces/libpq/fe-misc.c\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-misc.c,v\nretrieving revision 1.33\ndiff -u -c -r1.33 fe-misc.c\n*** src/interfaces/libpq/fe-misc.c\t1999/11/30 03:08:19\t1.33\n--- src/interfaces/libpq/fe-misc.c\t2000/01/12 03:12:14\n***************\n*** 86,91 ****\n--- 86,122 ----\n {\n \tsize_t avail = Max(conn->outBufSize - conn->outCount, 0);\n \n+ \t/*\n+ \t * if we are non-blocking and the send queue is too full to buffer this\n+ \t * request then try to flush some and return an error \n+ \t */\n+ \tif (pqIsnonblocking(conn) && nbytes > avail && pqFlush(conn))\n+ \t{\n+ \t\t/* \n+ \t\t * even if the flush failed we may still have written some\n+ \t\t * data, recalculate the size of the send-queue relative\n+ \t\t * to the amount we have to send, we may be able to queue it\n+ \t\t * afterall even though it's not sent to the database it's\n+ \t\t * ok, any routines that check the data coming from the\n+ \t\t * database better call pqFlush() anyway.\n+ \t\t */\n+ \t\tif (nbytes > Max(conn->outBufSize - conn->outCount, 0))\n+ \t\t{\n+ \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n+ \t\t\t\t\"pqPutBytes -- pqFlush couldn't flush enough\"\n+ \t\t\t\t\" data: space available: %d, space needed %d\\n\",\n+ \t\t\t\tMax(conn->outBufSize - conn->outCount, 0), nbytes);\n+ \t\t\treturn EOF;\n+ \t\t}\n+ \t}\n+ \n+ \t/* \n+ \t * is the amount of data to be sent is larger than the size of the\n+ \t * output buffer then we must flush it to make more room.\n+ \t *\n+ \t * the code above will make sure the loop conditional is never \n+ \t * true for non-blocking connections\n+ \t */\n \twhile (nbytes > avail)\n \t{\n \t\tmemcpy(conn->outBuffer + conn->outCount, s, avail);\n***************\n*** 548,553 ****\n--- 579,592 ----\n \t\treturn EOF;\n \t}\n \n+ \t/* \n+ \t * don't try to send zero data, allows us to use this function\n+ \t * without too much worry about overhead\n+ \t */\n+ \tif (len == 0)\n+ \t\treturn (0);\n+ \n+ \t/* while there's still data to send */\n \twhile (len > 0)\n \t{\n \t\t/* Prevent being SIGPIPEd if backend has closed the connection. */\n***************\n*** 556,561 ****\n--- 595,601 ----\n #endif\n \n \t\tint sent;\n+ \n #ifdef USE_SSL\n \t\tif (conn->ssl) \n \t\t sent = SSL_write(conn->ssl, ptr, len);\n***************\n*** 585,590 ****\n--- 625,632 ----\n \t\t\t\tcase EWOULDBLOCK:\n \t\t\t\t\tbreak;\n #endif\n+ \t\t\t\tcase EINTR:\n+ \t\t\t\t\tcontinue;\n \n \t\t\t\tcase EPIPE:\n #ifdef ECONNRESET\n***************\n*** 616,628 ****\n \t\t\tptr += sent;\n \t\t\tlen -= sent;\n \t\t}\n \t\tif (len > 0)\n \t\t{\n \t\t\t/* We didn't send it all, wait till we can send more */\n \n- \t\t\t/* At first glance this looks as though it should block. I think\n- \t\t\t * that it will be OK though, as long as the socket is\n- \t\t\t * non-blocking. 
*/\n \t\t\tif (pqWait(FALSE, TRUE, conn))\n \t\t\t\treturn EOF;\n \t\t}\n--- 658,688 ----\n \t\t\tptr += sent;\n \t\t\tlen -= sent;\n \t\t}\n+ \n \t\tif (len > 0)\n \t\t{\n \t\t\t/* We didn't send it all, wait till we can send more */\n+ \n+ \t\t\t/* \n+ \t\t\t * if the socket is in non-blocking mode we may need\n+ \t\t\t * to abort here \n+ \t\t\t */\n+ #ifdef USE_SSL\n+ \t\t\t/* can't do anything for our SSL users yet */\n+ \t\t\tif (conn->ssl == NULL)\n+ \t\t\t{\n+ #endif\n+ \t\t\t\tif (pqIsnonblocking(conn))\n+ \t\t\t\t{\n+ \t\t\t\t\t/* shift the contents of the buffer */\n+ \t\t\t\t\tmemmove(conn->outBuffer, ptr, len);\n+ \t\t\t\t\tconn->outCount = len;\n+ \t\t\t\t\treturn EOF;\n+ \t\t\t\t}\n+ #ifdef USE_SSL\n+ \t\t\t}\n+ #endif\n \n \t\t\tif (pqWait(FALSE, TRUE, conn))\n \t\t\t\treturn EOF;\n \t\t}\nIndex: src/interfaces/libpq/libpq-fe.h\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-fe.h,v\nretrieving revision 1.55\ndiff -u -c -r1.55 libpq-fe.h\n*** src/interfaces/libpq/libpq-fe.h\t2000/01/15 05:37:21\t1.55\n--- src/interfaces/libpq/libpq-fe.h\t2000/01/17 02:35:56\n***************\n*** 263,268 ****\n--- 263,275 ----\n \textern int\tPQputnbytes(PGconn *conn, const char *buffer, int nbytes);\n \textern int\tPQendcopy(PGconn *conn);\n \n+ \t/* Set blocking/nonblocking connection to the backend */\n+ \textern int\tPQsetnonblocking(PGconn *conn, int arg);\n+ \textern int\tPQisnonblocking(const PGconn *conn);\n+ \n+ \t/* Force the write buffer to be written (or at least try) */\n+ \textern int\tPQflush(PGconn *conn);\n+ \n \t/*\n \t * \"Fast path\" interface --- not really recommended for application\n \t * use\nIndex: src/interfaces/libpq/libpq-int.h\n===================================================================\nRCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-int.h,v\nretrieving revision 1.16\ndiff -u -c -r1.16 libpq-int.h\n*** src/interfaces/libpq/libpq-int.h\t2000/01/15 05:37:21\t1.16\n--- src/interfaces/libpq/libpq-int.h\t2000/01/17 02:35:56\n***************\n*** 214,219 ****\n--- 214,222 ----\n \tint\t\t\tinEnd;\t\t\t/* offset to first position after avail\n \t\t\t\t\t\t\t\t * data */\n \n+ \tint\t\t\tnonblocking;\t/* whether this connection is using a blocking\n+ \t\t\t\t\t\t\t\t * socket to the backend or not */\n+ \n \t/* Buffer for data not yet sent to backend */\n \tchar\t *outBuffer;\t\t/* currently allocated buffer */\n \tint\t\t\toutBufSize;\t\t/* allocated size of buffer */\n***************\n*** 299,303 ****\n--- 302,312 ----\n #define strerror(A) (sys_errlist[(A)])\n #endif\t /* sunos4 */\n #endif\t /* !strerror */\n+ \n+ /* \n+ * this is so that we can check is a connection is non-blocking internally\n+ * without the overhead of a function call\n+ */\n+ #define pqIsnonblocking(conn)\t(conn->nonblocking)\n \n #endif\t /* LIBPQ_INT_H */\n\n", "msg_date": "Sun, 16 Jan 2000 16:00:45 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "docs done Re: [HACKERS] LIBPQ patches ..." }, { "msg_contents": "> setup my enviornment... (this ought to be mentioned in the docs)\n\nBut afaik this isn't required for me to run on postgresql.org, a\nFreeBSD machine set up by Marc/scrappy.\n\n> then in the pgsql/doc/src dir:\n> gmake all \\\n> HSTYLE=/usr/local/share/sgml/docbook/dsssl/modular/html/ \\\n> PSTYLE=/usr/local/share/sgml/docbook/dsssl/modular/print/ \\\n\nThat works too. 
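Rather than typing those settings on every gmake run they can live in a makefile; a sketch, assuming the FreeBSD port paths quoted above (adjust to wherever your stylesheets actually are):\n\n# append the two style settings to the custom makefile, run from the top of\n# the pgsql tree (the paths are assumptions taken from the quoted invocation)\ncat >> src/Makefile.custom <<EOF\nHSTYLE= /usr/local/share/sgml/docbook/dsssl/modular/html/\nPSTYLE= /usr/local/share/sgml/docbook/dsssl/modular/print/\nEOF\n\n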
I usually just set up a src/Makefile.custom with the\ntwo lines defining HSTYLE and PSTYLE.\n\n> Btw, does anyone have some fixes so gvim doesn't barf doing syntax\n> highlighting on these sgml files?\n\nLet us know when you find them; I can help with emacs...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 18 Jan 2000 02:58:18 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD postgresql doc-HOWTO was: Re: [HACKERS] Revised\n\tnonblocking patches + quasi docs" }, { "msg_contents": "Applied.\n\n> * Tom Lane <[email protected]> [000109 08:18] wrote:\n> > Don Baccus <[email protected]> writes:\n> > > At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n> > >> I also object strongly to the lack of documentation.\n> > \n> > > ... I know there are some folks who aren't native-english speakers, so\n> > > perhaps you don't want to require that the implementor of such patches\n> > > provide the final documentation wording. But the information should\n> > > be there and spelled out in a form that can be very easily moved to\n> > > the docs.\n> > \n> > Oh, absolutely. Thomas, our master of the docs, has always had the\n> > policy of \"give me some words, I'll take care of formatting and\n> > editing...\"\n> > \n> > I was probably too harsh on Alfred last night, since in fact his code\n> > was fairly well commented, and some minimal doco could have been\n> > extracted from the routine headers. But on a change like this, I think\n> > some paragraphs of coherent high-level explanation are needed: what it\n> > does, when and why you'd use it. I didn't see that anywhere...\n> \n> Here's the revised patch, it includes sgml docs and changes to\n> ensure that old style connections behave the way they are expected\n> to:\n> \n> Index: doc/src/sgml/libpq.sgml\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/doc/src/sgml/libpq.sgml,v\n> retrieving revision 1.25\n> diff -u -c -r1.25 libpq.sgml\n> *** doc/src/sgml/libpq.sgml\t2000/01/14 05:33:13\t1.25\n> --- doc/src/sgml/libpq.sgml\t2000/01/17 03:40:30\n> ***************\n> *** 377,382 ****\n> --- 377,386 ----\n> changed in the future.\n> </para>\n> <para>\n> + These functions leave the socket in a non-blocking state as if \n> + <function>PQsetnonblocking</function> had been called.\n> + </para>\n> + <para>\n> These functions are not thread-safe.\n> </para>\n> </listitem>\n> ***************\n> *** 1168,1175 ****\n> --- 1172,1229 ----\n> Applications that do not like these limitations can instead use the\n> underlying functions that <function>PQexec</function> is built from:\n> <function>PQsendQuery</function> and <function>PQgetResult</function>.\n> + </para>\n> + <para>\n> + Older programs that used this functionality as well as \n> + <function>PQputline</function> and <function>PQputnbytes</function>\n> + could block waiting to send data to the backend, to\n> + address that issue, the function <function>PQsetnonblocking</function>\n> + was added.\n> + </para>\n> + <para>\n> + Old applications can neglect to use <function>PQsetnonblocking</function>\n> + and get the older potentially blocking behavior. 
Newer programs can use \n> + <function>PQsetnonblocking</function> to achieve a completely non-blocking\n> + connection to the backend.\n> \n> <itemizedlist>\n> + <listitem>\n> + <para>\n> + <function>PQsetnonblocking</function> Sets the state of the connection\n> + to non-blocking.\n> + <synopsis>\n> + int PQsetnonblocking(PGconn *conn)\n> + </synopsis>\n> + this function will ensure that calls to \n> + <function>PQputline</function>, <function>PQputnbytes</function>,\n> + <function>PQsendQuery</function> and <function>PQendcopy</function>\n> + will not block but instead return an error if they need to be called\n> + again.\n> + </para>\n> + <para>\n> + When a database connection has been set to non-blocking mode and\n> + <function>PQexec</function> is called, it will temporarily set the state\n> + of the connection to blocking until the <function>PQexec</function> \n> + completes. \n> + </para>\n> + <para>\n> + More of libpq is expected to be made safe for \n> + <function>PQsetnonblocking</function> functionality in the near future.\n> + </para>\n> + </listitem>\n> + \n> + <listitem>\n> + <para>\n> + <function>PQisnonblocking</function>\n> + Returns the blocking status of the database connection.\n> + <synopsis>\n> + int PQisnonblocking(const PGconn *conn)\n> + </synopsis>\n> + Returns TRUE if the connection is set to non-blocking mode,\n> + FALSE if blocking.\n> + </para>\n> + </listitem>\n> + \n> <listitem>\n> <para>\n> <function>PQsendQuery</function>\n> ***************\n> *** 1267,1286 ****\n> \n> <listitem>\n> <para>\n> <function>PQsocket</function>\n> \t Obtain the file descriptor number for the backend connection socket.\n> ! \t A valid descriptor will be >= 0; a result of -1 indicates that\n> \t no backend connection is currently open.\n> <synopsis>\n> int PQsocket(const PGconn *conn);\n> </synopsis>\n> <function>PQsocket</function> should be used to obtain the backend socket descriptor\n> in preparation for executing <function>select</function>(2). This allows an\n> ! application to wait for either backend responses or other conditions.\n> If the result of <function>select</function>(2) indicates that data can be read from\n> the backend socket, then <function>PQconsumeInput</function> should be called to read the\n> data; after which, <function>PQisBusy</function>, <function>PQgetResult</function>,\n> and/or <function>PQnotifies</function> can be used to process the response.\n> </para>\n> </listitem>\n> \n> --- 1321,1363 ----\n> \n> <listitem>\n> <para>\n> + <function>PQflush</function> Attempt to flush any data queued to the backend,\n> + returns 0 if successful (or if the send queue is empty) or EOF if it failed for\n> + some reason.\n> + <synopsis>\n> + int PQflush(PGconn *conn);\n> + </synopsis>\n> + <function>PQflush</function> needs to be called on a non-blocking connection \n> + before calling <function>select</function> to determine if a responce has\n> + arrived. If 0 is returned it ensures that there is no data queued to the \n> + backend that has not actually been sent. Only applications that have used\n> + <function>PQsetnonblocking</function> have a need for this.\n> + </para>\n> + </listitem>\n> + \n> + <listitem>\n> + <para>\n> <function>PQsocket</function>\n> \t Obtain the file descriptor number for the backend connection socket.\n> ! 
\t A valid descriptor will be &gt;= 0; a result of -1 indicates that\n> \t no backend connection is currently open.\n> <synopsis>\n> int PQsocket(const PGconn *conn);\n> </synopsis>\n> <function>PQsocket</function> should be used to obtain the backend socket descriptor\n> in preparation for executing <function>select</function>(2). This allows an\n> ! application using a blocking connection to wait for either backend responses or\n> ! other conditions.\n> If the result of <function>select</function>(2) indicates that data can be read from\n> the backend socket, then <function>PQconsumeInput</function> should be called to read the\n> data; after which, <function>PQisBusy</function>, <function>PQgetResult</function>,\n> and/or <function>PQnotifies</function> can be used to process the response.\n> + </para>\n> + <para>\n> + Non-blocking connections (that have used <function>PQsetnonblocking</function>)\n> + should not use <function>select</function> until <function>PQflush</function>\n> + has returned 0 indicating that there is no buffered data waiting to be sent\n> + to the backend.\n> </para>\n> </listitem>\n> \n> Index: src/interfaces/libpq/fe-connect.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.111\n> diff -u -c -r1.111 fe-connect.c\n> *** src/interfaces/libpq/fe-connect.c\t2000/01/16 21:18:52\t1.111\n> --- src/interfaces/libpq/fe-connect.c\t2000/01/17 02:35:56\n> ***************\n> *** 594,624 ****\n> \treturn 0;\n> }\n> \n> - \n> - /* ----------\n> - * connectMakeNonblocking -\n> - * Make a connection non-blocking.\n> - * Returns 1 if successful, 0 if not.\n> - * ----------\n> - */\n> - static int\n> - connectMakeNonblocking(PGconn *conn)\n> - {\n> - #ifndef WIN32\n> - \tif (fcntl(conn->sock, F_SETFL, O_NONBLOCK) < 0)\n> - #else\n> - \tif (ioctlsocket(conn->sock, FIONBIO, &on) != 0)\n> - #endif\n> - \t{\n> - \t\tprintfPQExpBuffer(&conn->errorMessage,\n> - \t\t\t\t\t\t \"connectMakeNonblocking -- fcntl() failed: errno=%d\\n%s\\n\",\n> - \t\t\t\t\t\t errno, strerror(errno));\n> - \t\treturn 0;\n> - \t}\n> - \n> - \treturn 1;\n> - }\n> - \n> /* ----------\n> * connectNoDelay -\n> * Sets the TCP_NODELAY socket option.\n> --- 594,599 ----\n> ***************\n> *** 789,795 ****\n> \t * Ewan Mellor <[email protected]>.\n> \t * ---------- */\n> #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n> ! \tif (!connectMakeNonblocking(conn))\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> --- 764,770 ----\n> \t * Ewan Mellor <[email protected]>.\n> \t * ---------- */\n> #if (!defined(WIN32) || defined(WIN32_NON_BLOCKING_CONNECTIONS)) && !defined(USE_SSL)\n> ! \tif (PQsetnonblocking(conn, TRUE) != 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> ***************\n> *** 898,904 ****\n> \t/* This makes the connection non-blocking, for all those cases which forced us\n> \t not to do it above. */\n> #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n> ! \tif (!connectMakeNonblocking(conn))\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> --- 873,879 ----\n> \t/* This makes the connection non-blocking, for all those cases which forced us\n> \t not to do it above. */\n> #if (defined(WIN32) && !defined(WIN32_NON_BLOCKING_CONNECTIONS)) || defined(USE_SSL)\n> ! 
\tif (PQsetnonblocking(conn, TRUE) != 0)\n> \t\tgoto connect_errReturn;\n> #endif\t\n> \n> ***************\n> *** 1720,1725 ****\n> --- 1695,1701 ----\n> \tconn->inBuffer = (char *) malloc(conn->inBufSize);\n> \tconn->outBufSize = 8 * 1024;\n> \tconn->outBuffer = (char *) malloc(conn->outBufSize);\n> + \tconn->nonblocking = FALSE;\n> \tinitPQExpBuffer(&conn->errorMessage);\n> \tinitPQExpBuffer(&conn->workBuffer);\n> \tif (conn->inBuffer == NULL ||\n> ***************\n> *** 1830,1835 ****\n> --- 1806,1812 ----\n> \tconn->lobjfuncs = NULL;\n> \tconn->inStart = conn->inCursor = conn->inEnd = 0;\n> \tconn->outCount = 0;\n> + \tconn->nonblocking = FALSE;\n> \n> }\n> \n> Index: src/interfaces/libpq/fe-exec.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-exec.c,v\n> retrieving revision 1.86\n> diff -u -c -r1.86 fe-exec.c\n> *** src/interfaces/libpq/fe-exec.c\t1999/11/11 00:10:14\t1.86\n> --- src/interfaces/libpq/fe-exec.c\t2000/01/14 22:47:07\n> ***************\n> *** 13,18 ****\n> --- 13,19 ----\n> */\n> #include <errno.h>\n> #include <ctype.h>\n> + #include <fcntl.h>\n> \n> #include \"postgres.h\"\n> #include \"libpq-fe.h\"\n> ***************\n> *** 24,30 ****\n> #include <unistd.h>\n> #endif\n> \n> - \n> /* keep this in same order as ExecStatusType in libpq-fe.h */\n> const char *const pgresStatus[] = {\n> \t\"PGRES_EMPTY_QUERY\",\n> --- 25,30 ----\n> ***************\n> *** 514,526 ****\n> \tconn->curTuple = NULL;\n> \n> \t/* send the query to the backend; */\n> ! \t/* the frontend-backend protocol uses 'Q' to designate queries */\n> ! \tif (pqPutnchar(\"Q\", 1, conn) ||\n> ! \t\tpqPuts(query, conn) ||\n> ! \t\tpqFlush(conn))\n> \t{\n> ! \t\thandleSendFailure(conn);\n> ! \t\treturn 0;\n> \t}\n> \n> \t/* OK, it's launched! */\n> --- 514,566 ----\n> \tconn->curTuple = NULL;\n> \n> \t/* send the query to the backend; */\n> ! \n> ! \t/*\n> ! \t * in order to guarantee that we don't send a partial query \n> ! \t * where we would become out of sync with the backend and/or\n> ! \t * block during a non-blocking connection we must first flush\n> ! \t * the send buffer before sending more data\n> ! \t *\n> ! \t * an alternative is to implement 'queue reservations' where\n> ! \t * we are able to roll up a transaction \n> ! \t * (the 'Q' along with our query) and make sure we have\n> ! \t * enough space for it all in the send buffer.\n> ! \t */\n> ! \tif (pqIsnonblocking(conn))\n> \t{\n> ! \t\t/*\n> ! \t\t * the buffer must have emptied completely before we allow\n> ! \t\t * a new query to be buffered\n> ! \t\t */\n> ! \t\tif (pqFlush(conn))\n> ! \t\t\treturn 0;\n> ! \t\t/* 'Q' == queries */\n> ! \t\t/* XXX: if we fail here we really ought to not block */\n> ! \t\tif (pqPutnchar(\"Q\", 1, conn) ||\n> ! \t\t\tpqPuts(query, conn))\n> ! \t\t{\n> ! \t\t\thandleSendFailure(conn);\t\n> ! \t\t\treturn 0;\n> ! \t\t}\n> ! \t\t/*\n> ! \t\t * give the data a push, ignore the return value as\n> ! \t\t * ConsumeInput() will do any aditional flushing if needed\n> ! \t\t */\n> ! \t\t(void) pqFlush(conn);\t\n> ! \t}\n> ! \telse\n> ! \t{\n> ! \t\t/* \n> ! \t\t * the frontend-backend protocol uses 'Q' to \n> ! \t\t * designate queries \n> ! \t\t */\n> ! \t\tif (pqPutnchar(\"Q\", 1, conn) ||\n> ! \t\t\tpqPuts(query, conn) ||\n> ! \t\t\tpqFlush(conn))\n> ! \t\t{\n> ! \t\t\thandleSendFailure(conn);\n> ! \t\t\treturn 0;\n> ! \t\t}\n> \t}\n> \n> \t/* OK, it's launched! 
*/\n> ***************\n> *** 574,580 ****\n> --- 614,630 ----\n> \t * we will NOT block waiting for more input.\n> \t */\n> \tif (pqReadData(conn) < 0)\n> + \t{\n> + \t\t/*\n> + \t\t * for non-blocking connections\n> + \t\t * try to flush the send-queue otherwise we may never get a \n> + \t\t * responce for something that may not have already been sent\n> + \t\t * because it's in our write buffer!\n> + \t\t */\n> + \t\tif (pqIsnonblocking(conn))\n> + \t\t\t(void) pqFlush(conn);\n> \t\treturn 0;\n> + \t}\n> \t/* Parsing of the data waits till later. */\n> \treturn 1;\n> }\n> ***************\n> *** 1088,1093 ****\n> --- 1138,1153 ----\n> {\n> \tPGresult *result;\n> \tPGresult *lastResult;\n> + \tbool\tsavedblocking;\n> + \n> + \t/*\n> + \t * we assume anyone calling PQexec wants blocking behaviour,\n> + \t * we force the blocking status of the connection to blocking\n> + \t * for the duration of this function and restore it on return\n> + \t */\n> + \tsavedblocking = pqIsnonblocking(conn);\n> + \tif (PQsetnonblocking(conn, FALSE) == -1)\n> + \t\treturn NULL;\n> \n> \t/*\n> \t * Silently discard any prior query result that application didn't\n> ***************\n> *** 1102,1115 ****\n> \t\t\tPQclear(result);\n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n> ! \t\t\treturn NULL;\n> \t\t}\n> \t\tPQclear(result);\n> \t}\n> \n> \t/* OK to send the message */\n> \tif (!PQsendQuery(conn, query))\n> ! \t\treturn NULL;\n> \n> \t/*\n> \t * For backwards compatibility, return the last result if there are\n> --- 1162,1176 ----\n> \t\t\tPQclear(result);\n> \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> \t\t\t\t\"PQexec: you gotta get out of a COPY state yourself.\\n\");\n> ! \t\t\t/* restore blocking status */\n> ! \t\t\tgoto errout;\n> \t\t}\n> \t\tPQclear(result);\n> \t}\n> \n> \t/* OK to send the message */\n> \tif (!PQsendQuery(conn, query))\n> ! \t\tgoto errout;\t/* restore blocking status */\n> \n> \t/*\n> \t * For backwards compatibility, return the last result if there are\n> ***************\n> *** 1142,1148 ****\n> --- 1203,1217 ----\n> \t\t\tresult->resultStatus == PGRES_COPY_OUT)\n> \t\t\tbreak;\n> \t}\n> + \n> + \tif (PQsetnonblocking(conn, savedblocking) == -1)\n> + \t\treturn NULL;\n> \treturn lastResult;\n> + \n> + errout:\n> + \tif (PQsetnonblocking(conn, savedblocking) == -1)\n> + \t\treturn NULL;\n> + \treturn NULL;\n> }\n> \n> \n> ***************\n> *** 1431,1438 ****\n> \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n> \t\treturn 1;\n> \t}\n> \n> ! \t(void) pqFlush(conn);\t\t/* make sure no data is waiting to be sent */\n> \n> \t/* Return to active duty */\n> \tconn->asyncStatus = PGASYNC_BUSY;\n> --- 1500,1516 ----\n> \t\t\t \"PQendcopy() -- I don't think there's a copy in progress.\\n\");\n> \t\treturn 1;\n> \t}\n> + \n> + \t/*\n> + \t * make sure no data is waiting to be sent, \n> + \t * abort if we are non-blocking and the flush fails\n> + \t */\n> + \tif (pqFlush(conn) && pqIsnonblocking(conn))\n> + \t\treturn (1);\n> \n> ! \t/* non blocking connections may have to abort at this point. */\n> ! \tif (pqIsnonblocking(conn) && PQisBusy(conn))\n> ! 
\t\treturn (1);\n> \n> \t/* Return to active duty */\n> \tconn->asyncStatus = PGASYNC_BUSY;\n> ***************\n> *** 2025,2028 ****\n> --- 2103,2192 ----\n> \t\treturn 1;\n> \telse\n> \t\treturn 0;\n> + }\n> + \n> + /* PQsetnonblocking:\n> + \t sets the PGconn's database connection non-blocking if the arg is TRUE\n> + \t or makes it non-blocking if the arg is FALSE, this will not protect\n> + \t you from PQexec(), you'll only be safe when using the non-blocking\n> + \t API\n> + \t Needs to be called only on a connected database connection.\n> + */\n> + \n> + int\n> + PQsetnonblocking(PGconn *conn, int arg)\n> + {\n> + \tint\tfcntlarg;\n> + \n> + \targ = (arg == TRUE) ? 1 : 0;\n> + \t/* early out if the socket is already in the state requested */\n> + \tif (arg == conn->nonblocking)\n> + \t\treturn (0);\n> + \n> + \t/*\n> + \t * to guarantee constancy for flushing/query/result-polling behavior\n> + \t * we need to flush the send queue at this point in order to guarantee\n> + \t * proper behavior.\n> + \t * this is ok because either they are making a transition\n> + \t * _from_ or _to_ blocking mode, either way we can block them.\n> + \t */\n> + \t/* if we are going from blocking to non-blocking flush here */\n> + \tif (!pqIsnonblocking(conn) && pqFlush(conn))\n> + \t\treturn (-1);\n> + \n> + \n> + #ifdef USE_SSL\n> + \tif (conn->ssl)\n> + \t{\n> + \t\tprintfPQExpBuffer(&conn->errorMessage,\n> + \t\t\t\"PQsetnonblocking() -- not supported when using SSL\\n\");\n> + \t\treturn (-1);\n> + \t}\n> + #endif /* USE_SSL */\n> + \n> + #ifndef WIN32\n> + \tfcntlarg = fcntl(conn->sock, F_GETFL, 0);\n> + \tif (fcntlarg == -1)\n> + \t\treturn (-1);\n> + \n> + \tif ((arg == TRUE && \n> + \t\tfcntl(conn->sock, F_SETFL, fcntlarg | O_NONBLOCK) == -1) ||\n> + \t\t(arg == FALSE &&\n> + \t\tfcntl(conn->sock, F_SETFL, fcntlarg & ~O_NONBLOCK) == -1)) \n> + #else\n> + \tfcntlarg = arg;\n> + \tif (ioctlsocket(conn->sock, FIONBIO, &fcntlarg) != 0)\n> + #endif\n> + \t{\n> + \t\tprintfPQExpBuffer(&conn->errorMessage,\n> + \t\t\t\"PQsetblocking() -- unable to set nonblocking status to %s\\n\",\n> + \t\t\targ == TRUE ? 
\"TRUE\" : \"FALSE\");\n> + \t\treturn (-1);\n> + \t}\n> + \n> + \tconn->nonblocking = arg;\n> + \n> + \t/* if we are going from non-blocking to blocking flush here */\n> + \tif (pqIsnonblocking(conn) && pqFlush(conn))\n> + \t\treturn (-1);\n> + \n> + \treturn (0);\n> + }\n> + \n> + /* return the blocking status of the database connection, TRUE == nonblocking,\n> + \t FALSE == blocking\n> + */\n> + int\n> + PQisnonblocking(const PGconn *conn)\n> + {\n> + \n> + \treturn (pqIsnonblocking(conn));\n> + }\n> + \n> + /* try to force data out, really only useful for non-blocking users */\n> + int\n> + PQflush(PGconn *conn)\n> + {\n> + \n> + \treturn (pqFlush(conn));\n> }\n> Index: src/interfaces/libpq/fe-misc.c\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.33\n> diff -u -c -r1.33 fe-misc.c\n> *** src/interfaces/libpq/fe-misc.c\t1999/11/30 03:08:19\t1.33\n> --- src/interfaces/libpq/fe-misc.c\t2000/01/12 03:12:14\n> ***************\n> *** 86,91 ****\n> --- 86,122 ----\n> {\n> \tsize_t avail = Max(conn->outBufSize - conn->outCount, 0);\n> \n> + \t/*\n> + \t * if we are non-blocking and the send queue is too full to buffer this\n> + \t * request then try to flush some and return an error \n> + \t */\n> + \tif (pqIsnonblocking(conn) && nbytes > avail && pqFlush(conn))\n> + \t{\n> + \t\t/* \n> + \t\t * even if the flush failed we may still have written some\n> + \t\t * data, recalculate the size of the send-queue relative\n> + \t\t * to the amount we have to send, we may be able to queue it\n> + \t\t * afterall even though it's not sent to the database it's\n> + \t\t * ok, any routines that check the data coming from the\n> + \t\t * database better call pqFlush() anyway.\n> + \t\t */\n> + \t\tif (nbytes > Max(conn->outBufSize - conn->outCount, 0))\n> + \t\t{\n> + \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> + \t\t\t\t\"pqPutBytes -- pqFlush couldn't flush enough\"\n> + \t\t\t\t\" data: space available: %d, space needed %d\\n\",\n> + \t\t\t\tMax(conn->outBufSize - conn->outCount, 0), nbytes);\n> + \t\t\treturn EOF;\n> + \t\t}\n> + \t}\n> + \n> + \t/* \n> + \t * is the amount of data to be sent is larger than the size of the\n> + \t * output buffer then we must flush it to make more room.\n> + \t *\n> + \t * the code above will make sure the loop conditional is never \n> + \t * true for non-blocking connections\n> + \t */\n> \twhile (nbytes > avail)\n> \t{\n> \t\tmemcpy(conn->outBuffer + conn->outCount, s, avail);\n> ***************\n> *** 548,553 ****\n> --- 579,592 ----\n> \t\treturn EOF;\n> \t}\n> \n> + \t/* \n> + \t * don't try to send zero data, allows us to use this function\n> + \t * without too much worry about overhead\n> + \t */\n> + \tif (len == 0)\n> + \t\treturn (0);\n> + \n> + \t/* while there's still data to send */\n> \twhile (len > 0)\n> \t{\n> \t\t/* Prevent being SIGPIPEd if backend has closed the connection. 
*/\n> ***************\n> *** 556,561 ****\n> --- 595,601 ----\n> #endif\n> \n> \t\tint sent;\n> + \n> #ifdef USE_SSL\n> \t\tif (conn->ssl) \n> \t\t sent = SSL_write(conn->ssl, ptr, len);\n> ***************\n> *** 585,590 ****\n> --- 625,632 ----\n> \t\t\t\tcase EWOULDBLOCK:\n> \t\t\t\t\tbreak;\n> #endif\n> + \t\t\t\tcase EINTR:\n> + \t\t\t\t\tcontinue;\n> \n> \t\t\t\tcase EPIPE:\n> #ifdef ECONNRESET\n> ***************\n> *** 616,628 ****\n> \t\t\tptr += sent;\n> \t\t\tlen -= sent;\n> \t\t}\n> \t\tif (len > 0)\n> \t\t{\n> \t\t\t/* We didn't send it all, wait till we can send more */\n> \n> - \t\t\t/* At first glance this looks as though it should block. I think\n> - \t\t\t * that it will be OK though, as long as the socket is\n> - \t\t\t * non-blocking. */\n> \t\t\tif (pqWait(FALSE, TRUE, conn))\n> \t\t\t\treturn EOF;\n> \t\t}\n> --- 658,688 ----\n> \t\t\tptr += sent;\n> \t\t\tlen -= sent;\n> \t\t}\n> + \n> \t\tif (len > 0)\n> \t\t{\n> \t\t\t/* We didn't send it all, wait till we can send more */\n> + \n> + \t\t\t/* \n> + \t\t\t * if the socket is in non-blocking mode we may need\n> + \t\t\t * to abort here \n> + \t\t\t */\n> + #ifdef USE_SSL\n> + \t\t\t/* can't do anything for our SSL users yet */\n> + \t\t\tif (conn->ssl == NULL)\n> + \t\t\t{\n> + #endif\n> + \t\t\t\tif (pqIsnonblocking(conn))\n> + \t\t\t\t{\n> + \t\t\t\t\t/* shift the contents of the buffer */\n> + \t\t\t\t\tmemmove(conn->outBuffer, ptr, len);\n> + \t\t\t\t\tconn->outCount = len;\n> + \t\t\t\t\treturn EOF;\n> + \t\t\t\t}\n> + #ifdef USE_SSL\n> + \t\t\t}\n> + #endif\n> \n> \t\t\tif (pqWait(FALSE, TRUE, conn))\n> \t\t\t\treturn EOF;\n> \t\t}\n> Index: src/interfaces/libpq/libpq-fe.h\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-fe.h,v\n> retrieving revision 1.55\n> diff -u -c -r1.55 libpq-fe.h\n> *** src/interfaces/libpq/libpq-fe.h\t2000/01/15 05:37:21\t1.55\n> --- src/interfaces/libpq/libpq-fe.h\t2000/01/17 02:35:56\n> ***************\n> *** 263,268 ****\n> --- 263,275 ----\n> \textern int\tPQputnbytes(PGconn *conn, const char *buffer, int nbytes);\n> \textern int\tPQendcopy(PGconn *conn);\n> \n> + \t/* Set blocking/nonblocking connection to the backend */\n> + \textern int\tPQsetnonblocking(PGconn *conn, int arg);\n> + \textern int\tPQisnonblocking(const PGconn *conn);\n> + \n> + \t/* Force the write buffer to be written (or at least try) */\n> + \textern int\tPQflush(PGconn *conn);\n> + \n> \t/*\n> \t * \"Fast path\" interface --- not really recommended for application\n> \t * use\n> Index: src/interfaces/libpq/libpq-int.h\n> ===================================================================\n> RCS file: /home/pgcvs/pgsql/src/interfaces/libpq/libpq-int.h,v\n> retrieving revision 1.16\n> diff -u -c -r1.16 libpq-int.h\n> *** src/interfaces/libpq/libpq-int.h\t2000/01/15 05:37:21\t1.16\n> --- src/interfaces/libpq/libpq-int.h\t2000/01/17 02:35:56\n> ***************\n> *** 214,219 ****\n> --- 214,222 ----\n> \tint\t\t\tinEnd;\t\t\t/* offset to first position after avail\n> \t\t\t\t\t\t\t\t * data */\n> \n> + \tint\t\t\tnonblocking;\t/* whether this connection is using a blocking\n> + \t\t\t\t\t\t\t\t * socket to the backend or not */\n> + \n> \t/* Buffer for data not yet sent to backend */\n> \tchar\t *outBuffer;\t\t/* currently allocated buffer */\n> \tint\t\t\toutBufSize;\t\t/* allocated size of buffer */\n> ***************\n> *** 299,303 ****\n> --- 302,312 ----\n> #define strerror(A) (sys_errlist[(A)])\n> #endif\t /* sunos4 */\n> #endif\t 
/* !strerror */\n> + \n> + /* \n> + * this is so that we can check is a connection is non-blocking internally\n> + * without the overhead of a function call\n> + */\n> + #define pqIsnonblocking(conn)\t(conn->nonblocking)\n> \n> #endif\t /* LIBPQ_INT_H */\n> \n> \n> ************\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 18 Jan 2000 00:53:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] docs done Re: [HACKERS] LIBPQ patches ..." }, { "msg_contents": "Applied, or did I already say that?\n\n\n> * Tom Lane <[email protected]> [000109 08:18] wrote:\n> > Don Baccus <[email protected]> writes:\n> > > At 05:27 PM 1/8/00 -0500, Tom Lane wrote:\n> > >> I also object strongly to the lack of documentation.\n> > \n> > > ... I know there are some folks who aren't native-english speakers, so\n> > > perhaps you don't want to require that the implementor of such patches\n> > > provide the final documentation wording. But the information should\n> > > be there and spelled out in a form that can be very easily moved to\n> > > the docs.\n> > \n> > Oh, absolutely. Thomas, our master of the docs, has always had the\n> > policy of \"give me some words, I'll take care of formatting and\n> > editing...\"\n> > \n> > I was probably too harsh on Alfred last night, since in fact his code\n> > was fairly well commented, and some minimal doco could have been\n> > extracted from the routine headers. But on a change like this, I think\n> > some paragraphs of coherent high-level explanation are needed: what it\n> > does, when and why you'd use it. I didn't see that anywhere...\n> \n> Here's the revised patch, it includes sgml docs and changes to\n> ensure that old style connections behave the way they are expected\n> to:\n> \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 18 Jan 2000 14:09:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] docs done Re: [HACKERS] LIBPQ patches ..." }, { "msg_contents": "> * Bruce Momjian <[email protected]> [000118 11:49] wrote:\n> > Applied, or did I already say that?\n> \n> Just one mail was sent, but I cc'd patches and hackers as well as\n> yourself on the message, sorry for duplicates, but since the mailing\n> contained my revised patch I sent to -patches as well.\n> \n> I'll be a bit less zealous in the future. :)\n\nNo, that is fine. I usually catch that, but I was not sure in this\ncase. Seems I only sent out one, and had not already sent it. It is\ngood to hit multiple lists with something that has been discussed this\nmuch.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 18 Jan 2000 14:37:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] docs done Re: [HACKERS] LIBPQ patches ..." 
}, { "msg_contents": "* Bruce Momjian <[email protected]> [000118 11:49] wrote:\n> Applied, or did I already say that?\n\nJust one mail was sent, but I cc'd patches and hackers as well as\nyourself on the message, sorry for duplicates, but since the mailing\ncontained my revised patch I sent to -patches as well.\n\nI'll be a bit less zealous in the future. :)\n\nsorry,\n-Alfred\n", "msg_date": "Tue, 18 Jan 2000 11:53:04 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] docs done Re: [HACKERS] LIBPQ patches ..." } ]
[ { "msg_contents": "I am trying my first query a postgresql database using perl (using a Redhat\n6.0 distribution). The script fails on the line:\n\n$conn=Pg::connectdb(\"dbname=mydatabase\");\n\nwith the error 'Can't locate pg.pm in @INC.\n\nI thought that I may have been missing the perl5 interface for postgres, and\ntried to find one. The linux documentation suggests the site\nftp://ftp.kciLink.com/pub/PostgresPerl-1.3.tar.gz, but its not there. Can\nanyone tell me if this missing intergace is the problem, and if it is, where\nI can get PostgresPerl-1.3.tar.gz?\n\nThanx,\nJason\n\n\n\n\n", "msg_date": "Sat, 08 Jan 2000 23:35:49 GMT", "msg_from": "\"HydroMan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql Perl Problem" }, { "msg_contents": "I believe the PG modules come with the distribution. However, if you\nfollowed the instructions and installed postgres as and unpriveleged user\n(postgres) then the install of pg.pm would fail since your Perl directory is\nprobably only writable by root. Go back into your distribution directory\nand cd into \"src/utilities\" I don't have the structure in front of me but I\nthink there is a \"Perl\" directory there. cd into that, su to root, and run\n\"make install\". That ought to do it. If you don't have root privleges then\nyou can either add the following line to your perl script:\n use lib \"/path/to/pg.m_directory\";\n\nOr you can call perl with the \"-I\" option followed by the above path. There\nare other ways I'm sure. . .\n\nVince Daniels\n\n\n\n", "msg_date": "Tue, 25 Jan 2000 19:50:34 GMT", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql Perl Problem" }, { "msg_contents": "\nIf you installed postgres as recommended as an unpriveleged user then\nwhen you ran make install, the perl install would fail since your perl\nlib directory is undoubtably owned by root. The perl module does come\nwith the postgres distribution and can be found in the distribution\ndirectory:\nsrc/interfaces/perl5. If you made postgres with the include perl option\nthen pg.pm is in that directory. su to root and run make install from\nthat directory and you should be set.\n\n\n-- \nVince Daniels\n", "msg_date": "Wed, 09 Feb 2000 14:12:50 GMT", "msg_from": "Vince Daniels <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql Perl Problem" } ]
[ { "msg_contents": "\nI'm seeing a weird problem, that I don't think I should be expecting in\nv6.5.3 of PostgreSQL ... an inability to SELECT form a database while the\nUdmSearch/indexer is running...\n\nps shows:\n\n30040 ?? R 20:54.37 /home/database/v6.5.3/bin/postgres pgsql 216.126.84.1 udmsearch UPDATE\n43846 ?? I 0:00.03 /home/database/v6.5.3/bin/postgres pgsql 216.126.84.1 udmsearch SELECT waiting\n\nAnd, if I do successive ps's in a row, the 'SELECT waiting' stays, but the\nUPDATING keeps flashing between 'UPDATING' and 'idle'...\n\n*Eventually*, the SELECT gets perform and the call returns...\n\nBut, with MVCC, I didn't think that I should see any 'hangs' on SELECT\ncalls...the process on 43846 is 'indexer -S', which just generates stats\non the database.\n\nThe problem, in the case of this particular application, is that if\nmultiple searches were to happen, while the database is being updated, it\nseems that this could be a point of contention?\n\n>From what I can tell reading through the reading through the code, there\nis never a TABLE LOCK issued when using PostgreSQL, but it does use\nBEGIN/END...\n\nAm I misunderstanding how MVCC is supposed to work? Could we have a bug\nin v6.5.3?\n\nI'm still looking through the code, to see if I've overlooked something,\nbut I figure I'd check to see if maybe I'm misunderstanding MVCC\naltogether first...\n\nI'm CCng in the author of the code, just in case this is something that\nI'm overlooking in theh code...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 8 Jan 2000 23:17:26 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Table locking ..." }, { "msg_contents": "\nOkay, I did find a LOCK being issued, just not sure if its required or\nnot, with MVCC...\n\nOn Sat, 8 Jan 2000, The Hermit Hacker wrote:\n\n> \n> I'm seeing a weird problem, that I don't think I should be expecting in\n> v6.5.3 of PostgreSQL ... an inability to SELECT form a database while the\n> UdmSearch/indexer is running...\n> \n> ps shows:\n> \n> 30040 ?? R 20:54.37 /home/database/v6.5.3/bin/postgres pgsql 216.126.84.1 udmsearch UPDATE\n> 43846 ?? I 0:00.03 /home/database/v6.5.3/bin/postgres pgsql 216.126.84.1 udmsearch SELECT waiting\n> \n> And, if I do successive ps's in a row, the 'SELECT waiting' stays, but the\n> UPDATING keeps flashing between 'UPDATING' and 'idle'...\n> \n> *Eventually*, the SELECT gets perform and the call returns...\n> \n> But, with MVCC, I didn't think that I should see any 'hangs' on SELECT\n> calls...the process on 43846 is 'indexer -S', which just generates stats\n> on the database.\n> \n> The problem, in the case of this particular application, is that if\n> multiple searches were to happen, while the database is being updated, it\n> seems that this could be a point of contention?\n> \n> >From what I can tell reading through the reading through the code, there\n> is never a TABLE LOCK issued when using PostgreSQL, but it does use\n> BEGIN/END...\n> \n> Am I misunderstanding how MVCC is supposed to work? Could we have a bug\n> in v6.5.3?\n> \n> I'm still looking through the code, to see if I've overlooked something,\n> but I figure I'd check to see if maybe I'm misunderstanding MVCC\n> altogether first...\n> \n> I'm CCng in the author of the code, just in case this is something that\n> I'm overlooking in theh code...\n> \n> Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 8 Jan 2000 23:27:03 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Table locking ..." } ]
[ { "msg_contents": "Now that we can run the regress tests using the new psql,\nI was dismayed to discover that the arrays regress test\nfails with it. The perfectly valid query\n\nSELECT arrtest.a[1:3],\n arrtest.b[1:1][1:2][1:2],\n arrtest.c[1:2], \n arrtest.d[1:1][1:2]\n FROM arrtest;\n\nfails with\n\nERROR: parser: parse error at or near \"]\"\n\nTurning on postmaster -d reveals that what is arriving at\nthe backend is\n\nSELECT arrtest.a[1],\n arrtest.b[1][1][1],\n arrtest.c[1], \n arrtest.d[1][1]]\n FROM arrtest\n\nNeedless to say, this transformation of the query\nis several miles to the south of acceptable.\n\nOver to you, Peter...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2000 23:21:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Regress tests reveal *serious* psql bug" }, { "msg_contents": "I looked into the cause of the current failure of the array regress\ntest, enough to realize that there is both a garden-variety code bug\nand a serious definitional problem there. The issue is that the new\npsql converts\n\nSELECT arrtest.a[1:3],\n arrtest.b[1:1][1:2][1:2],\n arrtest.c[1:2], \n arrtest.d[1:1][1:2]\n FROM arrtest;\n\ninto\n\nSELECT arrtest.a[1],\n arrtest.b[1][1][1],\n arrtest.c[1], \n arrtest.d[1][1]]\n FROM arrtest\n\n--- or at least it tries to do so; on one machine I have handy, psql\nactually dumps core while running the array test. (It looks like that\nis because line mainloop.c:259 underestimates the amount of memory it\nneeds to malloc for the changed string, but I haven't worked through the\ndetails. I suspect the extra ']' is the result of an off-by-one kind of\nbug in this same block of code.)\n\nThe reason *why* it is doing this is that it thinks that \":3\" and so\nforth are variables that it ought to substitute for, and since it has\nno definition for them, it happily substitutes empty strings.\n\nAfter fixing the outright bugs, we could make the array test work by\nchanging \"[1:3]\" to \"[1\\:3]\" and so forth, but I think that that is the\nwrong way to deal with it. I believe that psql's variable feature needs\nto be redefined, instead.\n\nI certainly don't feel that the regress tests are graven on stone\ntablets; but when an allegedly unrelated feature change breaks one,\nI think we need to treat that as a danger signal. If psql variables\nbreak a regress test, they will likely break existing user applications\nas well.\n\nThe case at hand is particularly nasty because if psql is allowed to\ncontinue to behave this way, it will silently transform valid queries\ninto other valid queries with different results. (\"array[1:3]\" and\n\"array[1]\" don't mean the same thing.) I don't think that's acceptable.\n\nI suggest that psql's variable facility needs to be redefined so that\nit is not possible to trigger it accidentally in scripts that don't\neven know what psql variables are.\n\nA minimum requirement is that psql should *not* substitute for :x unless\nx is the name of a psql variable that the user has explicitly defined.\n\nI would also suggest tightening up the allowed names of variables;\nconventional practice is that variable names have to start with a\nletter, and I think psql ought to follow that convention. 
(That\nwouldn't in itself stop the array-subscript problem, though, since\nan array subscript could be a simple field reference.)\n\nIt might even be a good idea to require psql variables to contain only\nupper-case letters, so that they'd be less likely to be confused with\nSQL field names (which are usually lower case or at least mixed case)\n--- but I'm not convinced about that one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jan 2000 11:57:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regress tests reveal *serious* psql bug " }, { "msg_contents": "> The reason *why* it is doing this is that it thinks that \":3\" and so\n> forth are variables that it ought to substitute for, and since it has\n> no definition for them, it happily substitutes empty strings.\n> \n> After fixing the outright bugs, we could make the array test work by\n> changing \"[1:3]\" to \"[1\\:3]\" and so forth, but I think that that is the\n> wrong way to deal with it. I believe that psql's variable feature needs\n> to be redefined, instead.\n\nI know this is the only regression problem you found, and I am glad it\nis isolated to psql and is a design problem that can be easily addressed.\nI think the requirement that all variables begin with a letter is a good\nidea.\n\nWe recommended the : in the first place because it is the standard for\nembedded SQL variable handling.\n\nI am sure Peter can address this.\n\n> I would also suggest tightening up the allowed names of variables;\n> conventional practice is that variable names have to start with a\n> letter, and I think psql ought to follow that convention. (That\n> wouldn't in itself stop the array-subscript problem, though, since\n> an array subscript could be a simple field reference.)\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jan 2000 12:20:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Regress tests reveal *serious* psql bug" }, { "msg_contents": "Wow that sucks. Shame on me.\n\nOn 2000-01-11, Tom Lane mentioned:\n\n> psql converts\n> \n> SELECT arrtest.a[1:3],\n...\n> into\n...\n> SELECT arrtest.a[1],\n\nIn some earlier developmental stage I had the variable delimiter runtime\ndefinable since it became clear that the traditional $ wouldn't work.\nLater on someone pointed out that the SQL syntax for this is the colon\ndeal. (And you were in that discussion, if I am not completely off.) \nActually the colon deal only applies to embedded SQL, so ecpg should have\nthe same problem, but I'm not familiar with it, so I don't know how it\ncopes.\n\nThe fact is that (besides the garden-variety bugs) this is indeed a\nproblem of definition. I'm not sure if the following is valid by any\nstandard or even makes sense, but how do you interpret something like\nthis:\n\nSELECT arrtest.biggest_value_pos, arrtest.a[1:biggest_value_pos] ... 
;\n\nThere's no way you can disambiguate this unless you redefine everything.\n\n> A minimum requirement is that psql should *not* substitute for :x unless\n> x is the name of a psql variable that the user has explicitly defined.\n\nThen psql becomes no better than csh, and that's certainly one of the\nworst things one could do.\n\n\nPutting blame on other people's shoulders I would suggest changing the\narray syntax to arr[1][2] like everyone else has, but that would be\nshort-sighted.\n\nThe best idea I have to get this working _now_ would be to once again make\nthe variable delimiter run-time configurable so that you can set it to\n: or $ or # or whatever your query doesn't use, completely off by default.\n\nIf I'm going to hack around in that code, one related question: what\nshould the deal be regarding variable interpolation into quoted\nstrings? Yes/No/Maybe?\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 12 Jan 2000 04:30:26 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regress tests reveal *serious* psql bug " }, { "msg_contents": "On 2000-01-11, Bruce Momjian mentioned:\n\n> I think the requirement that all variables begin with a letter is a good\n> idea.\n\nBut it doesn't fix anything really. I left those open to be used for some\nspecial purpose like addressing the fields of the last query or\nsomething. Perhaps I should take them out for now though.\n\n> We recommended the : in the first place because it is the standard for\n> embedded SQL variable handling.\n\nSo the array syntax should be changed?\n\n> \n> I am sure Peter can address this.\n\nAddressed it will be ... ;)\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 12 Jan 2000 04:30:34 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Regress tests reveal *serious* psql bug" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> We recommended the : in the first place because it is the standard for\n>> embedded SQL variable handling.\n\n> So the array syntax should be changed?\n\nBzzt, wrong answer.\n\nI'm open to alternative answers on this, but breaking existing\napplication scripts is not one of the acceptable alternatives.\n\n*Especially* not if there's no obvious failure report, as there\nwill not be if we don't change psql's behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 00:42:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Regress tests reveal *serious* psql bug " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> >> We recommended the : in the first place because it is the standard for\n> >> embedded SQL variable handling.\n> \n> > So the array syntax should be changed?\n> \n> Bzzt, wrong answer.\n> \n> I'm open to alternative answers on this, but breaking existing\n> application scripts is not one of the acceptable alternatives.\n> \n> *Especially* not if there's no obvious failure report, as there\n> will not be if we don't change psql's behavior.\n\nI think we can live with requiring a variable name to start with an\nalphabetic or underscore.\n\n\tSELECT a[1:2]\n\nis clear and\n\n\tSELECT a[1:myvar]\n\nexpands to SELECT a[1]. 
I think we can live with this since having a\nvariable as an array element was never possible before 7.0. We could\nget fancy and not expand variables inside brackets. Of course, quoted\nstrings have to be skipped.\n\nIn fact, I think it should be an error to reference a variable that is\nnot defined. This will catch accidental references too. If you\nreference a variable that does not exist like :myvar, it passes the\nliteral :myvar to the backend.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 00:53:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Regress tests reveal *serious* psql bug" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> In some earlier developmental stage I had the variable delimiter runtime\n> definable since it became clear that the traditional $ wouldn't work.\n> Later on someone pointed out that the SQL syntax for this is the colon\n> deal. (And you were in that discussion, if I am not completely off.) \n\nYah. I think at the time we were only thinking of colon as a user-\ndefinable operator (well, it's also a predefined operator, but not a\nvery essential one). I plead guilty to forgetfulness in not remembering\nthat it was also an essential component of the array grammar. Still,\nthere it is. I think this raises the bar to the point where we must\nhave a transparent backward-compatible approach to psql variables.\nI was not all that thrilled about blowing off colon as an operator,\nand blowing off array subscripts *too* is just too far above my\nthreshold of pain.\n\n> Actually the colon deal only applies to embedded SQL, so ecpg should have\n> the same problem, but I'm not familiar with it, so I don't know how it\n> copes.\n\nGood question. Michael?\n\n> The fact is that (besides the garden-variety bugs) this is indeed a\n> problem of definition. I'm not sure if the following is valid by any\n> standard or even makes sense, but how do you interpret something like\n> this:\n> SELECT arrtest.biggest_value_pos, arrtest.a[1:biggest_value_pos] ... ;\n> There's no way you can disambiguate this unless you redefine everything.\n\nHuh? There was nothing in the least ambiguous about it, until you\nredefined psql. It's still not ambiguous in any other pgsql interface,\nexcept possibly ecpg...\n\n>> A minimum requirement is that psql should *not* substitute for :x unless\n>> x is the name of a psql variable that the user has explicitly defined.\n\n> Then psql becomes no better than csh, and that's certainly one of the\n> worse things one could do.\n\ncsh isn't one of my favorite programming languages either, but failure\nto detect undefined substitution variables is a pretty venial sin\ncompared to silently transforming *valid* queries into wrong queries.\nThe former will not bring villagers with pitchforks to your doorstep,\nbut the latter will.\n\nFurthermore, if you are trying to help the substitution-variable\nprogrammer detect his mistakes, then silently substituting (wrong)\nempty values is not my idea of helpfulness. 
You could maybe make a\ndefensible case for rejecting the whole query with \":x is not defined\".\nWhat you have chosen is the worst of all possible worlds, because it\nbreaks existing scripts that are ignorant of the new feature without\ndoing anything particularly helpful for people who *are* using the new\nfeature.\n\nFinally, if the would-be user of psql variables misspells :foo as\n:fop, I think he's much more likely to realize he's made a mistake if\npsql passes :fop as-is to the backend rather than quietly discarding it.\n\n> Putting blame on other people's shoulders I would suggest changing the\n> array syntax to arr[1][2] like everyone else has,\n\nSay what? That's the syntax for a 2-D array, not the syntax for\nan array slice.\n\n> The best idea I have to get this working _now_ would be to once again make\n> the variable delimiter run-time configurable so that you can set it to\n> : or $ or # or whatever your query doesn't use, completely off by default\n\nIf it's off by default, that would eliminate the backwards-compatibility\nproblem --- but I still think the potential dual use of whatever\ncharacter you happen to pick would just be a gotcha waiting to bite\nanyone who uses the feature. We still ought to think about tightening\nup the substitution rules so they are less likely to trap the unwary.\n\n> If I'm going to hack around in that code, one related question: what\n> should the deal be regarding variable interpolation into quoted\n> strings? Yes/No/Maybe?\n\nNo bloody way, IMHO --- that increases the odds of unwanted\nsubstitutions by many orders of magnitude. If you want to allow\nsubstitutions into quoted strings, there should be a very special\nsyntax for it, perhaps along the lines of\n\t\t'a quoted ':foo' string'\nwhich could translate to 'a quoted foobar string' if :foo expands\nto foobar.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 01:15:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regress tests reveal *serious* psql bug " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I think we can live with requiring a variable name to start with an\n> alphabetic or underscore.\n\n> \tSELECT a[1:2]\n\n> is clear and\n\n> \tSELECT a[1:myvar]\n\n> expands to SELECT a[1].\n\nNo go --- SELECT a[1:b] where b is a field name is a valid query\ncurrently.\n\nI don't really see exactly what the benefit is of assuming that\n:foo should expand to nothing rather than :foo, in the absence\nof an actual definition for :foo. My feeling is that a safe solution\nwould be\n (a) psql doesn't do variables at all without an explicitly enabling\n command line switch; and\n (b) if psql sees :foo where foo is not defined, it spits out an\n error message.\nAccepting undeclared variables went out with Basic and Fortran;\nwhy are we intent on re-inventing a concept that's so obviously\ndangerous?\n\n> In fact, I think it should be an error to reference a variable that is\n> not defined. This will catch accidental references too. If you\n> reference a variable that does not exist like :myvar, it passes the\n> literal :myvar to the backend.\n\nThat's two different answers, not one... 
but I could live with either\none of them...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 01:22:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Regress tests reveal *serious* psql bug " }, { "msg_contents": "Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > I think we can live with requiring a variable name to start with an\n> > alphabetic or underscore.\n>\n> > SELECT a[1:2]\n>\n> > is clear and\n>\n> > SELECT a[1:myvar]\n>\n> > expands to SELECT a[1].\n>\n> No go --- SELECT a[1:b] where b is a field name is a valid query\n> currently.\n\nThe colon in array syntax is quite a special case. It should be relatively\neasy to figure out whether you are in a construct of the form\n[<token>:<token>]. And then there should be no problem in figuring out that\nin\n[1:b] b refers to a column and in [1::b] ':b' is a variable. Do colons\nappear anywhere else?\n\nBtw, I agree that variables should start with a letter and by default\nvariables have to be declared.\n\nAdriaan\n\n", "msg_date": "Wed, 12 Jan 2000 08:26:34 +0000", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Regress tests reveal *serious* psql bug" }, { "msg_contents": "On 2000-01-12, Tom Lane mentioned:\n\n> I think this raises the bar to the point where we must\n> have a transparent backward-compatible approach to psql variables.\n> I was not all that thrilled about blowing off colon as an operator,\n> and blowing off array subscripts *too* is just too far above my\n> threshold of pain.\n\nTo clear something up here: I'm with you all the way. I didn't make up\nthat syntax, and I too was forgetful about the array issue. If y'all think\nthat a variable must be defined to be substituted and that that will fix\nthings to a reasonable state, thus it shall be. My concern was more that\nthis would only work around this particular problem, while being\nshort-sighted. Just want to make sure we have a consensus.\n\n> > If I'm going to hack around in that code, one related question: what\n> > should the deal be regarding variable interpolation into quoted\n> > strings? Yes/No/Maybe?\n> \n> No bloody way, IMHO --- that increases the odds of unwanted\n\nI'll take that as a No. ;)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Thu, 13 Jan 2000 00:29:36 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regress tests reveal *serious* psql bug " }, { "msg_contents": "On 2000-01-12, Tom Lane mentioned:\n\n> Accepting undeclared variables went out with Basic and Fortran;\n> why are we intent on re-inventing a concept that's so obviously\n> dangerous?\n\nThey came back with Perl, Python, Tcl, what-have-you. But okay, I'm beaten\ninto submission.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Thu, 13 Jan 2000 00:29:41 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Regress tests reveal *serious* psql bug " } ]
[ { "msg_contents": "\nWhat exactly does this mean:\n\nNOTICE: Index word_url: Pages 16645; Tuples 5004183. Elapsed 3/9 sec.\n\nI'm curious about the Elapsed ... it took several minutes to before that\npop'd up on the screen, which is why I ask...\n\nThanks...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 9 Jan 2000 03:16:06 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM VERBOSE ..." }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> NOTICE: Index word_url: Pages 16645; Tuples 5004183. Elapsed 3/9 sec.\n\n> I'm curious about the Elapsed ... it took several minutes to before that\n> pop'd up on the screen, which is why I ask...\n\nThat'd been bothering me too. A glance at the vacuum code makes it\nclear that what's being reported is not elapsed time at all: the numbers\nare user and system CPU time. OK, that's cool, but the wording of the\nnotice message needs to be changed to identify the numbers correctly.\n\nDo we need to have actual wall clock time in there too?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jan 2000 11:17:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM VERBOSE ... " }, { "msg_contents": "On Sun, 9 Jan 2000, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > NOTICE: Index word_url: Pages 16645; Tuples 5004183. Elapsed 3/9 sec.\n> \n> > I'm curious about the Elapsed ... it took several minutes to before that\n> > pop'd up on the screen, which is why I ask...\n> \n> That'd been bothering me too. A glance at the vacuum code makes it\n> clear that what's being reported is not elapsed time at all: the numbers\n> are user and system CPU time. OK, that's cool, but the wording of the\n> notice message needs to be changed to identify the numbers correctly.\n> \n> Do we need to have actual wall clock time in there too?\n\nI don't have what I would consider an \"absolutely quiet system\", nor is my\nsystem particularly loaded since we moved the news server to a dedicated\nmachine...so its basically running a web server and database server right\nnow...3/9sec of user/sys time vs >5min of real time sounds like a major\ndifference in time...\n\nI don't think we need actual wall clock time in there, since that is easy\nto calculate :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 9 Jan 2000 15:41:58 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] VACUUM VERBOSE ... " } ]
[ { "msg_contents": "Since we have to go through the process of regenerating regress test\nresult files anyway, now seemed like a good time to take care of\nsomething that's been bugging me for a while. I have just committed\nchanges that allow multiple platforms to share platform-specific\nregress test result files.\n\nFor example, there are a lot of machines where the int2 regress test\nproduces \n ERROR: pg_atoi: error reading \"100000\": Result too large\ninstead of the reference platform's\n ERROR: pg_atoi: error reading \"100000\": Numerical result out of range\nWe can now have all these platforms share a single result file,\nwhich I've called expected/int2-too-large.out, rather than having\nto have duplicate result files for each such platform. There is\na mapping file src/test/regress/resultmap that identifies which file\nto use for each platform --- it's a lot like src/template/.similar,\nif you've messed around with that.\n\nSo far I've only put entries into resultmap for my own platform (HPUX)\nbut I'm sure many more will get added over the next few weeks.\n\nThe parallel regress test script, run_check.sh, doesn't seem to work\nwith this scheme yet. It *ought* to work but, at least on my machine,\nit seems like /bin/sh has problems with nested \"while read\" loops.\nI've run out of steam to work on this for tonight --- perhaps someone\nelse can see how to fix it.\n\nAlso, I updated src/test/regress/README but didn't touch the SGML\ndoco yet...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jan 2000 03:08:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "New scheme for managing regress test result files" }, { "msg_contents": "On Sun, 9 Jan 2000, Tom Lane wrote:\n\n> Since we have to go through the process of regenerating regress test\n> result files anyway, now seemed like a good time to take care of\n> something that's been bugging me for a while. I have just committed\n> changes that allow multiple platforms to share platform-specific\n> regress test result files.\n> \n> For example, there are a lot of machines where the int2 regress test\n> produces \n> ERROR: pg_atoi: error reading \"100000\": Result too large\n> instead of the reference platform's\n> ERROR: pg_atoi: error reading \"100000\": Numerical result out of range\n> We can now have all these platforms share a single result file,\n> which I've called expected/int2-too-large.out, rather than having\n> to have duplicate result files for each such platform. There is\n> a mapping file src/test/regress/resultmap that identifies which file\n> to use for each platform --- it's a lot like src/template/.similar,\n> if you've messed around with that.\n> \n> So far I've only put entries into resultmap for my own platform (HPUX)\n> but I'm sure many more will get added over the next few weeks.\n\nWorking on FreeBSD right now...\n\n\n", "msg_date": "Sun, 9 Jan 2000 04:20:48 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] New scheme for managing regress test result files" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n>> So far I've only put entries into resultmap for my own platform (HPUX)\n>> but I'm sure many more will get added over the next few weeks.\n\n> Working on FreeBSD right now...\n\nCool. 
BTW, I realized it would probably be a lot easier to write the\nmap patterns if the platform names could be given as pattern\nexpressions, not just prefixes --- for example, int2/hppa*hpux10*=...\nWill work on fixing that today.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jan 2000 10:40:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New scheme for managing regress test result files " } ]
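One way such a pattern-based resultmap lookup could work, assuming entries of the form "testname/platformpattern=resultfile" and using fnmatch() for the match; this is an illustrative sketch, not the actual regress driver:

    /*
     * Hypothetical resultmap lookup: the first entry whose test name
     * matches and whose platform pattern matches the host triple wins.
     * fnmatch() supports full pattern expressions, not just prefixes.
     */
    #include <stdio.h>
    #include <string.h>
    #include <fnmatch.h>

    /* Returns 1 and fills "result" if this map line applies, else 0. */
    int lookup_result(const char *mapline, const char *test,
                      const char *host, char *result, size_t len)
    {
        char testpart[64], pattern[128], file[128];

        if (sscanf(mapline, "%63[^/]/%127[^=]=%127s",
                   testpart, pattern, file) != 3)
            return 0;           /* malformed line: ignore it */
        if (strcmp(testpart, test) != 0)
            return 0;
        if (fnmatch(pattern, host, 0) != 0)
            return 0;           /* e.g. "hppa*hpux10*" vs. host triple */
        snprintf(result, len, "%s", file);
        return 1;
    }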
[ { "msg_contents": "I have modified following files.\n\nbin/pg_ctl/pg_ctl.sh\ninclude/miscadmin.h\nbackend/postmaster/poastmaster.c\nbackend/tcop/postgres.c\nbackend/utils/init/miscinit.c\n\nThe reason for the changes is to prevent starting postmaster if\n(standalone) postgres is running and vice versa. Also, to know the pid\nin postmaster.pid is postmaster or postgres, I set following\nconvention:\n\npid > 0: postmaster\npid < 0: (standalone) postgres\n--\nTatsuo Ishii\n", "msg_date": "Sun, 09 Jan 2000 21:46:56 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "postmaster.c postgres.c pg_ctl etc. updated" } ]
[ { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hello Bruce,\n> \n> I've just remembered the other problem with PG that needs to be listed for\n> fixing.\n> \n> This is the 7 field index limit\n> \n> If the need is for a index for the purpose of unique-ness enforcing, there\n> needs to be more than 7 fields.\n> \n> My system needed about 12.\n\nI am working on this now. 7.0 will have a postgres.h parameter that can\nbe changed. Default is 8.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Jan 2000 23:21:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Features for 7.X" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I've just remembered the other problem with PG that needs to be listed for\n>> fixing.\n>> \n>> This is the 7 field index limit\n\nIt's 8, not 7, afaik...\n\n> I am working on this now. 7.0 will have a postgres.h parameter that can\n> be changed. Default is 8.\n\nI looked at this a while ago and realized that the fundamental problem\nis that pg_index depends on types oid8 and int28 (hardwired 8-element\narrays of oid and int2, respectively). Are you going to rename these\ntypes to oidN and int2N and make the value of N a config parameter?\nSeems like a good idea ... but that magic constant 8 is buried in\na depressingly large number of places, a lot of which aren't even\nsymbolic constants :-(\n\nIf you do fix this, I'd suggest bumping the default N up to 16 or so;\nseems like that would make a lot of people happier than N=8...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jan 2000 23:56:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgres Features for 7.X " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> I've just remembered the other problem with PG that needs to be listed for\n> >> fixing.\n> >> \n> >> This is the 7 field index limit\n> \n> It's 8, not 7, afaik...\n\nOh, OK.\n\n> \n> > I am working on this now. 7.0 will have a postgres.h parameter that can\n> > be changed. Default is 8.\n> \n> I looked at this a while ago and realized that the fundamental problem\n> is that pg_index depends on types oid8 and int28 (hardwired 8-element\n> arrays of oid and int2, respectively). Are you going to rename these\n> types to oidN and int2N and make the value of N a config parameter?\n> Seems like a good idea ... but that magic constant 8 is buried in\n> a depressingly large number of places, a lot of which aren't even\n> symbolic constants :-(\n\nI have looked at every 8 in the source tree, and I think I have them\nall. I have now moved INDEX_MAX_KEYS to config.h.in, where it belongs.\n\nI have not changed the type names. I am going to keep them called int28\nand oid8 until we decide we want them to be 16 and I will change the\ntype names. They function fine as oid8 even if they are 16 long. :-)\n\nI am not sure how the index code handles this so I am a little scared to\nbump it up by default.\n\nThere was really only some code in oid8in and int28in that required\nrecoding because the sscanf was using 8 params. The new code loops\nover an sscanf. 
The other changes were just replacement of 8 with the\ndefine.\n\n\n> \n> If you do fix this, I'd suggest bumping the default N up to 16 or so;\n> seems like that would make a lot of people happier than N=8...\n\nOh, OK, just make it 16.  That should work, and be a good way to test my\nchanges.  However, I am not sure everything will work so I will keep it\nat 8 until we can test it to see what happens.  Only very large data\nsets with very long indexes are going to trigger the index code.\n\nMay as well see if someone _knows_ if the index code will work with >8\nindexed fields.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 00:16:26 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Postgres Features for 7.X" } ]
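For illustration, the "loop over an sscanf" idea mentioned above can look roughly like the following; a hedged sketch using strtol() in a loop (which also skips any amount of leading whitespace), not the actual oid8in/int28in code:

    /*
     * Sketch: parse a space-separated list of up to INDEX_MAX_KEYS
     * values instead of one hardwired 8-argument sscanf() call.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define INDEX_MAX_KEYS 8        /* assumed configurable default */

    int parse_vector(const char *str, short *vec)
    {
        int i, n = 0;
        char *end;

        for (i = 0; i < INDEX_MAX_KEYS; i++)
            vec[i] = 0;             /* pad unsupplied slots with zeros */
        while (*str != '\0' && n < INDEX_MAX_KEYS)
        {
            long v = strtol(str, &end, 10); /* skips leading spaces */
            if (end == str)
                break;              /* no more numbers */
            vec[n++] = (short) v;
            str = end;
        }
        return n;                   /* number of values actually read */
    }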
[ { "msg_contents": "I have moved INDEX_MAX_KEYS to postgres.h, and have removed the\nhard-coded limits that it is 8 fields. I hope I got all of them. The\ndefault is still 8.\n\nThere were only a few places left that had the 8 hard-coded.\n\nI haven't tested non-8 values but they should work.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 9 Jan 2000 23:35:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Number of index fields configurable" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> \n> I have moved INDEX_MAX_KEYS to postgres.h, and have removed the\n> hard-coded limits that it is 8 fields. I hope I got all of them. The\n> default is still 8.\n> \n> There were only a few places left that had the 8 hard-coded.\n> \n> I haven't tested non-8 values but they should work.\n>\n\nShouldn't the following catalog be changed ?\n\nCATALOG(pg_index)\n{\n....\n\tint28\t\tindkey;\n\t^^^^^\n\toid8\t\tindclass;\n\t^^^^^\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Mon, 10 Jan 2000 14:09:25 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Number of index fields configurable" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Bruce Momjian\n> > \n> > I have moved INDEX_MAX_KEYS to postgres.h, and have removed the\n> > hard-coded limits that it is 8 fields. I hope I got all of them. The\n> > default is still 8.\n> > \n> > There were only a few places left that had the 8 hard-coded.\n> > \n> > I haven't tested non-8 values but they should work.\n> >\n> \n> Shouldn't the following catalog be changed ?\n> \n> CATALOG(pg_index)\n> {\n> ....\n> \tint28\t\tindkey;\n> \t^^^^^\n> \toid8\t\tindclass;\n> \t^^^^^\n\nThe underlying definitions of the types are now based in the #define\nparameter. Not sure if this is going to work so I have not change the\nactual type names yet. I have a few more changes to commit now.\n\nAlso, what should the new names be? Can't call it int16. Does anyone\noutside the source tree rely on those type names?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 00:17:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Number of index fields configurable" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Shouldn't the following catalog be changed ?\n>> \n>> CATALOG(pg_index)\n>> {\n>> ....\n>> int28\t\tindkey;\n>> ^^^^^\n>> oid8\t\tindclass;\n>> ^^^^^\n\n> The underlying definitions of the types are now based in the #define\n> parameter. Not sure if this is going to work so I have not change the\n> actual type names yet. I have a few more changes to commit now.\n\nIf we think the parameter works, then we should test it by changing\nthe value ;-)\n\n> Also, what should the new names be? 
Can't call it int16.\n\nI like oidN and int2N, or oidn and int2n if you object to uppercase\nnames.\n\n> Does anyone outside the source tree rely on those type names?\n\nI was worried about that at first --- we couldn't change the names\nif it would break pg_dump files. But the system catalogs themselves\ndon't get dumped as such, so it shouldn't be a problem. There might\nbe a few folks out there who are using oid8 or int28 as column types\nin user tables, but surely not many. What it comes down to is that\na few people might have to tweak their code or dump files, but not\nvery many compared to the number of people who will be glad of the\nimprovement.\n\nBut if these types are to have parameterizable sizes, I think it's\ncritical that oidNin() and int2Nin() be robust about the number of\ninput values they see. I suggest that they ought to work like this:\n\n* if the number of supplied values is less than the currently configured\nvalue of N, silently fill in zeroes for the extra places.\n\n* if the number of supplied values is more than N, check the extra\nvalues to see if any are not 0. Complain if any are not 0, but\nif they are all 0 then silently accept it.\n\nThis will allow interoperability of pg_dump files across different\nvalues of N, and raise an error only if there's really a problem.\n\nYou have the first behavior but not the second...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jan 2000 00:44:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Number of index fields configurable " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Shouldn't the following catalog be changed ?\n> >> \n> >> CATALOG(pg_index)\n> >> {\n> >> ....\n> >> int28\t\tindkey;\n> >> ^^^^^\n> >> oid8\t\tindclass;\n> >> ^^^^^\n> \n> > The underlying definitions of the types are now based in the #define\n> > parameter. Not sure if this is going to work so I have not change the\n> > actual type names yet. I have a few more changes to commit now.\n> \n> If we think the parameter works, then we should test it by changing\n> the value ;-)\n> \n> > Also, what should the new names be? Can't call it int16.\n> \n> I like oidN and int2N, or oidn and int2n if you object to uppercase\n> names.\n\nHow about oidvector and int2vector. A vector is a 1-dimmensional array.\nCalling it an array is too confusing.\n\n> \n> > Does anyone outside the source tree rely on those type names?\n> \n> I was worried about that at first --- we couldn't change the names\n> if it would break pg_dump files. But the system catalogs themselves\n> don't get dumped as such, so it shouldn't be a problem. There might\n> be a few folks out there who are using oid8 or int28 as column types\n> in user tables, but surely not many. What it comes down to is that\n> a few people might have to tweak their code or dump files, but not\n> very many compared to the number of people who will be glad of the\n> improvement.\n\n> \n> But if these types are to have parameterizable sizes, I think it's\n> critical that oidNin() and int2Nin() be robust about the number of\n> input values they see. I suggest that they ought to work like this:\n> \n> * if the number of supplied values is less than the currently configured\n> value of N, silently fill in zeroes for the extra places.\n> \n> * if the number of supplied values is more than N, check the extra\n> values to see if any are not 0. 
Complain if any are not 0, but\n> if they are all 0 then silently accept it.\n> \n> This will allow interoperability of pg_dump files across different\n> values of N, and raise an error only if there's really a problem.\n\nI will tweak the code to properly check for trailing numbers.  Right now\nmultiple spaces cause problems, and trailing numbers are ignored.  With\noidn, we can get away with trailing zeros because an oid of 0 is\ninvalid, but with int2n, a zero is valid, so I think we can't just ignore\nextra trailing zeros.  We can pad with zeros, however.  Comments?\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 09:11:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Number of index fields configurable" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I will tweak the code to properly check for trailing numbers.  Right now\n> multiple spaces cause problems, and trailing numbers are ignored.  With\n> oidn, we can get away with trailing zeros because an oid of 0 is\n> invalid, but with int2n, a zero is valid, so I think we can't just ignore\n> extra trailing zeros.  We can pad with zeros, however.  Comments?\n\nFor the primary use of these things, which is attribute numbers in\npg_index, padding or dropping zeroes is correct behavior --- unused\npositions in the vector will have zero values, same as for the oid\nvector.  I think it's OK to define the type's behavior suitably for\nthe system's use, because it's not intended as a general-purpose user\ntype; users oughta be using int2[].  (Really, the only reason we have\nthese types at all is that we depend on having compile-time-constant\nfield sizes in the system catalogs that are accessed via\ninclude/catalog/'s struct declarations...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jan 2000 10:06:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Number of index fields configurable " }, { "msg_contents": "> But if these types are to have parameterizable sizes, I think it's\n> critical that oidNin() and int2Nin() be robust about the number of\n> input values they see.  I suggest that they ought to work like this:\n> \n> * if the number of supplied values is less than the currently configured\n> value of N, silently fill in zeroes for the extra places.\n> \n> * if the number of supplied values is more than N, check the extra\n> values to see if any are not 0.  Complain if any are not 0, but\n> if they are all 0 then silently accept it.\n> \n> This will allow interoperability of pg_dump files across different\n> values of N, and raise an error only if there's really a problem.\n\nOK, different solution.  I decided there is no need to be dumping out\nzeros to pad the type.  New code does the following. 
This looks very\nclean to me:\n\n\ttest=> create table x (y int28);\n\tCREATE\n\ttest=> insert into x values ('1 2 3');\n\tINSERT 18697 1\n\ttest=> select * from x;\n\t   y   \n\t-------\n\t 1 2 3\n\t(1 row)\n\ttest=> insert into x values ('1 2 3 4 5 6 7 8');\n\tINSERT 18699 1\n\ttest=> select * from x;\n\t        y        \n\t-----------------\n\t 1 2 3\n\t 1 2 3 4 5 6 7 8\n\t(3 rows)\n\t\n\ttest=> insert into x values ('1 2 3 4 5 6 7 8 9');\n\tERROR:  int28 value has too many values\n\ttest=> insert into x values ('1 2 3 4 5 6 7 8 0');\n\tERROR:  int28 value has too many values\n\nNotice the trailing zero is treated as an error.  Because we trim\ntrailing zeros, we can afford to handle things this way.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 10:37:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Number of index fields configurable" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> OK, different solution.  I decided there is no need to be dumping out\n> zeros to pad the type.\n\nOh, that's a thought.  You haven't really gained anything in generality,\nsince the code is still treating zero as a special case; but I agree it\nlooks nicer (and is easier to check for too many values).\n\nOnly worry I have is whether it will interoperate comfortably with the\nold code.  Let's see:\n\n* old dump to new: no problem, unless you've reduced MAX_INDEX_KEYS\n  below 8 (doesn't seem likely).\n\n* new to old: fails for every case except where there's exactly 8\n  non zero entries.\n\nThe latter is a bit bothersome, but may not be a big deal --- in reality\nwe don't dump and reload pg_index this way.\n\nBTW, be sure you are only suppressing *trailing* zeroes not *embedded*\nzeroes. 
I know that oid8 has to deal with embedded zeroes (some of\n> the pg_proc entries look like that); int28 might not, but the code\n> should probably act the same for both.\n\nYes, only trailing. New code walks from end to beginning until it finds\na non-zero. If the entry is all zeros, you get a zero-length string\noutput.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 10:59:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Number of index fields configurable" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> * new to old: fails for every case except where there's exactly 8\n>> non zero entries.\n\n> Not sure about this. Old code did sscanf on 8 entries, but if it\n> returned fewer, it padded with zeros, so new->old should work.\n\nOh, OK. Nevermind then...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jan 2000 11:02:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Number of index fields configurable " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I will tweek the code to properly check for trailing numbers. Right now\n> > multiple spaces cause problems, and trailing numbers are ignored. With\n> > oidn, we can get away with trailing zeros because an oid of 0 is\n> > invalid, but with int2n, a zero is valid, so I think we can't just ignore\n> > extra trailing zeros. We can pad with zeros, however. Comments?\n> \n> For the primary use of these things, which is attribute numbers in\n> pg_index, padding or dropping zeroes is correct behavior --- unused\n> positions in the vector will have zero values, same as for the oid\n> vector. I think it's OK to define the type's behavior suitably for\n> the system's use, because it's not intended as a general-purpose user\n> type; users oughta be using int2[]. (Really, the only reason we have\n> these types at all is that we depend on having compile-time-constant\n> field sizes in the system catalogs that are accessed via\n> include/catalog/'s struct declarations...)\n\nRenamed oid8 ->oidvector and int28->int2vector. initdb everyone. New\ntype names require it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 11:09:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Number of index fields configurable" } ]
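A minimal sketch of the trailing-zero-trimming output convention settled on above; hypothetical code, not the committed int2vectorout():

    /*
     * Sketch: print the vector without trailing zeros (an all-zero
     * vector prints as an empty string).  Embedded zeros are kept.
     */
    #include <stdio.h>

    #define INDEX_MAX_KEYS 8

    void print_int2vector(const short *vec)
    {
        int last = INDEX_MAX_KEYS - 1;
        int i;

        /* walk from end to beginning until a non-zero entry is found */
        while (last >= 0 && vec[last] == 0)
            last--;
        for (i = 0; i <= last; i++)
            printf(i == 0 ? "%d" : " %d", vec[i]);
    }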
[ { "msg_contents": "OK, I have fixed int28in and oid8 so they properly skip over the\nspaces. Thanks to Tom for the tip.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 00:24:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "oid8in and int28in" } ]
[ { "msg_contents": "Hi\n\nI have just committed changes to cache invalidation stuff.\nMaybe this would fix the following TODO.\n* elog() flushes cache, try invalidating just entries from current xact,\n perhaps using invalidation cache \n\n1) In case of abort,catalog cache and relation cache will\n be invalidated for system tuples marked by Relation-\n Mark4RollbackHeapTuple(). Both heap_insert() and\n heap_update() call RelationMark4RollbackHeapTuple(). \n\n2) CommandCounterIncrement() calls AtCommit_LocalCache()\n instead of AtCommit_Cache(). Registration of cache\n invalidation for other backends was postponed until commit.\n\n3) The new function ImmediateSharedRelationCacheInvalidate()\n is called from smgrunlink()/smgrtruncate() in order to register\n relation cache invalidation immediately.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Mon, 10 Jan 2000 15:38:27 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Cache invalidation was changed a little" } ]
[ { "msg_contents": "the $$ syntax is where make passes $ to the command line. ie it should\npass something like:\n\n\t`java makeVersion`\n\nAnyhow, you can bypass this, by running one of the jdbc2 or java2 rules\ninstead of all.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Allan Huffman [mailto:[email protected]]\nSent: Friday, January 07, 2000 4:41 PM\nTo: [email protected]\nSubject: [HACKERS] make JDBC postgresql.jar error\n\n\nI'm trying to compile the src/interfaces/jdbc postgresql.jar file but I\nget a syntax error (pg 6.5.2):\n/bin/sh: syntax error at line 1: '(' unexpected\nmake: *** [all] Error 2\n\nFunny thing is that I'm under the C shell.....using gcc, Solaris 7 on a\nSparc.\n\nI tried to replace $( ) with ' ' per the instructions but there is some\nsyntax that I am not familiar with: $$($(JAVA) makeVersion). How should\nthis look after the replacement?\n\nReally appreciate help. I've developed 3k SLOC under Visual Cafe\n(managed to rewrite out all Symantec classes). It's running fine in the\nVisual Cafe environment. Now I need to field it under the Netscape\nSuitespot server.\n\nThanks\n\nAllan in Belgium\n\n\n\n************\n", "msg_date": "Mon, 10 Jan 2000 09:04:53 -0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] make JDBC postgresql.jar error" } ]
[ { "msg_contents": "While chasing the VACUUM problem reported by Stephen Birch, I noticed\nsomething that looks like a potential trouble spot. Vacuum's initial\nscan routine, vc_scanheap, runs through the table to mark tuples as\nknown committed or known dead, if possible (consulting the transaction\nlog for tuples not yet so marked). It does the right things as far as\nmarking committed/dead if it sees a tuple marked HEAP_MOVED_OFF or\nHEAP_MOVED_IN, which could only be there if a prior VACUUM failed\npartway through. But it doesn't *clear* those bits. Seems to me that\nthat will screw up the subsequent vc_rpfheap procedure --- in\nparticular, leaving a HEAP_MOVED_IN flag set will cause vc_rpfheap to\ncomplain (correctly!) about 'HEAP_MOVED_IN not expected', whereas\nleaving HEAP_MOVED_OFF set will confuse vc_rpfheap because it will\nthink it moved the tuple itself.\n\nIn short, if we really want to recover from a failed VACUUM then we'd\nbetter clear those bits during vc_scanheap. I am thinking that the\ncode starting at about line 720 ought to look like\n\n if (!(tuple.t_data->t_infomask & HEAP_XMIN_COMMITTED))\n {\n if (tuple.t_data->t_infomask & HEAP_XMIN_INVALID)\n tupgone = true;\n else if (tuple.t_data->t_infomask & HEAP_MOVED_OFF)\n {\n // mark tuple commited or invalid as appropriate,\n // same as before\nadd >>> tuple.t_data->t_infomask &= ~HEAP_MOVED_OFF;\n }\n else if (tuple.t_data->t_infomask & HEAP_MOVED_IN)\n {\n // mark tuple commited or invalid as appropriate,\n // same as before\nadd >>> tuple.t_data->t_infomask &= ~HEAP_MOVED_IN;\n }\n else\n {\n // other cases same as before\n }\n }\n\nadd >>> if (tuple.t_data->t_infomask & (HEAP_MOVED_OFF | HEAP_MOVED_IN))\nadd >>> {\nadd >>> elog(NOTICE, \"Clearing unexpected HEAP_MOVED flag\");\nadd >>> tuple.t_data->t_infomask &= ~(HEAP_MOVED_OFF | HEAP_MOVED_IN);\nadd >>> pgchanged = true;\nadd >>> }\n\nComments anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jan 2000 10:44:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Potential vacuum bug?" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> While chasing the VACUUM problem reported by Stephen Birch, I noticed\n> something that looks like a potential trouble spot. Vacuum's initial\n> scan routine, vc_scanheap, runs through the table to mark tuples as\n> known committed or known dead, if possible (consulting the transaction\n> log for tuples not yet so marked). It does the right things as far as\n> marking committed/dead if it sees a tuple marked HEAP_MOVED_OFF or\n> HEAP_MOVED_IN, which could only be there if a prior VACUUM failed\n> partway through. But it doesn't *clear* those bits. Seems to me that\n> that will screw up the subsequent vc_rpfheap procedure --- in\n> particular, leaving a HEAP_MOVED_IN flag set will cause vc_rpfheap to\n> complain (correctly!) 
about 'HEAP_MOVED_IN not expected', whereas\n> leaving HEAP_MOVED_OFF set will confuse vc_rpfheap because it will\n> think it moved the tuple itself.\n>\n\nI'm for your change.\nAnyway it's not good to hold useless flags unnecessarily.\n\nHowever I could hardly find a case that would cause trouble.\nIt may occur in the following rare cases though I'm not sure.\n\nHEAP_MOVED_OFF and (neither HEAP_XMIN_COMMITTED nor\nHEAP_XMIN_INVALID) and the tuple was recently deleted/updated.\n\nThis means that the previous VACUUM couldn't remove the tuple\nbecause old transactions were running then and moreover the\nVACUUM half succeeded (i.e. aborted between internal commit and\nexternal commit).  Now VACUUM marks this tuple as tupgone once\nbut would turn it off later if old transactions are still running.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Tue, 11 Jan 2000 10:58:27 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Potential vacuum bug?" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I'm for your change.\n> However I could hardly find a case that would cause trouble.\n> It may occur in the following rare cases though I'm not sure.\n\n> HEAP_MOVED_OFF and (neither HEAP_XMIN_COMMITTED nor\n> HEAP_XMIN_INVALID) and the tuple was recently deleted/updated.\n\nI'm not sure if HEAP_MOVED_OFF is really dangerous, but I am sure\nthat HEAP_MOVED_IN is dangerous --- vc_rpfheap will error out if\nit hits a tuple marked that way.  So, if a VACUUM fails partway\nthrough vc_rpfheap (I guess this would have to happen after the\ninternal commit), it'd be possible that later VACUUMs wouldn't\nwork anymore.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jan 2000 10:31:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Potential vacuum bug? " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I'm for your change.\n> > However I could hardly find a case that would cause trouble.\n> > It may occur in the following rare cases though I'm not sure.\n> \n> > HEAP_MOVED_OFF and (neither HEAP_XMIN_COMMITTED nor\n> > HEAP_XMIN_INVALID) and the tuple was recently deleted/updated.\n> \n> I'm not sure if HEAP_MOVED_OFF is really dangerous, but I am sure\n> that HEAP_MOVED_IN is dangerous --- vc_rpfheap will error out if\n> it hits a tuple marked that way.  So, if a VACUUM fails partway\n> through vc_rpfheap (I guess this would have to happen after the\n> internal commit), it'd be possible that later VACUUMs wouldn't\n> work anymore.\n>\n\nIIRC, there are no HEAP_MOVED_IN and not HEAP_XMIN_COMMITTED\ntuples when vc_rpfheap() is called because such tuples have already\nbeen marked unused in vc_scanheap().\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 12 Jan 2000 09:04:31 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Potential vacuum bug? " } ]
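A condensed sketch of the fix proposed in this thread; the flag values are assumed for illustration and the surrounding vacuum logic is elided:

    /*
     * Sketch: after deciding commit status, clear any leftover
     * HEAP_MOVED bits so a later repair pass does not mistake them
     * for its own work.  Flag values here are placeholders.
     */
    #define HEAP_MOVED_OFF 0x1000   /* assumed, for illustration */
    #define HEAP_MOVED_IN  0x2000

    void clear_moved_flags(unsigned short *infomask, int *pgchanged)
    {
        if (*infomask & (HEAP_MOVED_OFF | HEAP_MOVED_IN))
        {
            *infomask &= ~(HEAP_MOVED_OFF | HEAP_MOVED_IN);
            *pgchanged = 1;         /* page must be written back */
        }
    }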
[ { "msg_contents": "\nCan anyone here help?\n\nVince.\n\n---------- Forwarded message ----------\nDate: Mon, 10 Jan 2000 08:52:06 +0000\nFrom: Jude Weaver <[email protected]>\nTo: [email protected]\nSubject: Simmultanous Connections\n\nWe are a company that writes academic software . We are converting our\nsoftware to use either PostgreSQL or MySQL. We are leaning toward\nPostgreSQL, but, I still have several questions.\nI hope someone can answer these for me.\n\n1. I have read the Q&A for postgreSQL and would like to know the\ndifference between a temporary\n and a permanant connection. Do you have a connection when you open\nthe database or only when\n the frontend sends a job to the backend? If 32 people are running\na module that opens a database\n is that 32 connections or will it vary as users read and write to\nthe database?\n\n2. I saw in the Q&A that to run more than 32 simmultanous connects could\nbe a big drain on our re-\n sources. Our Linux boxes , in general, are Intel 166 to 500s, 128MG\nof RAM and 6.2 to 13 GIG.\n Can anyone tell me roughly how much resources per connection does\nPostgreSQL use?\n\n3. If I have 90 teachers posting grades at the same time, (the grade\nposting program opens 5 dif-\n ferent databases) and 25 secretaries and administrators poking\naround in assorted databases\n looking at information, will postgresql handle that much traffic?\n\nI would appreciate any information you can give me,\nThank you - Jude Weaver.\n\n\n", "msg_date": "Mon, 10 Jan 2000 11:09:06 -0500 (EST)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Simmultanous Connections (fwd)" }, { "msg_contents": "\n> Can anyone here help?\n>\n> Vince.\n>\n> ---------- Forwarded message ----------\n> Date: Mon, 10 Jan 2000 08:52:06 +0000\n> From: Jude Weaver <[email protected]>\n> To: [email protected]\n> Subject: Simmultanous Connections\n>\n> We are a company that writes academic software . We are converting our\n> software to use either PostgreSQL or MySQL. We are leaning toward\n> PostgreSQL, but, I still have several questions.\n> I hope someone can answer these for me.\n>\n> 1. I have read the Q&A for postgreSQL and would like to know the\n> difference between a temporary\n>\tand a permanant connection. Do you have a connection when you open\n> the database or only when\n>\t the frontend sends a job to the backend? If 32 people are running\n> a module that opens a database\n>\t is that 32 connections or will it vary as users read and write to\n> the database?\n\nSounds like she may looking at postgres in PHP - at least PHP uses\nthat temporary and permanant connection concept. My experience is\nthat PHP persistent connections are not worth it - the time to\nestablish a new connection is pretty small, and stale connections can\ncause problems.\n\n> 2. I saw in the Q&A that to run more than 32 simmultanous connects could\n> be a big drain on our re-\n> sources. Our Linux boxes , in general, are Intel 166 to 500s, 128MG\n> of RAM and 6.2 to 13 GIG.\n> Can anyone tell me roughly how much resources per connection does\n> PostgreSQL use?\n\nIf an idle psql connection is left open, we're looking at about 1 MB\nRAM plus 4MB swap on my linux box.\n\nAs I noted above, I'd generally recommend against persistent\nconnections when there are more than a few users.\n\nSounds like the machines have the capacity for what sounds like a\nfairly small task. 
Of course, there would generally be only one\nserver machine, so I would recommend choosing one of the faster ones.\nBut it should be stable and usable ath eith end of the spectrum, at\nleast from my experience.\n\n> 3. If I have 90 teachers posting grades at the same time, (the grade\n> posting program opens 5 dif-\n> ferent databases) and 25 secretaries and administrators poking\n> around in assorted databases\n>\tlooking at information, will postgresql handle that much traffic?\n\nPostgres should handle that easily.\n\nJust my $0.02 worth. Hope it's helpful.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n", "msg_date": "Mon, 10 Jan 2000 11:46:27 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" }, { "msg_contents": "On Mon, 10 Jan 2000, Vince Vielhaber wrote:\n\n> 1. I have read the Q&A for postgreSQL and would like to know the\n> difference between a temporary and a permanant connection. Do you have\n> a connection when you open the database or only when the frontend\n> sends a job to the backend? If 32 people are running a module that\n> opens a database is that 32 connections or will it vary as users read\n> and write to the database?\n\nYou will have 32 connections open to the backend ...\n\n> 2. I saw in the Q&A that to run more than 32 simmultanous connects\n> could be a big drain on our re- sources. Our Linux boxes , in general,\n> are Intel 166 to 500s, 128MG of RAM and 6.2 to 13 GIG. Can anyone\n> tell me roughly how much resources per connection does PostgreSQL use?\n\nIt depends on what the connections are doing...if someone is doing a\n'SELECT...ORDER BY', it will take more resources then if you are doing\nsomething that doesn't involve any sort routines...\n\n> 3. If I have 90 teachers posting grades at the same time, (the grade\n> posting program opens 5 dif- ferent databases) and 25 secretaries and\n> administrators poking around in assorted databases looking at\n> information, will postgresql handle that much traffic?\n\n\t5 different databases, vs 5 different tables? 5 different\ndatabases will mean 90 x 5 (450) connections opened up...whereas 5 tables\nwould be just 90 connections...\n\n\t... but, either way, will it handle that much traffic? give it\nenough RAM, and I personally don't see why not, but I've yet to hit *that*\nkind of a load on it. Right now, I have PostgreSQL setup to handle\nseveral databases, and the postmaster processes each take up ~4-5Meg:\n\nhub> ps aux | grep data\npgsql 895 0.0 0.2 4508 1416 d0- S 6:52AM 0:00.98 /home/database/v\npgsql 896 0.0 0.2 3976 1308 d0- I 6:52AM 0:00.02 /home/database/v\n\nWhen I open up a session/connection to a database, I'm seeing:\n\npgsql 71041 5.1 0.4 5028 3492 ?? R 11:40AM 0:00.54 /home/database/v\npgsql 71032 0.0 0.4 4992 3148 ?? S 11:40AM 0:00.02 /home/database/v\npgsql 71034 0.0 0.4 4980 2976 ?? S 11:40AM 0:00.02 /home/database/v\n\nNow, I always get this backwards/confused, but...the first value (ie. 4508) is \nthe binary size, which is mis-informed due to the use of shared libraries...\nthe important one is the second value (ie. 
1416), which, again, if I recall\ncorrectly, is the datasize...for the udmsearch database, just starting up\n'psql udmsearch', each database is taking <3.5Meg...depending on the sizes of\nyour queries and whatnot, figure that I'd need 3.5Meg*450 (~1.5gig) of memory\non this machine to handle it (I have half of that now)...bear in mind that\nnot all 450 connections would be active, so there is room for some processes\nto be swap'd out and whatnot..\n\nMy personal opinion is that there isn't anything that PostgreSQL hasn't been\nable to handle so far, to the best of my knowledge...my next step for my \nsystem is to go dual-processor, and bring on a full gig of RAM, but my machine\nalso does alot more then just PostgreSQL :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 10 Jan 2000 12:46:41 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" }, { "msg_contents": "At 11:46 AM 1/10/00 -0500, Karl DeBisschop wrote:\n\n>Sounds like she may looking at postgres in PHP - at least PHP uses\n>that temporary and permanant connection concept. My experience is\n>that PHP persistent connections are not worth it - the time to\n>establish a new connection is pretty small, and stale connections can\n>cause problems.\n\nBoy, persistent connections in AOLserver sure help a lot (ask Lamar\nOwen!). If stale connections cause problems in your PHP environment,\nthen the PHP persistent connection implementation needs some work.\n\nForking a new backend is actually considerably more expensive then\njust passing back the PID of an existing backend...\n\nOn Sun Solaris systems, forking is about 25 times as costly as \nstarting up a new thread (according to data from Sun). Of course,\nreturning an existing persistent db connection's even cheaper than\nstarting a new thread. And that comparative cost will vary between\nOS.\n\nBut not necessarily in a direction favoring more forking :)\n\nI sent her a private note saying she really probably shouldn't be looking\nat MySQL for her application, presumably having a real transaction-based\ndb is a Good Thing when maintaining a database of student grades. Told\nher she should be looking at various real RDBMS solutions and should leave\nMySQL out of the picture entirely (while also telling her I thought PG\nwould work fine for her needs, of course).\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 10 Jan 2000 10:07:24 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" }, { "msg_contents": "Don Baccus wrote:\n> \n> At 11:46 AM 1/10/00 -0500, Karl DeBisschop wrote:\n> \n> >Sounds like she may looking at postgres in PHP - at least PHP uses\n> >that temporary and permanant connection concept. My experience is\n> >that PHP persistent connections are not worth it - the time to\n> >establish a new connection is pretty small, and stale connections can\n> >cause problems.\n \n> Boy, persistent connections in AOLserver sure help a lot (ask Lamar\n> Owen!). 
If stale connections cause problems in your PHP environment,\n> then the PHP persistent connection implementation needs some work.\n\nLet's work some math.\n\nUnder AOLserver, using the pooled connection paradigm that it uses, for\n5 databases, you would need to define 5 pools. You then can control how\nmany instances of each pool can be opened at any given time. So, if all\ndatabases need the same number of connections average, you raise the max\non pool instances until users quit getting busy messages during normal\nusage -- which usually , for a small number of users (~25 here), is only\n2 or 3 instances. \n\nThe persistent pooled model avoids fork() penalties -- after all, there\nis overhead there, regardless of how small that overhead may be.\n\nI have gone as far as reducing the instances to 1 here -- it's amazing\nhow few people actually do simultaneous accesses! I currently am\nrunning with an instance max of 3 -- and users get busy's very rarely.\n\nWith 90 users on a single database with 5 tables, an instance max of\n10-20 would probably give less than a 10% busy rate. And, as you add\nmore RAM, you can up your instance max to adjust.\n\nI don't know how close to the AOLserver model PHP is (I think it is\npretty close, as the beta of PHP4 is buildable to run as a module under\nAOLserver), but the concept of pooled persistent connections is a sound\none, and eliminates some grief (as long as you watch your transactions\n-- don't want two connections that happen to share a pool instance to\nshare a transaction roolback!). Plus, you can service that required\nnnumber of users at varying satisfaction/busy levels depending upon your\ncurrent server resources.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 10 Jan 2000 13:35:38 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" }, { "msg_contents": "\n> Boy, persistent connections in AOLserver sure help a lot (ask Lamar\n> Owen!). If stale connections cause problems in your PHP environment,\n> then the PHP persistent connection implementation needs some work.\n\nThis isn't really a hackers issue, so I'll try to be brief but also\ngive a little more info than I originally did. Maybe any further\ndiscussion would be best placed in pgsql-general.\n\nBasically, I think it may depend on the use - for our website, we get\nconnections from a variety of sources - most of them don't repeat for\na long time, if ever. Which means a bunch sit around at any given\ntime, never to be reused. If the new connections come fast enough,\nthis can translate to real problems unless they timeout quickly, which\ndefeats the purpose.\n\nThat being said, maybe the PHP implementaion does need some work, or\nmaybe there are site parameters we could tune to make it work. But\nwhenever we use it, we do eventually end up in trouble as a result.\n\nSo, personally, I don't recommend it in situations where alot of\ndifferent clients will be connecting to the DBMS - at least if low\nmaintennence is a key goal.\n\n> Forking a new backend is actually considerably more expensive then\n> just passing back the PID of an existing backend...\n\n>From the point of view of the server, absolutely. 
But that connection\ntime is still a very small part of the user's total trransaction time.\nAnd, although I am making alot of guesses as to the nature of the\nplanned DB will be, my guess is that overall machine load will not be\nso high that the process forking becomes critical. My guess is that\nsupport will be hard to come by in alot of public school environments,\nso I'd guess their building for trouble-free operation before speed.\n\n> I sent her a private note saying she really probably shouldn't be looking\n> at MySQL for her application, presumably having a real transaction-based\n> db is a Good Thing when maintaining a database of student grades. Told\n> her she should be looking at various real RDBMS solutions and should leave\n> MySQL out of the picture entirely (while also telling her I thought PG\n> would work fine for her needs, of course).\n\nThat's a good summary of my intended take-home point as well, though\nyou said it much more clearly. All the rest was just personal\nexperience that applies to our environment but my not apply to yours\nor hers.\n\nKarl\n", "msg_date": "Mon, 10 Jan 2000 13:38:14 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" }, { "msg_contents": "At 01:35 PM 1/10/00 -0500, Lamar Owen wrote:\n\n>I don't know how close to the AOLserver model PHP is (I think it is\n>pretty close, as the beta of PHP4 is buildable to run as a module under\n>AOLserver), but the concept of pooled persistent connections is a sound\n>one, and eliminates some grief (as long as you watch your transactions\n>-- don't want two connections that happen to share a pool instance to\n>share a transaction roolback!).\n\nSpoken like a long-suffering user of AOLserver's original postgres\ndriver :)\n\nI've solved this particular problem in the latest version of the driver,\nand other problems related to backends crashing and the like. This is\nwhy I suggest that if there are problems with PHPs persistent database\nconnections and Postgres that the PHP implementation of such connections\nneeds work. I know from experience that persistent pooled connections\ncan be implemented in a non-robust fashion (the old postgres driver for\nAOLserver) but I also know that they can be made robust, from personal\nexperience.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 10 Jan 2000 11:23:45 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" }, { "msg_contents": "At 01:38 PM 1/10/00 -0500, Karl DeBisschop wrote:\n\n>This isn't really a hackers issue, so I'll try to be brief but also\n>give a little more info than I originally did. Maybe any further\n>discussion would be best placed in pgsql-general.\n\nPerhaps. I'll give one brief answer here, though. It probably\ndoesn't hurt the developers to see how their product is used in\nreal-life scenarios anyway...\n\n>Basically, I think it may depend on the use - for our website, we get\n>connections from a variety of sources - most of them don't repeat for\n>a long time, if ever. Which means a bunch sit around at any given\n>time, never to be reused. 
If the new connections come fast enough,\n>this can translate to real problems unless they timeout quickly, which\n>defeats the purpose.\n\n>That being said, maybe the PHP implementaion does need some work, or\n>maybe there are site parameters we could tune to make it work. But\n>whenever we use it, we do eventually end up in trouble as a result.\n\nMy short answer: yes, it does need work if it works as you describe.\nThe whole point of pooling persistent connections is to allow re-use.\nIt sounds like either PHP makes it hard/impossible or that (maybe?)\nyou folks haven't quite figured out how fully exploit their implementation\nof pooled connections.\n\n>So, personally, I don't recommend it in situations where alot of\n>different clients will be connecting to the DBMS - at least if low\n>maintennence is a key goal.\n\nThe problem isn't persistent connections, the problem is the particular\nimplementation you're using. AOLserver's implementation is trouble\nfree, for Postgres, Sybase, Oracle, and Solid. And totally\ntransparent to scripts and dynamic pages (other than SQL differences\ndue to the dbs themselves). The PHP folks are making it available \nwithin AOLserver, as Lamar Owen has pointed out. If they also plug\ninto the AOLserver implementation of pooled persistent database\nconnections, then PHP users will also have a platform available which \nreliably supports such connections.\n\n>> Forking a new backend is actually considerably more expensive then\n>> just passing back the PID of an existing backend...\n>\n>>From the point of view of the server, absolutely. But that connection\n>time is still a very small part of the user's total trransaction time.\n\nDepends on how you're using the database. If you're using it to\npersonalize pages, for instance, you'll be using a lot of simple,\nquick selects. If you're only using the database for complicated,\nslow queries then perhaps you're right.\n\nLet's put it this way ... folks who have a lot more experience than\nme at running very busy database-backed web sites have observed that it\nDOES make a large difference in the scalability of a site. These,\nthough, are sites make heavy use of the database when serving up\npages.\n\nIf forking weren't a problem, the Apache folks wouldn't've bothered\nbuilding modPerl, for instance...\n\n>And, although I am making alot of guesses as to the nature of the\n>planned DB will be, my guess is that overall machine load will not be\n>so high that the process forking becomes critical. My guess is that\n>support will be hard to come by in alot of public school environments,\n>so I'd guess their building for trouble-free operation before speed.\n\nNothing to disagree with here, other than the fact that my own \npersonal experience tells me that persistent connections needn't be a\nsource of trouble. 
If they are PHP users, though, and if the\nsite really is using PHP as you suspect, then they should probably\navoid them if your experience is an accurate reflection of the state\nof the implementation of persistent connections available to PHP\nusers.\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Mon, 10 Jan 2000 11:40:52 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" }, { "msg_contents": "When I was researching PG vs MySQL - the big kicker was transaction support\n(mySQL doesn't have it). It looks like MySQL is faster than PG and so is\ngreat for serving data to a web site. But if you also need to perform\nupdates to multiple tables, you probably will want to use transactions - so\nuse PostgreSQL.\n\nI chose PostgreSQL for this reason.\n\nThe PostgreSQL team is incredible. To try and contribute a little, we\nswitched our development from the released software to the PG development\nsoftware (so we can report, and sometimes fix problems) - any bugs we\ndiscovered but couldn't fix ourselves were fixed within hours. Our own\nmodifications were checked and entered in their development tree in less\nthan an hour. Cool.\n\nThe code itself is of very high quality.\n\nSteve\n\n\n\nVince Vielhaber wrote:\n\n> Can anyone here help?\n>\n> Vince.\n>\n> ---------- Forwarded message ----------\n> Date: Mon, 10 Jan 2000 08:52:06 +0000\n> From: Jude Weaver <[email protected]>\n> To: [email protected]\n> Subject: Simmultanous Connections\n>\n> We are a company that writes academic software . We are converting our\n> software to use either PostgreSQL or MySQL. We are leaning toward\n> PostgreSQL, but, I still have several questions.\n> I hope someone can answer these for me.\n>\n> 1. I have read the Q&A for postgreSQL and would like to know the\n> difference between a temporary\n> and a permanant connection. Do you have a connection when you open\n> the database or only when\n> the frontend sends a job to the backend? If 32 people are running\n> a module that opens a database\n> is that 32 connections or will it vary as users read and write to\n> the database?\n>\n> 2. I saw in the Q&A that to run more than 32 simmultanous connects could\n> be a big drain on our re-\n> sources. Our Linux boxes , in general, are Intel 166 to 500s, 128MG\n> of RAM and 6.2 to 13 GIG.\n> Can anyone tell me roughly how much resources per connection does\n> PostgreSQL use?\n>\n> 3. If I have 90 teachers posting grades at the same time, (the grade\n> posting program opens 5 dif-\n> ferent databases) and 25 secretaries and administrators poking\n> around in assorted databases\n> looking at information, will postgresql handle that much traffic?\n>\n> I would appreciate any information you can give me,\n> Thank you - Jude Weaver.\n>\n> ************\n\n", "msg_date": "Mon, 10 Jan 2000 12:08:33 -0800", "msg_from": "Stephen Birch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" }, { "msg_contents": "Karl DeBisschop writes:\n > Sounds like she may looking at postgres in PHP - at least PHP uses\n > that temporary and permanant connection concept. My experience is\n > that PHP persistent connections are not worth it - the time to\n > establish a new connection is pretty small, and stale connections can\n > cause problems.\n > \n > > 2. 
I saw in the Q&A that to run more than 32 simmultanous connects could\n > > be a big drain on our re-\n > > sources. Our Linux boxes , in general, are Intel 166 to 500s, 128MG\n > > of RAM and 6.2 to 13 GIG.\n > > Can anyone tell me roughly how much resources per connection does\n > > PostgreSQL use?\n > \n > If an idle psql connection is left open, we're looking at about 1 MB\n > RAM plus 4MB swap on my linux box.\n > \n > As I noted above, I'd generally recommend against persistent\n > connections when there are more than a few users.\n\n As an example, I have systems with 2 or 3 hundred simultaneos\nconnections and besides being short time connections it's impossible to\nhave 200 or 300 backends running at the same time.\n In this case, I had to create a proxy to use few connections. I have \nAF_INET and AF_UNIX versions.\n\n []'s\n\nMateus Cordeiro Inssa\n---------------------\nLinux User: 76186 Kernel: 2.3.36\nICQ (Licq): 15243895\n---------------------\[email protected]\[email protected]\n\nTue Jan 11 08:45:00 EDT 2000\n", "msg_date": "Tue, 11 Jan 2000 08:45:51 -0200 (EDT)", "msg_from": "Mateus Cordeiro Inssa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" }, { "msg_contents": "At 08:45 AM 1/11/00 -0200, Mateus Cordeiro Inssa wrote:\n\n> As an example, I have systems with 2 or 3 hundred simultaneos\n>connections and besides being short time connections it's impossible to\n>have 200 or 300 backends running at the same time.\n\nAgain, the problem isn't persistent connections but rather an\nlousy implementation of pooled persistent connections. \n\n> In this case, I had to create a proxy to use few connections.\n\nAnother approach is to throttle the number of connections in the\npersistent pool manager. This is how AOLserver deals with the \nproblem. You tell it the max number of connections to fire up\nand only that many handles are doled out to threads, the rest\nwaiting for others to complete. There's another parameter which\nplaces a ceiling on the number of threads allowed to wait for\na pool connection, which allows me to return a \"too busy\" \nmessage to the user if I so choose. Of course, if a server\nstarts getting too many of these it's time to upgrade to\nsomething faster, to dig into one's queries looking for\nneedless inefficiency, or maybe to remember that you forgot\nto say \"vacuum analyze\" (who, me?)\n\nSome folks like to roll their own. I'm lazy and picked a web\nserver that has already solved such problems for me.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 11 Jan 2000 07:21:43 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Simmultanous Connections (fwd)" } ]
[ { "msg_contents": "Tom, I can't find any cases where 7 or 9 are used to represent the\nmaximum number of attributes indexed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 11:24:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "oid8/int28" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, I can't find any cases where 7 or 9 are used to represent the\n> maximum number of attributes indexed.\n\nCould be they're all gone; I know I've seen some in the past though.\n\nSome brave soul should try increasing the constant and then using\nindexes with > 8 columns... better compile -g and --enable-casserts ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jan 2000 19:30:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] oid8/int28 " } ]
[ { "msg_contents": "I have created a new define FUNC_MAX_ARGS which is has the same value as\nINDEX_MAX_KEYS. Currently they are both 8, but I think they can now be\nchanged to higher values.\n\nI removed two old maxfuncarg defines that were confusing.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 12:10:00 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Number of args to a function configurable" } ]
[ { "msg_contents": "We have the oidvector and int2vector length's set in pg_type.h. Is\nthere any way to make those values configurable from defines in\nconfig.h?\n\nIf not, I will have to move the defines to postgres.h so people can not\nchange them. Maybe that is a better place for them anyway.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 13:51:17 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Changing oidvector length" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> We have the oidvector and int2vector length's set in pg_type.h. Is\n> there any way to make those values configurable from defines in\n> config.h?\n\n> If not, I will have to move the defines to postgres.h so people can not\n> change them. Maybe that is a better place for them anyway.\n\nActually, I suspect they should be in postgres_ext.h, which is where\nNAMEDATALEN is. All of these values are potentially visible to code\noutside Postgres, so postgres_ext.h seems like the right place.\n\nconfig.h would be appropriate for something you could tweak without\nchanging the external API of Postgres...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jan 2000 19:35:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing oidvector length " }, { "msg_contents": "You have:\n\n#define FUNC_MAX_ARGS (INDEX_MAX_KEYS+1)\n\nThis is WRONG, I'm pretty sure --- FUNC_MAX_ARGS should be the same\nas the length of oidvector.\n\nUser-declared functions can definitely only have as many args as there\nare slots in oidvector, because that's all the room there is to declare\nthem in pg_proc.\n\nYou may have gotten confused because fmgr.c allowed 9 args to be passed,\neven though there's no way to declare such a function; I think this was\na hack to support some special system usage --- possibly selectivity\nestimators had 9 args at one time. (They don't now, so the 9th-arg\nsupport was dead code as far as I can tell.) But if we are going to\nincrease the default MAX_ARGS above 8 then the issue goes away anyway,\nand there's no need for fmgr.c to support more args than can normally\nbe declared.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jan 2000 19:52:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing oidvector length " }, { "msg_contents": "> You have:\n> \n> #define FUNC_MAX_ARGS (INDEX_MAX_KEYS+1)\n> \n> This is WRONG, I'm pretty sure --- FUNC_MAX_ARGS should be the same\n> as the length of oidvector.\n> \n> User-declared functions can definitely only have as many args as there\n> are slots in oidvector, because that's all the room there is to declare\n> them in pg_proc.\n\n\nI was going to ask about that. The original value for this was 9, while\noid8 was only 8 long. When I went to 16, FUNC_MAX_ARGS has to 17 or\ninitdb fails on int4in. No idea why, and want to ask if anyone knows\nwhy this is required. 
I know it should be 16, but I can't figure out\nwhy it doesn't work at 16, only at 17.\n\n\n> \n> You may have gotten confused because fmgr.c allowed 9 args to be passed,\n> even though there's no way to declare such a function; I think this was\n> a hack to support some special system usage --- possibly selectivity\n\nbtbuild, I believe.\n\n> estimators had 9 args at one time. (They don't now, so the 9th-arg\n> support was dead code as far as I can tell.) But if we are going to\n> increase the default MAX_ARGS above 8 then the issue goes away anyway,\n> and there's no need for fmgr.c to support more args than can normally\n> be declared.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 20:25:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Changing oidvector length" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> You may have gotten confused because fmgr.c allowed 9 args to be passed,\n>> even though there's no way to declare such a function; I think this was\n>> a hack to support some special system usage --- possibly selectivity\n>\n> btbuild, I believe.\n\nAh, you are right (it just blew up on me when I tried it at eight ;-)).\n\n>> But if we are going to\n>> increase the default MAX_ARGS above 8 then the issue goes away anyway,\n>> and there's no need for fmgr.c to support more args than can normally\n>> be declared.\n\nThis still holds though --- we will just require INDEX_MAX_ARGS to be\nat least 9 from here on out...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Jan 2000 21:25:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing oidvector length " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> You may have gotten confused because fmgr.c allowed 9 args to be passed,\n> >> even though there's no way to declare such a function; I think this was\n> >> a hack to support some special system usage --- possibly selectivity\n> >\n> > btbuild, I believe.\n> \n> Ah, you are right (it just blew up on me when I tried it at eight ;-)).\n> \n> >> But if we are going to\n> >> increase the default MAX_ARGS above 8 then the issue goes away anyway,\n> >> and there's no need for fmgr.c to support more args than can normally\n> >> be declared.\n> \n> This still holds though --- we will just require INDEX_MAX_ARGS to be\n> at least 9 from here on out...\n\nBut it bombs on 16.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 21:34:48 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Changing oidvector length" } ]
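Tom's argument, in compilable form: the declared argument types have to fit in pg_proc's oidvector column, so the declarable maximum is exactly that column's length. A hedged sketch as it might look inside the backend — the helper name and error text are invented, and the usual backend includes are assumed:

    /* Why FUNC_MAX_ARGS must equal the oidvector length (sketch):
     * a function's argument types live in pg_proc.proargtypes, which
     * has exactly FUNC_MAX_ARGS slots, so nothing larger can be
     * declared no matter what fmgr.c is willing to pass at runtime. */
    static void
    store_arg_types(int nargs, const Oid *argtypes, Oid *proargtypes)
    {
        int i;

        if (nargs > FUNC_MAX_ARGS)
            elog(ERROR, "functions cannot have more than %d arguments",
                 FUNC_MAX_ARGS);
        for (i = 0; i < nargs; i++)
            proargtypes[i] = argtypes[i];
        for (; i < FUNC_MAX_ARGS; i++)
            proargtypes[i] = 0;     /* trailing zeroes, as in pg_proc.h */
    }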
[ { "msg_contents": "Well, this function was introduced, because the query string limit was\nremoved from the libpq library. However, I don't believe that any other\nlibraries have been worked on.\n\nMikeA\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian\nTo: Oliver Elphick\nCc: [email protected]\nSent: 00/01/10 05:06\nSubject: Re: [HACKERS] Shared library version\n\n> There appear to have been changes in the shared library libpq.\n> \n> The default library from 6.5.3 with psql from current tree gives:\n> \n> olly@linda$ psql template1\n> ...\n> psql: error in loading shared libraries: psql: undefined symbol: \n> createPQExpBuffer\n> \n> olly@linda$ LD_PRELOAD=/usr/local/pgsql/lib/libpq.so.2.0 psql\ntemplate1\n> ...\n> template1=>\n> \n> Since the library has changed, it needs to have a new version number.\n\nSeems I should just kick up every minor version number for 7.0 for all\ninterfaces. OK?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n************\n", "msg_date": "Mon, 10 Jan 2000 21:07:41 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Shared library version" } ]
[ { "msg_contents": "I have tried updating the max arg/index lengths to 16, but initdb is now\nfailing. Any ideas? I am looking for the cause, but can't find it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 15:47:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "cvs tree is broken" } ]
[ { "msg_contents": "I have found the problem. Fixing now. Current source will work.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 10 Jan 2000 18:08:05 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "initdb fixed" } ]
[ { "msg_contents": "Tom, why are the non-trailing zeros in the *vector types as initialized\nin the catalog/*.h files.\n\nYou mentioned you knew what they meant.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jan 2000 00:14:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "zeros in oidvector type" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, why are the non-trailing zeros in the *vector types as initialized\n> in the catalog/*.h files.\n\n> You mentioned you knew what they meant.\n\nIn pg_proc's proargtypes entries, a zero in a valid argument position\n(ie, one of the first 'pronargs' positions) can mean either \"any type\nis acceptable\" or \"opaque argument type\" (not sure if those are quite\nthe same thing or not!) or \"C string input to a datatype's typinput\nconversion routine\" (definitely not the same thing).\n\nThe entries in pg_proc.h call out these zeroes explicitly even when\nthey are trailing arguments --- generally, the number of values shown\nin the proargtypes column should equal pronargs.\n\nI don't think there's any good way for oidvectorout to duplicate that\nstring, if that's what you were wondering about; oidvectorout has no\naccess to the value of pronargs.\n\nSomeday I would like to replace these special meanings of \"zero type\noid\" with definite nonzero type OIDs (this has been discussed before,\nat least for the C-string case). Then the issue goes away.\n\nBTW, I just managed to pass the regression tests with INDEX_MAX_KEYS\nset to 10. Will commit a couple more fixes momentarily. Next thing\nis to see if functions and indexes with >8 args actually work...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jan 2000 00:55:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zeros in oidvector type " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, why are the non-trailing zeros in the *vector types as initialized\n> > in the catalog/*.h files.\n> \n> > You mentioned you knew what they meant.\n> \n> In pg_proc's proargtypes entries, a zero in a valid argument position\n> (ie, one of the first 'pronargs' positions) can mean either \"any type\n> is acceptable\" or \"opaque argument type\" (not sure if those are quite\n> the same thing or not!) or \"C string input to a datatype's typinput\n> conversion routine\" (definitely not the same thing).\n> \n> The entries in pg_proc.h call out these zeroes explicitly even when\n> they are trailing arguments --- generally, the number of values shown\n> in the proargtypes column should equal pronargs.\n\nThe reason I ask is that there are some parts of the code that try to\nfind the number of args by looking for the _first_ non-zero entry in the\nlist. I changed those to look for the _last_ non-zero entry, but it\nsounds like that is still wrong.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jan 2000 06:36:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zeros in oidvector type" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> The reason I ask is that there are some parts of the code that try to\n> find the number of args by looking for the _first_ non-zero entry in the\n> list.\n\nWhere? This is certainly broken for anything that needs to deal with\nan arbitrary pg_proc entry, but it might be OK in limited contexts.\nAlso, if you are thinking of stuff that looks at *index* definitions\nrather than *function* definitions, I think it's OK.\n\n> I changed those to look for the _last_ non-zero entry, but it\n> sounds like that is still wrong.\n\nI'm dubious about changing something like that without fairly close\ninvestigation and/or a known bug to fix. If those bits of code are\nwrong, they were wrong before the FUNC_MAX_ARGS change ... and if\nthey weren't wrong, maybe they are now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jan 2000 09:53:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zeros in oidvector type " } ]
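To make the hazard concrete: once a zero OID is legal inside the first pronargs positions, no scan of the vector — for the first zero, the first non-zero, or the last non-zero — can recover the argument count. A small illustration based on Tom's description above:

    /* A one-argument type-input function such as int4in takes a C
     * string, which is stored as a zero type OID, so its pg_proc entry
     * is pronargs = 1 with an all-zero proargtypes vector (sketch).   */
    static int
    arg_count_sketch(int pronargs, const Oid *proargtypes)
    {
        /* Any heuristic scan of proargtypes reports 0 arguments for
         * such an entry; the only reliable count is the explicit one. */
        (void) proargtypes;
        return pronargs;
    }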
[ { "msg_contents": "At least it passes regress tests --- haven't tried making an index\nwith more than 8 keys yet.\n\nI just committed config.h.in with default settings of\nINDEX_MAX_KEYS = FUNC_MAX_ARGS = 16. This forces initdb!\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jan 2000 01:03:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "INDEX_MAX_KEYS = 16 ... it works, too" }, { "msg_contents": "BTW, I have done quick-and-dirty tests of indexes with up to 16 keys\nand SQL functions with up to 16 arguments, and both seem to work.\nDid not try plpgsql or pltcl functions --- perhaps someone who's\nhandier than I am with those languages can check them.\n\nI also committed a patch to ensure that index declarations with\nmore than INDEX_MAX_KEYS will be rejected; this oversight was\nnoted by Hiroshi a while ago.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 02:01:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INDEX_MAX_KEYS = 16 ... it works, too " } ]
[ { "msg_contents": "What went wrong here? This is from a freshly updated and compiled source.\n\npeter ~$ pg-install/bin/initdb -D $HOME/pg-install/data\nThis database system will be initialized with username \"peter\".\nThis user will own all the data files and must also own the server\nprocess.\n \nCreating database system directory /home/peter/pg-install/data\nCreating database system directory /home/peter/pg-install/data/base\nCreating database XLOG directory /home/peter/pg-install/data/pg_xlog\nCreating template database in /home/peter/pg-install/data/base/template1\nERROR: TypeCreate: function 'int4in(opaque)' does not exist\nERROR: TypeCreate: function 'int4in(opaque)' does not exist\nCreating global relations in /home/peter/pg-install/data/base\nAdding template1 database to pg_database\n \n \ninitdb failed.\nRemoving /home/peter/pg-install/data.\n\n\nI don't feel like digging this up now, surely someone must have noticed\nit.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Tue, 11 Jan 2000 14:26:44 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Who fried this?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> What went wrong here? This is from a freshly updated and compiled source.\n> ERROR: TypeCreate: function 'int4in(opaque)' does not exist\n\nThat was one of the intermediate states last night while Bruce and I\nwere fixing the FUNC_MAX_ARGS code. Update and rebuild (including\nconfigure) and you should be OK.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Jan 2000 10:06:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Who fried this? " }, { "msg_contents": "> Peter Eisentraut <[email protected]> writes:\n> > What went wrong here? This is from a freshly updated and compiled source.\n> > ERROR: TypeCreate: function 'int4in(opaque)' does not exist\n> \n> That was one of the intermediate states last night while Bruce and I\n> were fixing the FUNC_MAX_ARGS code. Update and rebuild (including\n> configure) and you should be OK.\n\nAnd many thanks to Tom for getting me out of this problem I caused.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jan 2000 11:25:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Who fried this?" } ]
[ { "msg_contents": "sorry, i never hit \"send\" on this.\n\nAndrew Yu <[email protected]> writes:\n>> From: Bruce Momjian <[email protected]>\n>> Subject: Re: [HACKERS] Historical trivia (was Re: First Major Open Source Database)\n>> To: Tom Lane <[email protected]>\n>> Date: Sat, 8 Jan 2000 01:31:40 -0500 (EST)\n>> CC: PostgreSQL-development <[email protected]>, [email protected],\n>> [email protected]\n>> \n>> I am CC'ing Jolly and Andrew on this. They may know the answer.\n>> \n>> ---------------------------------------------------------------------------\n>> \n>>> Thomas Lockhart <[email protected]> writes:\n>>>>> It did not use any Ingres code, as told to me by Jolly, I\n>>>>> think. My book has Ingres mentioned as an \"ancestor\" of\n>>>>> Postgres.\n>>> \n>>>> I suppose we could have figured this out ourselves, since\n>>>> Postgres was originally written in Lisp, and afaik Ingres was\n>>>> always C or somesuch traditional compiled-only code. We still\n>>>> see evidence of this in our code tree with the way lists and\n>>>> parser nodes are handled.\n>>> \n>>> It's clear from both the comments and remnants of coding\n>>> conventions that the planner/optimizer was originally Lisp code,\n>>> and was hand- translated to C at some point in the dim mists of\n>>> prehistory (early 1990s, possibly ;-)). That Lisp heritage is\n>>> responsible for some of the better things about the code, and\n>>> also some of the worse things.\n>>> \n>>> But I'm not sure I believe that *all* of the code was originally\n>>> Lisp. I've never heard of a Lisp interface for yacc-generated\n>>> parsers, for example. The parts of the executor I've looked at\n>>> don't seem nearly as Lispy as the parser/planner/optimizer,\n>>> either. So it seems possible that parts of Postgres were\n>>> written afresh in Lisp while other parts were lifted from an\n>>> older C implementation.\n>>> \n>>> </idle speculation>\n>>> \n>>> Does anyone here still recall the origins of Postgres? I'm\n>>> curious to know more about the history of this beast.\n> \n> I was under the impression that postgres was all lisp at one point\n> and that it does not share code with Ingres. Unfortuantely, all this\n> happen before my time. You must know about this for sure? I guess I\n> can ask my boss (Zelaine Fong) too. :)\n\nthe project started with jeff anton, steven grady and some grad\nstudents. they started from scratch. no code was inherited from\ningres. (later, i ripped off a little bit of code from ingres to\nimplement one of the numeric types...i think i might have replaced it\nwith posix calls before pg95.)\n\nfrom the beginning until late 1989, the system was divided into two\nparts: one written in C, the other written in franz lisp. the line\nwas drawn below the traffic cop, query executor and query optimizer;\nhence, the flow of control went back and forth from the lisp \"top\"\nhalf into the C \"bottom\" half. the interface was relatively narrow\n(parser, system catalogs, access method interface) and used the\nessentially undocumented franz lisp equivalent of JNI.\n\nsteven wrote and maintained the parser, which was always written using\nlex/yacc (i.e., C) but had to generate (in C!) a lisp parse tree.\njeff wrote the traffic cop and (the first version of) a lot of the\nrest of the C software. mike hirohama joined early on and wrote stuff\nlike the first b-tree implementation before becoming chief programmer\nin 1988. chinheng hong (an MS student, now at oracle) wrote the query\nexecutor in franz lisp. 
zelaine fong (an MS student, now at informix)\nwrote the query optimizer in franz lisp. lisp/C integration was still\ngoing on when philip chang and i joined the project in late 1986.\n\nin late 1989, jeff goh wrote a postgres-specific lisp->C translator\nthat automated most of the conversion.\n\ni think the executor looks more C-like for a couple of reasons.\nfirst, people have hacked on the (C version of the) executor a lot\nmore because it's a lot less mysterious and fragile than the\nstill-lispy-looking optimizer. chinheng's executor was basically gone\nby 1991. second, the executor mostly just walks the query plan and\n\"does stuff,\" whereas the optimizer is actually creating nodes and\nlists and manipulating those data structures. furthermore, it has\nmany more such data structures than just the parse tree and query plan\ndata structures (i.e., the ones for its own internal use). this means\nthat a larger fraction of the code has to do with getting/setting\nthese lispy structures.\n\nhope this helps,\n-- paul\n", "msg_date": "Tue, 11 Jan 2000 14:26:10 PST", "msg_from": "\"Paul M. Aoki\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Historical trivia (was Re: First Major Open Source\n\tDatabase)" } ]
[ { "msg_contents": "\n> And, of course, that Oracle, Informix and the rest ought to \n> get off their\n> collective asses and support SQL 92. After all, they \n> undoubtably contributed\n> to the development of those standards - I can't believe they \n> didn't fund\n> representatives to the committees.\n\nInformix does support the full SQL92 outer join syntax since Version 7.31.\n\nAndreas \n", "msg_date": "Tue, 11 Jan 2000 18:47:52 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> > And, of course, that Oracle, Informix and the rest ought to \n> > get off their\n> > collective asses and support SQL 92. After all, they \n> > undoubtably contributed\n> > to the development of those standards - I can't believe they \n> > didn't fund\n> > representatives to the committees.\n> \n> Informix does support the full SQL92 outer join syntax since Version 7.31.\n\n7.31 is a fairly new release, in the past year or two.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jan 2000 13:47:46 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" } ]
[ { "msg_contents": "\n> > \t5/\tserial data type\n> > \t\to\tSerial type must return inserted key value\n> \n> How does Informix return the value?\n\nsqlca.sqlerrd[1] in esqlc\n\nAndreas\n", "msg_date": "Tue, 11 Jan 2000 18:53:25 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" } ]
[ { "msg_contents": "Today I ran into an inconsistency between two versions of postgresql\nin how psql handles copies from stdin. At this point I am not sure\nhow the rewrite of psql does things, but thought I'd mention the\nproblem in case someone with it installed can check.\n\nThe issue is how the command\n\n psql -f test.sql db < test.dat\n\nis treated, given the following files:\n\n -- test.sql\n drop table test;\n create table test (name text);\n copy test from stdin;\n select * from test;\n\nand\n\n test.dat\n a\n b\n\nSpecifically v6.4.2 and v6.5.2 differ in the outcome, with v6.4.2\nproducing what I would expect and v6.5.2 producing anomalous output.\nNote that performing the copy as\n\n psql -c \"copy test from stdin\" db < test.dat\n\nworks fine in either case.\n\nv6.4.2 output: The contents of test.dat are read into the table as\none might expect having redirected that file to stdin and copying from\nstdin.\n\nv6.5.2 output: The contents of test.dat are not read into the table at\nall. Instead, the remainder of the test.sql file (i.e., select * ...)\nare read into the table.\n\nHow does the current version behave when performing these copies? If\nit still behaves like 6.5.2, I suspect there is some bug in handling\nthe copy command.\n\nCheers,\nBrook\n\n", "msg_date": "Tue, 11 Jan 2000 16:35:24 -0700 (MST)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "psql -f inconsistency with \"copy from stdin\"" }, { "msg_contents": "7.0 behaves like 6.5.* in this regard because the code is pretty much the\nsame. Thanks for pointing this out.\n\nOn 2000-01-12, Brook Milligan mentioned:\n\n> Today I ran into an inconsistency between two versions of postgresql\n> in how psql handles copies from stdin. At this point I am not sure\n> how the rewrite of psql does things, but thought I'd mention the\n> problem in case someone with it installed can check.\n> \n> The issue is how the command\n> \n> psql -f test.sql db < test.dat\n> \n> is treated, given the following files:\n> \n> -- test.sql\n> drop table test;\n> create table test (name text);\n> copy test from stdin;\n> select * from test;\n> \n> and\n> \n> test.dat\n> a\n> b\n> \n> Specifically v6.4.2 and v6.5.2 differ in the outcome, with v6.4.2\n> producing what I would expect and v6.5.2 producing anomalous output.\n> Note that performing the copy as\n> \n> psql -c \"copy test from stdin\" db < test.dat\n> \n> works fine in either case.\n> \n> v6.4.2 output: The contents of test.dat are read into the table as\n> one might expect having redirected that file to stdin and copying from\n> stdin.\n> \n> v6.5.2 output: The contents of test.dat are not read into the table at\n> all. Instead, the remainder of the test.sql file (i.e., select * ...)\n> are read into the table.\n> \n> How does the current version behave when performing these copies? If\n> it still behaves like 6.5.2, I suspect there is some bug in handling\n> the copy command.\n> \n> Cheers,\n> Brook\n> \n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 12 Jan 2000 04:30:43 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\"" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> 7.0 behaves like 6.5.* in this regard because the code is pretty much the\n> same. 
Thanks for pointing this out.\n\nOf course, the question is which way is right...\n\nI can see the potential usefulness of doing\n\tpsql -f driving.script <data.file\nbut on the other hand, it bothers me a good deal that a script\ncontaining\n\tCOPY table FROM STDIN;\n\t... data here ...\n\t\\.\n(as generated by such unheard-of, seldom-used utilities as pg_dump)\nwould work when sourced by psql <pgdump.script and *fail* when sourced\nby psql -f pgdump.script. But that's what will happen if we change\nit back.\n\nI suspect the change in behavior from 6.4 to 6.5 may have been a\ndeliberate change to avoid this failure mode. It'd be worth checking\nthe archives to see if you can find any discussion about it.\n\nIt seems to me that we ought to provide both behaviors, but make sure\nthat the one that supports data-in-the-script is the one invoked by\nCOPY FROM STDIN (since that's what pg_dump uses). Perhaps psql's \\copy\ncommand can be set up to support the other alternative.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 01:42:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\" " }, { "msg_contents": " Of course, the question is which way is right...\n\n I can see the potential usefulness of doing\n\t psql -f driving.script <data.file\n but on the other hand, it bothers me a good deal that a script\n containing\n\t COPY table FROM STDIN;\n\t ... data here ...\n\t \\.\n (as generated by such unheard-of, seldom-used utilities as pg_dump)\n would work when sourced by psql <pgdump.script and *fail* when sourced\n by psql -f pgdump.script. But that's what will happen if we change\n it back.\n\n I suspect the change in behavior from 6.4 to 6.5 may have been a\n deliberate change to avoid this failure mode. It'd be worth checking\n the archives to see if you can find any discussion about it.\n\n It seems to me that we ought to provide both behaviors, but make sure\n that the one that supports data-in-the-script is the one invoked by\n COPY FROM STDIN (since that's what pg_dump uses). Perhaps psql's \\copy\n command can be set up to support the other alternative.\n\nBut isn't there a greater difference between copy and \\copy than this?\nDoesn't one act on the frontend and one on the backend? There needs\nto be a mechanism for copying data in through the front end without\nspecial permissions.\n\nAlso, it seems unfortunate from a semantics point of view to have COPY\nFROM STDIN not actually refer to the stdin file of the process.\nPerhaps that is necessary to preserve compatability with old pg_dump\n(new versions could be changed in this regard of course), but it is\nnot what I would naturally expect STDIN to mean in the context of 30\nyears of Unix development. Further, this use of STDIN clearly\nconflicts with the meaning of STDOUT in the analogous copy out command\nwhich doesn't insert the output into the script file but rather\ndirects it to the stdout file. \n\nIn order to maintain some compatability with these broader uses of the\nterms STDIN/STDOUT (while still supporting previous pg_dump scripts,\nat least for awhile), I think it is worth exploring some options. A\nfew ideas are:\n\n- Introduce a new syntax for the 6.5.2 here-doc semantics.\n Possibilities might include COPY FROM HERE (copy ends at EOF or \\.)\n or COPY UNTIL <tag> (copy ends at matching <tag>, like shell\n here-docs). 
pg_dump would have to be changed to correspond.\n\n- Introduce a new flag to psql to differentiate the interpretation of\n COPY FROM STDIN. This seems confusing to users, but might be\n worthwhile (but become deprecated after a few releases) if the\n syntax is changed and old pg_dump scripts need supporting. New\n scripts and new pg_dump needn't worry about this if they use the new\n syntax.\n\nCheers,\nBrook\n", "msg_date": "Wed, 12 Jan 2000 08:19:23 -0700 (MST)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\"" }, { "msg_contents": "Brook Milligan <[email protected]> writes:\n> It seems to me that we ought to provide both behaviors, but make sure\n> that the one that supports data-in-the-script is the one invoked by\n> COPY FROM STDIN (since that's what pg_dump uses). Perhaps psql's \\copy\n> command can be set up to support the other alternative.\n\n> But isn't there a greater difference between copy and \\copy than this?\n> Doesn't one act on the frontend and one on the backend?\n\nNot when it's COPY FROM STDIN or TO STDOUT --- from the backend's point of\nview, that means transfer data from or to the frontend. What psql\ndoes with it is psql's concern.\n\n(Actually, \\copy is implemented by sending a COPY FROM STDIN/TO STDOUT\ncommand to the backend; the backend can't tell the difference between\nthe two cases, and has no way to know where the data is really coming\nfrom or going to on the client side.)\n\n> - Introduce a new syntax for the 6.5.2 here-doc semantics.\n> Possibilities might include COPY FROM HERE (copy ends at EOF or \\.)\n\nChanging the SQL command is the wrong thing to think about, because\nthe parameter would only be known at the backend which is not where\nit needs to be known to change psql's behavior. Furthermore, from\nthe backend's point of view it *is* sending to or from the only\n\"user interface\" it's got. So I don't think there's anything wrong\nwith the definition of the SQL COPY command. You should be thinking\nabout adding options to psql's \\copy, instead, if you want more\nflexibility in controlling where psql gets or puts data.\n\n> pg_dump would have to be changed to correspond.\n\nIMHO any proposal that requires changing pg_dump is a non-starter,\nbecause it will fail when people try to load 6.5 or earlier dumps\ninto 7.0. But fortunately, pg_dump doesn't use \\copy ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 10:49:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\" " }, { "msg_contents": "On Wed, 12 Jan 2000, Tom Lane wrote:\n\n> > pg_dump would have to be changed to correspond.\n> \n> IMHO any proposal that requires changing pg_dump is a non-starter,\n> because it will fail when people try to load 6.5 or earlier dumps\n> into 7.0. But fortunately, pg_dump doesn't use \\copy ...\n\nI'm confused here...why would \"any proposal that requires changing pg_dump\nis a non-starter\"? How does changing pg_dump in v7.0 affect pg_dump in\nv6.5?\n\nAs long as I can reload my v6.5 data into a v7.0 database using the\npg_dump from v6.5, confused as to why v7.0s pg_dump matters...\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 12 Jan 2000 12:19:59 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\" " }, { "msg_contents": "After further contemplation I am not completely sure which way is the\ncorrect behaviour. Consider me doing this:\n\n--test.sql\nCOPY foo FROM stdin;\ndata\tdata\ndata\tdata\nSELECT * FROM foo;\n\nand running psql -f test.sql < (anything) on it. Then I would expect it\nto behave the other way.\n\nThe -f option is just another way of saying \"get the input from there\". If\nyou use both -f and stdin you're in essence saying \"get the input from\nthere and there\", and that feature does not exist in psql and would be\nhard to extend to the general case.\n\n\nOn 2000-01-12, Brook Milligan mentioned:\n\n> Today I ran into an inconsistency between two versions of postgresql\n> in how psql handles copies from stdin. At this point I am not sure\n> how the rewrite of psql does things, but thought I'd mention the\n> problem in case someone with it installed can check.\n> \n> The issue is how the command\n> \n> psql -f test.sql db < test.dat\n> \n> is treated, given the following files:\n> \n> -- test.sql\n> drop table test;\n> create table test (name text);\n> copy test from stdin;\n> select * from test;\n> \n> and\n> \n> test.dat\n> a\n> b\n> \n> Specifically v6.4.2 and v6.5.2 differ in the outcome, with v6.4.2\n> producing what I would expect and v6.5.2 producing anomalous output.\n> Note that performing the copy as\n> \n> psql -c \"copy test from stdin\" db < test.dat\n> \n> works fine in either case.\n> \n> v6.4.2 output: The contents of test.dat are read into the table as\n> one might expect having redirected that file to stdin and copying from\n> stdin.\n> \n> v6.5.2 output: The contents of test.dat are not read into the table at\n> all. Instead, the remainder of the test.sql file (i.e., select * ...)\n> are read into the table.\n> \n> How does the current version behave when performing these copies? If\n> it still behaves like 6.5.2, I suspect there is some bug in handling\n> the copy command.\n> \n> Cheers,\n> Brook\n> \n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 12 Jan 2000 20:38:29 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\"" }, { "msg_contents": " The -f option is just another way of saying \"get the input from there\". If\n you use both -f and stdin you're in essence saying \"get the input from\n there and there\", and that feature does not exist in psql and would be\n hard to extend to the general case.\n\nBut there are specifically two kinds of input involved here [*]:\n\n- input of SQL commands and such to psql\n- input of data to a COPY command\n\nTo me these are conceptually very distinct (in much the same way you\nhave distinguished already between various output streams; in fact,\nI'm not sure how you have matched those with the output stream from\nCOPY, but it might be relevant to think about that in light of this\ndiscussion). 
Thus, to me it makes sense to say \"take input from there\nand there,\" as long as it is clear that one \"there\" refers to one\ninput stream and the other to the other one. For example, -f\nnaturally refers to the first one above, while the STDIN naturally\nrefers to the second.\n\nSaying that -f should override all other sources of input is\ninconsistent in its own way; after all, that doesn't override a COPY\nFROM \"filename\" command, does it? In that case, you maintain a\ndistinction between two different input streams. It seems that\ndropping that distinction for the special case of \"filename\" == STDIN\nis introducing unnecessary confusion into the semantics of commands.\n\nIn short, I'm not really convinced that it is unreasonable to expect a\ncommand like COPY (or \\copy) to be able to associate itself with an\ninput (or output) stream that is different from that implied by -f,\ngiven that the nature of the various I/O streams is so different and\nclearly defined.\n\nCheers,\nBrook\n\n[*] I'm not sure what you mean by the \"general case,\" but I can't\nthink of any other commands, at least SQL commands, that are naturally\nassociated with more than one input stream, namely the source of the\ncommand itself which may include embedded data. Unless I'm missing\nsomething here, I suspect the \"general case\" is just fine and doesn't\ninteract with the problem I raised. What is problematical is the\nspecial case of a command (perhaps there are others?) that inherently\ninvolves more than one input stream: the source of the command itself\nand the source of data upon which the command operates.\n", "msg_date": "Wed, 12 Jan 2000 13:54:20 -0700 (MST)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\"" }, { "msg_contents": "On 2000-01-12, Tom Lane mentioned:\n\n> It seems to me that we ought to provide both behaviors, but make sure\n> that the one that supports data-in-the-script is the one invoked by\n> COPY FROM STDIN (since that's what pg_dump uses). Perhaps psql's \\copy\n> command can be set up to support the other alternative.\n\n\\copy from stdin is not used yet. That would work. There might be issues\nI'm overlooking now, but anything else that came up in this thread will\nmost likely not work in the general case.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Thu, 13 Jan 2000 00:29:31 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\" " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Wed, 12 Jan 2000, Tom Lane wrote:\n>> IMHO any proposal that requires changing pg_dump is a non-starter,\n>> because it will fail when people try to load 6.5 or earlier dumps\n>> into 7.0. But fortunately, pg_dump doesn't use \\copy ...\n\n> I'm confused here...why would \"any proposal that requires changing pg_dump\n> is a non-starter\"? How does changing pg_dump in v7.0 affect pg_dump in\n> v6.5?\n\nBecause people will be using 6.5 pg_dump to make dump scripts that they\nwill then try to load into 7.0 with 7.0's psql. If we change the way\nthat COPY FROM STDIN is interpreted, we risk trouble with those scripts.\n\nI like Peter's suggestion of defining \"\\copy from stdin\" to mean\n\"read from psql's stdin\". 
That would leave the SQL command COPY FROM\nSTDIN for the other case where the data is in-line in the script.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 19:20:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\" " }, { "msg_contents": "Okay, this is the new law:\n\ncopy x from stdin;\n\n\"stdin\" is wherever the actual copy from stdin command line came from.\nThis is that way because \"stdin\" in that case does not really refer to\nstdin in the classical sense but tells the backend to get the data from\nthe same stream the command came from (namely the network connection), and\nthat's what we're doing.\n\ncopy x to stdout;\n\nThe output goes to wherever select * from x would go to, in particular \\o\naffects this. This is purely because I said so, but I think it's\nreasonable.\n\n\\copy x from stdin\n\nThe input comes from psql's stdin. (Which is more correct in this case since\nit's a _frontend_ copy.)\n\n\\copy x to stdout\n\npsql's stdout\n\n\nI hope everyone's happy now. ;)\n\n\n\nOn 2000-01-12, Brook Milligan mentioned:\n\n> But there are specifically two kinds of input involved here [*]:\n> \n> - input of SQL commands and such to psql\n> - input of data to a COPY command\n> \n> To me these are conceptually very distinct (in much the same way you\n> have distinguished already between various output streams; in fact,\n> I'm not sure how you have matched those with the output stream from\n> COPY, but it might be relevant to think about that in light of this\n> discussion). Thus, to me it makes sense to say \"take input from there\n> and there,\" as long as it is clear that one \"there\" refers to one\n> input stream and the other to the other one. For example, -f\n> naturally refers to the first one above, while the STDIN naturally\n> refers to the second.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n", "msg_date": "Fri, 14 Jan 2000 23:25:44 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql -f inconsistency with \"copy from stdin\"" } ]
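Seen from a libpq client, the backend's "stdin" really is just the frontend stream, which is why psql is free to decide what feeds it. A minimal sketch using the old-style copy calls, with error handling trimmed (the table name is made up):

    #include <libpq-fe.h>

    /* Drive COPY x FROM STDIN by hand: whatever we push down the
     * connection plays the role of "stdin" as far as the backend
     * is concerned.                                                */
    void
    copy_two_rows(PGconn *conn)
    {
        PGresult *res = PQexec(conn, "COPY x FROM STDIN");

        if (PQresultStatus(res) == PGRES_COPY_IN)
        {
            PQputline(conn, "a\n");
            PQputline(conn, "b\n");
            PQputline(conn, "\\.\n");   /* end-of-data marker */
            PQendcopy(conn);
        }
        PQclear(res);
    }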
[ { "msg_contents": "I looked in the \"TODO\" list and didn't find any mention of\navg() or relevant items regarding aggregates so here's my\nbug du jour:\n\nacs=> create table foo(p numeric(9,2));\nCREATE\nacs=> select avg(p) from foo;\nERROR: overflow on numeric ABS(value) >= 10^-1 for field with precision 0\nscale 1723\nacs=> \nacs=> insert into foo values(3);\nINSERT 1014409 1\nacs=> select avg(p) from foo;\nERROR: overflow on numeric ABS(value) >= 10^-1 for field with precision 0\nscale 1723\nacs=> select p from foo;\n p\n----\n3.00\n(1 row)\n\nacs=> select max(p) from foo;\n max\n----\n3.00\n(1 row)\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Tue, 11 Jan 2000 18:15:39 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": true, "msg_subject": "bug in 6.5.3..." }, { "msg_contents": "Don Baccus <[email protected]> writes:\n> I looked in the \"TODO\" list and didn't find any mention of\n> avg() or relevant items regarding aggregates so here's my\n> bug du jour:\n\n> acs=> create table foo(p numeric(9,2));\n> CREATE\n> acs=> select avg(p) from foo;\n> ERROR: overflow on numeric ABS(value) >= 10^-1 for field with precision 0\n> scale 1723\n> acs=> \n> acs=> insert into foo values(3);\n> INSERT 1014409 1\n> acs=> select avg(p) from foo;\n> ERROR: overflow on numeric ABS(value) >= 10^-1 for field with precision 0\n> scale 1723\n\nThat's a known bug I believe (Jan, are you paying attention?). It seems\nto be platform-dependent --- in current sources, I see no failure on an\nHPUX box, but a Linux box fails with\nERROR: overflow on numeric ABS(value) >= 10^-1 for field with precision 2077 scale 22808\nMaybe a big-vs-little-endian kind of problem?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 01:33:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] bug in 6.5.3... " } ]
[ { "msg_contents": "Tom, I see you found the case where the array was being written past the\nend, or so I thought. I now see I was wrong:\n\n /*\n * Need to make these arrays large enough to be sure there is a\n * terminating 0 at the end of each one.\n */\n info->classlist = (Oid *) palloc(sizeof(Oid) * (INDEX_MAX_KEYS+1));\n info->indexkeys = (int *) palloc(sizeof(int) * (INDEX_MAX_KEYS+1));\n info->ordering = (Oid *) palloc(sizeof(Oid) * (INDEX_MAX_KEYS+1));\n...\n\n for (i = 0; i < INDEX_MAX_KEYS; i++)\n info->indexkeys[i] = index->indkey[i];\n+ info->indexkeys[INDEX_MAX_KEYS] = 0;\n for (i = 0; i < INDEX_MAX_KEYS; i++)\n info->classlist[i] = index->indclass[i];\n+ info->classlist[INDEX_MAX_KEYS] = (Oid) 0;\n\n\nThanks again, Tom.\n\nVadim used to bail me out of the jams I get into. Tom, looks like\nyou're my new savior.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 11 Jan 2000 21:44:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "patch re-added" } ]
[ { "msg_contents": "Hmmm, I got the following this morning on version 6.5.2 on DEC Alpha\nduring a vacuum verbose analyze. Ended up with duplicate rows of\neverything.\n\nNOTICE: --Relation tasksids--\nNOTICE: Pages 1356: Changed 349, Reapped 875, Empty 0, New 0; Tup\n88946: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 123921, MinLen 41, MaxLen\n41; Re-using: Free/Avail. Space 5965708/5965708; EndEmpty/Avail. Pages\n0/875. Elapsed 0/0 sec.\nNOTICE: Rel tasksids: Pages: 1356 --> 567; Tuple(s) moved: 31746.\nElapsed 0/0 sec.\nNOTICE: BlowawayRelationBuffers(tasksids, 567): block 764 is referenced\n(private 0, last 0, global 1)\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\nAccording to the mailing list archive\n\nhttp://www.postgresql.org/mhonarc/pgsql-hackers/1999-02/msg00052.html\n\na bug in this area was fixed in 6.4. I seem to remember that somebody is\nlooking at vacuum at the moment, so this may be something to keep in\nmind.\n\nAdriaan\n\n", "msg_date": "Wed, 12 Jan 2000 07:50:10 +0000", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": true, "msg_subject": "BlowAwayRelationBuffers" }, { "msg_contents": "Adriaan Joubert <[email protected]> writes:\n> Hmmm, I got the following this morning on version 6.5.2 on DEC Alpha\n> during a vacuum verbose analyze. Ended up with duplicate rows of\n> everything.\n\nReally!? The referencecount failure doesn't surprise me a whole lot,\ngiven the refcount bugs that I fixed a couple months ago (no, those\nfixes are not in 6.5.* :-(). But VACUUM is supposed to be guaranteed\nproof against generating duplicate tuples by design --- that's what\nall the HEAP_MOVED_OFF and HEAP_MOVED_IN foofaraw is about.\n\nPerhaps there is a glitch in the tuple validity checking logic for\nHEAP_MOVED_OFF/HEAP_MOVED_IN? Anyone see it?\n\nGiven that this was on an Alpha, it could be a 64-bit-platform-\ndependency kind of bug...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 03:24:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BlowAwayRelationBuffers " }, { "msg_contents": "Tom Lane wrote:\n\n> Adriaan Joubert <[email protected]> writes:\n> > Hmmm, I got the following this morning on version 6.5.2 on DEC Alpha\n> > during a vacuum verbose analyze. Ended up with duplicate rows of\n> > everything.\n>\n> Really!? The referencecount failure doesn't surprise me a whole lot,\n> given the refcount bugs that I fixed a couple months ago (no, those\n> fixes are not in 6.5.* :-(). But VACUUM is supposed to be guaranteed\n> proof against generating duplicate tuples by design --- that's what\n> all the HEAP_MOVED_OFF and HEAP_MOVED_IN foofaraw is about.\n>\n> Perhaps there is a glitch in the tuple validity checking logic for\n> HEAP_MOVED_OFF/HEAP_MOVED_IN? Anyone see it?\n>\n> Given that this was on an Alpha, it could be a 64-bit-platform-\n> dependency kind of bug...\n\nThis is not the first time that I've ended up with duplicate tuples: I\neven have a standard mechanism to deal with them :-(! Initially I thought\nthis was due to tables getting corrupted by having index entries that\nwere too large, but that has been fixed (and has caused no problems since\nthe fix you sent -- thanks again!), and this still happens. 
It seems to\nhappen most frequently when there have been a very large number of\nchanges to the tables between vacuums.\n\nAdriaan\n\n", "msg_date": "Wed, 12 Jan 2000 08:42:32 +0000", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] BlowAwayRelationBuffers" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> Adriaan Joubert <[email protected]> writes:\n> > Hmmm, I got the following this morning on version 6.5.2 on DEC Alpha\n> > during a vacuum verbose analyze. Ended up with duplicate rows of\n> > everything.\n> \n> Really!? The referencecount failure doesn't surprise me a whole lot,\n> given the refcount bugs that I fixed a couple months ago (no, those\n> fixes are not in 6.5.* :-(). But VACUUM is supposed to be guaranteed\n> proof against generating duplicate tuples by design --- that's what\n> all the HEAP_MOVED_OFF and HEAP_MOVED_IN foofaraw is about.\n> \n> Perhaps there is a glitch in the tuple validity checking logic for\n> HEAP_MOVED_OFF/HEAP_MOVED_IN? Anyone see it?\n>\n\nI commited the following change to REL tree after 6.5.2.\nIt might be late for Adriaan.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** xact.c.orig Wed Jan 12 17:53:19 2000\n--- xact.c Tue Oct 19 11:54:39 1999\n***************\n*** 733,741 ****\n /*\n * Have the transaction access methods record the status of\n * this transaction id in the pg_log relation. We skip it\n! * if no one shared buffer was changed by this transaction.\n */\n! if (SharedBufferChanged)\n TransactionIdAbort(xid);\n\n ResetBufferPool();\n--- 733,742 ----\n /*\n * Have the transaction access methods record the status of\n * this transaction id in the pg_log relation. We skip it\n! * if no one shared buffer was changed by this transaction\n! * or this transaction has been committed already.\n */\n! if (SharedBufferChanged && !TransactionIdDidCommit(xid))\n TransactionIdAbort(xid);\n\n\n ResetBufferPool();\n", "msg_date": "Wed, 12 Jan 2000 18:02:33 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] BlowAwayRelationBuffers " }, { "msg_contents": "* Tom Lane <[email protected]> [000112 00:56] wrote:\n> Adriaan Joubert <[email protected]> writes:\n> > Hmmm, I got the following this morning on version 6.5.2 on DEC Alpha\n> > during a vacuum verbose analyze. Ended up with duplicate rows of\n> > everything.\n> \n> Really!? The referencecount failure doesn't surprise me a whole lot,\n> given the refcount bugs that I fixed a couple months ago (no, those\n> fixes are not in 6.5.* :-(). But VACUUM is supposed to be guaranteed\n> proof against generating duplicate tuples by design --- that's what\n> all the HEAP_MOVED_OFF and HEAP_MOVED_IN foofaraw is about.\n> \n> Perhaps there is a glitch in the tuple validity checking logic for\n> HEAP_MOVED_OFF/HEAP_MOVED_IN? 
Anyone see it?\n> \n> Given that this was on an Alpha, it could be a 64-bit-platform-\n> dependency kind of bug...\n\nWe've seen this on postgresql 6.5.3 on i386+FreeBSD 4.0, the only\nway I was able to fix it was by dumping the entire table, running\nsort on it and re-importing it.\n\nBtw, I'd be interested in your opinion on the issues I recently\nbrought up with libpq when you have the time.\n\n-Alfred\n\n\n\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 03:13:08 -0800", "msg_from": "Alfred Perlstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BlowAwayRelationBuffers" }, { "msg_contents": "Thanks Hiroshi, I will patch my database and see whether that helps. Guess\ni really ought to upgrade to 6.5.3, but I had some compile problems on\nAlpha which I haven't looked at closely yet.\n\nthanks again,\n\nAdriaan\n\n", "msg_date": "Wed, 12 Jan 2000 11:20:15 +0000", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] BlowAwayRelationBuffers" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n>\n> Thanks Hiroshi, I will patch my database and see whether that helps. Guess\n> i really ought to upgrade to 6.5.3, but I had some compile problems on\n> Alpha which I haven't looked at closely yet.\n>\n\nUnfortunately the patch could neither recover your current status\nnor prevent the occurrence of BlowAwayRelationBuffers.\nIt may only prevent the occurrence of inconsistency after\nthe error.\n\nBlowAwayRelationBuffers is called immediately before truncation of\nthe target relation file in VACUUM. Without applying my patch,\nHEAP_MOVED_OFF tuples would revive after BlowAwayRelationBuffers\nerror.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Thu, 13 Jan 2000 09:03:06 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] BlowAwayRelationBuffers" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]\n> >\n> > Thanks Hiroshi, I will patch my database and see whether that helps. Guess\n> > i really ought to upgrade to 6.5.3, but I had some compile problems on\n> > Alpha which I haven't looked at closely yet.\n> >\n> \n> Unfortunately the patch could neither recover your current status\n> nor prevent the occurrence of BlowAwayRelationBuffers.\n> It may only prevent the occurrence of inconsistency after\n> the error.\n> \n> BlowAwayRelationBuffers is called immediately before truncation of\n> the target relation file in VACUUM. Without applying my patch,\n> HEAP_MOVED_OFF tuples would revive after BlowAwayRelationBuffers\n> error.\n\nWow, our team is really getting good at understanding this low-level code.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 19:41:59 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BlowAwayRelationBuffers]" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I commited the following change to REL tree after 6.5.2.\n> It might be late for Adriaan.\n\n> ! if (SharedBufferChanged)\n> TransactionIdAbort(xid);\n\n> ! 
if (SharedBufferChanged && !TransactionIdDidCommit(xid))\n> TransactionIdAbort(xid);\n\nOK, I guess the point is that if VACUUM aborts at some time after\nit's done its internal commit, this code would have un-done the\ncommit, thereby allowing HEAP_MOVED_OFF tuples to spring back to\nlife?\n\nI was trying to figure out if this change might fix the duplicate-\ntuples-after-failed-VACUUM problems that we've just been hearing\nabout. Certainly there is plenty of stuff going on in VACUUM after\nits internal commit, so plenty of places that could elog(ERROR).\nBut it looks like the very first thing that happens after commit\nis a scan to commit HEAP_MOVED_IN tuples and kill HEAP_MOVED_OFF\ntuples, so this couldn't help much unless the failure happened\nduring that scan. Which doesn't seem really likely...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 12 Jan 2000 22:40:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BlowAwayRelationBuffers " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > I commited the following change to REL tree after 6.5.2.\n> > It might be late for Adriaan.\n> \n> > ! if (SharedBufferChanged)\n> > TransactionIdAbort(xid);\n> \n> > ! if (SharedBufferChanged && !TransactionIdDidCommit(xid))\n> > TransactionIdAbort(xid);\n> \n> OK, I guess the point is that if VACUUM aborts at some time after\n> it's done its internal commit, this code would have un-done the\n> commit, thereby allowing HEAP_MOVED_OFF tuples to spring back to\n> life?\n>\n\nYes.\n \n> I was trying to figure out if this change might fix the duplicate-\n> tuples-after-failed-VACUUM problems that we've just been hearing\n> about. Certainly there is plenty of stuff going on in VACUUM after\n> its internal commit, so plenty of places that could elog(ERROR).\n> But it looks like the very first thing that happens after commit\n> is a scan to commit HEAP_MOVED_IN tuples and kill HEAP_MOVED_OFF\n\nCertainly when BlowAwayRelationBuffers() is called,commit to HEAP_\nMOVED_IN(OFF) was already completed.\nHowever it seems that the pages which are about to be truncated\nare not touched.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Thu, 13 Jan 2000 13:05:41 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] BlowAwayRelationBuffers " } ]
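To make the one-line fix above concrete: VACUUM commits internally and then keeps working, so a later elog(ERROR) runs abort processing for an xid that pg_log already records as committed. Below is a minimal compilable sketch of the guarded abort; the typedef, flag, and two helper functions are stubs standing in for the real 6.5 backend declarations, not actual backend code.

#include <stdbool.h>

typedef unsigned int TransactionId;   /* stub for the backend's typedef */

/* Stubs standing in for backend state and pg_log access (illustration only) */
static bool SharedBufferChanged = true;
static bool TransactionIdDidCommit(TransactionId xid) { (void) xid; return true; }
static void TransactionIdAbort(TransactionId xid) { (void) xid; }

static void
record_abort(TransactionId xid)
{
	/*
	 * Without the TransactionIdDidCommit() test, aborting here would
	 * overwrite the commit already recorded for this xid, and tuples
	 * VACUUM stamped HEAP_MOVED_OFF under it would spring back to
	 * life as duplicates -- the failure discussed in this thread.
	 */
	if (SharedBufferChanged && !TransactionIdDidCommit(xid))
		TransactionIdAbort(xid);
}

int main(void) { record_abort(42); return 0; }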
[ { "msg_contents": "hello,\ni have pb with the postgre sql installation :\ni compile all .. ok\nbut when i want to run postmaster or anything else it say : no db dir :\ndata/../template1 ...\ni have no data dir in my postgre dinstalled dir ....\nhow can i have this dir with all into ?\n\n\n", "msg_date": "Wed, 12 Jan 2000 09:52:06 +0100", "msg_from": "\"Netra systems\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql installation" }, { "msg_contents": "\"Netra systems\" <[email protected]> writes:\n\n> hello,\n> i have pb with the postgre sql installation :\n> i compile all .. ok\n> but when i want to run postmaster or anything else it say : no db dir :\n> data/../template1 ...\n> i have no data dir in my postgre dinstalled dir ....\n> how can i have this dir with all into ?\n\nHave you run initdb? My guess is no.\n\nAlso, when you run initdb, the data directory it uses (I use /var/lib/pgsql,\nwhich I set up by having my .zshrc file export PGDATA=/var/lib/pgsql\nfor my postgres account) needs to be the same as that passed to postmaster\nvia the -D option.\n\nOh, initdb really should be run as the postgres user. Definately not as\nroot! Make sure your permissions are set up ahead of time!\n\n+C\n\n\n\n-- \nHave you signed up to be a bone marrow doner? All it takes is a simple \nblood test, and it can save a life. <http://www.marrow.org>\n\nCory Kempf Macintosh / Unix Consulting & Software Development\[email protected] <http://www.enigami.com/~ckempf/>\n", "msg_date": "Wed, 12 Jan 2000 16:23:35 GMT", "msg_from": "Cory Kempf <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql installation" } ]
[ { "msg_contents": "Hello!\n\n We have been starting a project 3 months ago with postgresql. It\nis a portal site.\n Now we have 12000 users, and every page hit generates an update to\nthis table. This makes it _very_ fragmented after one day (60000\nupdate/day), so we run vacuum hourly (only for this table) and daily\n(for the whole database), and once a week a full backup-restore\nsession in made. The server has 256M memory, and a raid5 disk in it.\n I thought the once-an-hour vacuum would be good, but it is not.\nNormally the \"vaccumdb --table users dbname\" is finished in 10\nseconds, but in heavy load this can took 5-6 minutes, and the web is\nunusable in this period! This is a very big problem. We have the\nusual performance inprovements added (-o -F, more buffers, etc), but\nit seems that sometimes the VACUUM doesn't work properly.\n When vaccum is active, the other postgres processes eats up my cpu\ntime, and _this_ makes the situation wronger!\n We are using postgersql 6.5.2 on a Pentium II 450 machine (RedHat\nLinux 6.0 with security patches).\n The database contains a word-index table which have about 2\nmillion entries. It is not so often updated, so this is not required\nto be vacuumed more frequent than one days.\n\n If we fail to make vacuum in the scheduled period (e.g pg_vlock\nstucked in a crash), the postgres processes usually takes more up\nCPU time than usual.\n\n Sometimes I see this in my top:\n\nPID USER PRI NI SIZE SWAP RSS SHARE STAT LIB %CPU %MEM TIME COMMAND\n31944 postgres 7 0 3916 0 3916 3232 R 0 6.0 1.5 0:00\n/opt/postgres/bin/postgres localhost www kapu idle\n31600 postgres 0 0 3928 0 3928 3232 S 0 5.2 1.5 0:46\n/opt/postgres/bin/postgres localhost www kapu idle\n31982 postgres 0 0 3752 0 3752 3176 S 0 3.4 1.4 0:01\n/opt/postgres/bin/postgres localhost www kapu idle\n\nWhy idle processes eats 6% CPU time? Is it normal?\n\n Do you have any performance-improvement-ideas? We don't want to\nspend lotsa money for a commercial dbms (e.g.Adabas D) only because\nof the vacuum problem. Postgresql has many users, testers, and\nthat's why the support is cannot be compared to any commercial\nproduct (maybe for Oracle, but it is too expensive).\n I am not in the list, so please reply to my personal address also.\n\n Thanks for the help in advance.\n\ndLux\n--\n\"There are two kinds of people, those who do the work and those who\ntake the credit. Try to be in the first group; there is less\ncompetiton there.\"\n", "msg_date": "Wed, 12 Jan 2000 14:19:21 +0100", "msg_from": "dLux <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL performance problems: heavy load " } ]
[ { "msg_contents": "I've been send a patch to the parser that changes the FETCH statement to not\naccept an empty portal name anymore and and allows FETCH without IN/FROM.\n\nFirst of all I really like to add this to ECPG since the different FETCH\nsyntax is a major compatibility problem. But I do not like to have ECPG's\nparser accept the statement while the backend does not. Since this is not a\nstandard feature I wonder what others think about it.\n\nMy point of view is that I'd like to have as much compatibility as possible.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Wed, 12 Jan 2000 16:12:16 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "FETCH without FROM/IN" }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n> I've been send a patch to the parser that changes the FETCH statement to not\n> accept an empty portal name anymore and and allows FETCH without IN/FROM.\n> First of all I really like to add this to ECPG since the different FETCH\n> syntax is a major compatibility problem. But I do not like to have ECPG's\n> parser accept the statement while the backend does not. Since this is not a\n> standard feature I wonder what others think about it.\n\nIt looks to me like the backend grammar *does* accept FETCH without\nIN/FROM cursor. Which seems pretty bizarre --- I don't understand how\nit makes sense to omit a cursor name from FETCH.\n\nLooking at the SQL92 spec, it seems we are mighty far away from any\ndefensible reading of the spec :-(. The spec says\n\n <fetch statement> ::=\n FETCH [ [ <fetch orientation> ] FROM ]\n <cursor name> INTO <fetch target list>\n\n <fetch orientation> ::=\n NEXT\n | PRIOR\n | FIRST\n | LAST\n | { ABSOLUTE | RELATIVE } <simple value specification>\n\n <fetch target list> ::=\n <target specification> [ { <comma> <target specification> }... ]\n\nwhereas gram.y has\n\nFetchStmt: FETCH opt_direction fetch_how_many opt_portal_name\n | MOVE opt_direction fetch_how_many opt_portal_name\n ;\n\nopt_direction: FORWARD\n | BACKWARD\n | RELATIVE\n | ABSOLUTE\n | /*EMPTY*/\n ;\n\nfetch_how_many: Iconst\n | '-' Iconst\n | ALL\n | NEXT\n | PRIOR\n | /*EMPTY*/\n ;\n\nopt_portal_name: IN name\n | FROM name\n | /*EMPTY*/\n ;\n\nAre we compatible with anything at all???\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 11:09:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FETCH without FROM/IN " }, { "msg_contents": "> Looking at the SQL92 spec, it seems we are mighty far away from any\n> defensible reading of the spec :-(...\n> Are we compatible with anything at all???\n\nAlthough not rigorously compatible, it appears that we do allow\ncompatible syntax:\n\nFETCH 4 FROM t1;\nFETCH NEXT FROM t1;\n\nBut afaik our cursor behavior does not currently allow supporting\n\nFETCH FIRST FROM t1; -- cursor can't be positioned to first/last\nFETCH ABSOLUTE 4 FROM t1; -- not sure about this one...\nFETCH RELATIVE 4 FROM t1; -- this could be a MOVE/FETCH combination?\n\nso we, uh, don't support it (yet). \n\nI'd suggest definitely supporting all SQL92 syntax that the cursor can\nmanage, and also supporting the existing Postgres behaviors (which may\nonly be a simple subset). 
If we have just *alternate* syntax for the\nsame thing, then v7.0 would be a good time to straighten it up.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 12 Jan 2000 16:43:07 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FETCH without FROM/IN" }, { "msg_contents": "On Wed, Jan 12, 2000 at 11:09:11AM -0500, Tom Lane wrote:\n> It looks to me like the backend grammar *does* accept FETCH without\n> IN/FROM cursor. Which seems pretty bizarre --- I don't understand how\n> it makes sense to omit a cursor name from FETCH.\n\nYes, it does accept if NO portal name is given. This is corrected by the\npatch. But what I wanted to talk about is the IN/FROM keyword. \n\n> <fetch statement> ::=\n> FETCH [ [ <fetch orientation> ] FROM ]\n> <cursor name> INTO <fetch target list>\n\nTo me this seems to say that FROM is just optional. Okay, if I make it\noptional in our parser?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Wed, 12 Jan 2000 17:53:26 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FETCH without FROM/IN" }, { "msg_contents": "On Wed, Jan 12, 2000 at 04:43:07PM +0000, Thomas Lockhart wrote:\n> FETCH RELATIVE 4 FROM t1; -- this could be a MOVE/FETCH combination?\n> \n> so we, uh, don't support it (yet). \n\nHow about\n\nFETCH t1;?\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Wed, 12 Jan 2000 17:56:37 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FETCH without FROM/IN" }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n> <fetch statement> ::=\n> FETCH [ [ <fetch orientation> ] FROM ]\n> <cursor name> INTO <fetch target list>\n\n> To me this seems to say that FROM is just optional. Okay, if I make it\n> optional in our parser?\n\nCareful --- notice that FROM is only optional if you *also* omit all the\npreceding optional clauses. Otherwise there will be a reduce conflict\nthat you could only resolve by removing all of FETCH's secondary\nkeywords from the ColId list. I don't think that would be an acceptable\ntradeoff.\n\nI think, though, that you could make our syntax work like\n\tFETCH [ opt_direction fetch_how_many FROM/IN ] portal_name\nwithout conflicts. That'd be good since it'd be more like SQL92.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 20:13:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FETCH without FROM/IN " }, { "msg_contents": "On Wed, Jan 12, 2000 at 08:13:02PM -0500, Tom Lane wrote:\n> I think, though, that you could make our syntax work like\n> \tFETCH [ opt_direction fetch_how_many FROM/IN ] portal_name\n> without conflicts. That'd be good since it'd be more like SQL92.\n\nYes. I just read the patch I got from Rene in detail and it seems to\nimplement:\n\nFETCH [ <direction> [ <fetch_how_many> ]] [ FROM/IN ] portal_name;\n\nBoth direction and fetch_how_many are no longer optional in that they could\nbe replaced by an empty string I wonder if this is correct. It would mean\nthat we have to specify an amount resp. 
all everytime we do give a\ndirection.\n\nHowever, I think it should be possible to make it:\n\nFETCH [ <direction> ][ <fetch_how_many> ] [ FROM/IN ] portal_name;\n\nThis seems better, isn't it?\n\nMichael\n- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Thu, 13 Jan 2000 08:48:16 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FETCH without FROM/IN" }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n> On Wed, Jan 12, 2000 at 08:13:02PM -0500, Tom Lane wrote:\n>> I think, though, that you could make our syntax work like\n>> FETCH [ opt_direction fetch_how_many FROM/IN ] portal_name\n>> without conflicts. That'd be good since it'd be more like SQL92.\n\nNote: I was assuming the same definitions of 'opt_direction' and\n'fetch_how_many' as are in the current gram.y; namely, they can\nexpand to either an option phrase or empty. So what I was really\nsaying is\n FETCH [ [ direction ] [ how_many ] FROM/IN ] portal_name\n\n> Yes. I just read the patch I got from Rene in detail and it seems to\n> implement:\n\n> FETCH [ <direction> [ <fetch_how_many> ]] [ FROM/IN ] portal_name;\n\nCertainly we currently accept a how_many clause without a preceding\ndirection, so if the patch removes that possibility then it's wrong.\n\n> However, I think it should be possible to make it:\n\n> FETCH [ <direction> ][ <fetch_how_many> ] [ FROM/IN ] portal_name;\n\n> This seems better, isn't it?\n\nIf you do it like that (ie, the portal name is now required), I *think*\nit will work without shift-reduce conflicts in the current grammar,\nbut we may regret it later when we try to do more of SQL92. I would\nrecommend sticking to an SQL92-like syntax, in which FROM/IN is not\noptional if direction and/or how_many appear.\n\nThe reason I'm concerned about this is that all of the direction and\nhowmany keywords are considered valid ColIds (and if we take them out\nof the ColIds list, we risk breaking databases that work at present).\nThat means that the parser has some difficulty in figuring out whether\nan apparent keyword is really a keyword, or a portal name that happens\nto be the same as a keyword. For example, consider\n\n\tFETCH NEXT;\n\nIf both FROM and portal_name were optional, this statement would\nactually be ambiguous: is it FETCH NEXT from the default portal,\nor FETCH with default options from a cursor named NEXT?\n\nIn the syntax you are proposing, this statement is valid and not\nambiguous --- NEXT must be a cursor name --- but the only way an\nLR(1) parser can figure that out is to look ahead one token to see\nthat semicolon comes next.\n\nWhat I'm concerned about is that SQL92 allows other options *after*\nthe cursor name, and we may someday want to support those. We could\neasily find that the grammar is no longer LR(1) (ie, it takes more than\none token lookahead to decide whether we have the portal name or not);\nand then we've got trouble. 
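A concrete session makes the ambiguity visible (hypothetical, but legal, since NEXT is an acceptable ColId and so can name a cursor):

DECLARE next CURSOR FOR SELECT * FROM tab;

FETCH NEXT;             -- direction NEXT on a default portal, or a plain
                        -- FETCH from the cursor named "next"?  Only the
                        -- trailing ';' settles it.
FETCH NEXT FROM next;   -- unambiguous: direction, then cursor name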
If FROM is required after FETCH options\nthen this risk is much reduced.\n\nAnother reason for requiring FROM/IN is that we will not be able to\nexpand the \"FETCH n\" syntax for how_many to handle a more general\nexpression (as opposed to a bare signed-integer constant, as we have\nnow) unless there is a delimiter between it and the portal name.\nWe still have unresolved headaches in the grammar that come from the\nlack of delimiters around constant expressions in column DEFAULT\noptions; let's not add another source of the same kind of trouble.\n\nIn short, the syntax\n\n FETCH [ [ direction ] [ how_many ] FROM/IN ] portal_name\n\nposes much less risk for future expansion than the syntax\n\n FETCH [ direction ] [ how_many ] [ FROM/IN ] portal_name\n\nand that's why I think we'd be safer to stick with the former.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2000 11:34:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FETCH without FROM/IN " }, { "msg_contents": "On Thu, Jan 13, 2000 at 11:34:31AM -0500, Tom Lane wrote:\n> FETCH [ [ direction ] [ how_many ] FROM/IN ] portal_name\n\nLooks good to me.\n\n> Certainly we currently accept a how_many clause without a preceding\n> direction, so if the patch removes that possibility then it's wrong.\n\nSure. Rene already send me a fix.\n\n> If you do it like that (ie, the portal name is now required), I *think*\n> it will work without shift-reduce conflicts in the current grammar,\n> but we may regret it later when we try to do more of SQL92. I would\n> recommend sticking to an SQL92-like syntax, in which FROM/IN is not\n> optional if direction and/or how_many appear.\n> \n> The reason I'm concerned about this is that all of the direction and\n> howmany keywords are considered valid ColIds (and if we take them out\n> of the ColIds list, we risk breaking databases that work at present).\n> That means that the parser has some difficulty in figuring out whether\n> an apparent keyword is really a keyword, or a portal name that happens\n> to be the same as a keyword. For example, consider\n> \n> \tFETCH NEXT;\n> \n> If both FROM and portal_name were optional, this statement would\n> actually be ambiguous: is it FETCH NEXT from the default portal,\n\nDo we have a default portal?\n\n> or FETCH with default options from a cursor named NEXT?\n> ... \n> What I'm concerned about is that SQL92 allows other options *after*\n> the cursor name, and we may someday want to support those. We could\n> easily find that the grammar is no longer LR(1) (ie, it takes more than\n> one token lookahead to decide whether we have the portal name or not);\n> and then we've got trouble. If FROM is required after FETCH options\n> then this risk is much reduced.\n\nYes, I completely agree on this one.\n\nI will try to change the syntax to what you proposed.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 
61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Fri, 14 Jan 2000 07:47:17 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FETCH without FROM/IN" }, { "msg_contents": "Michael Meskes <[email protected]> writes:\n>> FETCH NEXT;\n>> \n>> If both FROM and portal_name were optional, this statement would\n>> actually be ambiguous: is it FETCH NEXT from the default portal,\n\n> Do we have a default portal?\n\nDarn if I know, but the current gram.y thinks so. If I try it\nwithout any preparation, I get:\n\nregression=# fetch;\nNOTICE: PerformPortalFetch: blank portal unsupported\nFETCH\n\nbut maybe with some magic DECLARE beforehand, it'd work?\nAnyone know?\n\nSince the SQL92 spec clearly requires a cursor name to be provided,\nI'd be willing to see us remove the option of defaulting the cursor\nname. Is there anyone out there who knows what it does and wants\nto argue we should keep it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Jan 2000 02:38:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] FETCH without FROM/IN " } ]
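To summarize where the thread lands, the recommended form FETCH [ [ direction ] [ how_many ] FROM/IN ] portal_name sorts statements like this (cursor name invented for illustration):

FETCH mycursor;                 -- accepted: bare cursor name
FETCH 4 FROM mycursor;          -- accepted: how_many, delimiter, name
FETCH BACKWARD 2 IN mycursor;   -- accepted: direction as well

FETCH;                          -- rejected: no more default portal
FETCH BACKWARD 2 mycursor;      -- rejected: options without FROM/IN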
[ { "msg_contents": "I have updated the TODO list to mark all the items that are completed\nfor 7.0.\n\nAre there any additional ones? Are there some names I have forgotten to\nattribute to items?\n\nLet me know.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 10:41:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "TODO list updated" }, { "msg_contents": "On 2000-01-12, Bruce Momjian mentioned:\n\n> I have updated the TODO list to mark all the items that are completed\n> for 7.0.\n> \n\nWow, we're at 32% done!\n\n> Are there any additional ones? Are there some names I have forgotten to\n> attribute to items?\n\n* Better interface for adding to pg_group\n\nIt's de facto done.\n\n* Make postgres user have a password by default\n\nThere's an initdb switch.\n\n* User who can create databases can modify pg_database table\n\nis on the hit list. I believe the reason this had to be allowed is\ncreatedb() using an actual insert statement to do its thing, which it\nwon't do any longer once I get all my code together. Some please correct\nme if I'm wrong, otherwise I'll yank that code. (Yes, there is code that\nspecifically _allows_ this.)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n", "msg_date": "Thu, 13 Jan 2000 00:29:45 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> On 2000-01-12, Bruce Momjian mentioned:\n> \n> > I have updated the TODO list to mark all the items that are completed\n> > for 7.0.\n> > \n> \n> Wow, we're at 32% done!\n\nActually, there are tons of _done_ items on the list. I mentioned only\nthe big undone ones.\n\n> \n> > Are there any additional ones? Are there some names I have forgotten to\n> > attribute to items?\n> \n> * Better interface for adding to pg_group\n> \n> It's de facto done.\n\nGreat.\n\n> \n> * Make postgres user have a password by default\n> \n> There's an initdb switch.\n\nOK, now we have to decide if we are going to require this be done as\npart of initdb. I am inclined to say the user _has_ to be _prompted_ in\na secure matter for the password as part of initdb. Have a command-line\nswitch for the password is not secure, IMHO, though it is better than\nnothing.\n\nLet's get people's opinions on this, and we can mark it as done.\n\n> \n> * User who can create databases can modify pg_database table\n> \n> is on the hit list. I believe the reason this had to be allowed is\n> createdb() using an actual insert statement to do its thing, which it\n> won't do any longer once I get all my code together. Some please correct\n> me if I'm wrong, otherwise I'll yank that code. (Yes, there is code that\n> specifically _allows_ this.)\n\nGreat. Also dropping a database required this too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 18:50:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "On Wed, 12 Jan 2000, Bruce Momjian wrote:\n\n> OK, now we have to decide if we are going to require this be done as\n> part of initdb. I am inclined to say the user _has_ to be _prompted_ in\n> a secure matter for the password as part of initdb. Have a command-line\n> switch for the password is not secure, IMHO, though it is better than\n> nothing.\n\nIf we do a 'CREATE USER <user> WITH PASSWORD <pass>', its no more secure\nthen using a command line switch for password ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 12 Jan 2000 21:02:38 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> On Wed, 12 Jan 2000, Bruce Momjian wrote:\n> \n> > OK, now we have to decide if we are going to require this be done as\n> > part of initdb. I am inclined to say the user _has_ to be _prompted_ in\n> > a secure matter for the password as part of initdb. Have a command-line\n> > switch for the password is not secure, IMHO, though it is better than\n> > nothing.\n> \n> If we do a 'CREATE USER <user> WITH PASSWORD <pass>', its no more secure\n> then using a command line switch for password ... \n\nWhy is that? ps shows command args, righ?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 20:12:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n\n>\n> I have updated the TODO list to mark all the items that are completed\n> for 7.0.\n>\n> Are there any additional ones? Are there some names I have forgotten to\n> attribute to items?\n>\n> Let me know.\n>\n\nHmmm,who solved ????\n* -spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n\nAnd I have felt that the followings are almost same.\n* Allow LIMIT ability on single-table queries that have no ORDER BY to use\n a matching index [limit]\n* Improve LIMIT processing by using index to limit rows processed [limit]\n* Have optimizer take LIMIT into account when considering index scans\n[limit]\n\nAnd Isn't it preferable to omit 'in ORDER BY' from\n* Use indexes in ORDER BY for restrictive data sets, min(), max()\n?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Thu, 13 Jan 2000 10:16:15 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] TODO list updated" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Wed, 12 Jan 2000, Bruce Momjian wrote:\n>> OK, now we have to decide if we are going to require this be done as\n>> part of initdb. I am inclined to say the user _has_ to be _prompted_ in\n>> a secure matter for the password as part of initdb. 
Have a command-line\n>> switch for the password is not secure, IMHO, though it is better than\n>> nothing.\n\n> If we do a 'CREATE USER <user> WITH PASSWORD <pass>', its no more secure\n> then using a command line switch for password ... \n\nYes it is --- if you have a shell script that is invoked by\n\tinitdb --password pgsqlPassword ...\nthen anyone else on the same machine who happens to be doing a \"ps\"\nmeanwhile will see your password.\n\nNote that if initdb is a shell script, then it still has to be very\ncareful what it does with the password; put it in any command line\nfor a program invoked by the script, and the leak is back with you.\nA C-program version of initdb would be a lot safer. But in theory you\ncan pass the password to the backend without exposing it in any command\nline (put it in a data file instead, say).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 20:26:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "> Hmmm,who solved ????\n> * -spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n\nI thought you or Tatsuo fixed that. I will remove the mark.\n\n> \n> And I have felt that the followings are almost same.\n> * Allow LIMIT ability on single-table queries that have no ORDER BY to use\n> a matching index [limit]\n> * Improve LIMIT processing by using index to limit rows processed [limit]\n> * Have optimizer take LIMIT into account when considering index scans\n> [limit]\n> \n> And Isn't it preferable to omit 'in ORDER BY' from\n> * Use indexes in ORDER BY for restrictive data sets, min(), max()\n> ?\n\nI have now made it two items:\n\n\t* Use indexes in ORDER BY for restrictive data sets \n\t* Use indexes in ORDER BY for min(), max()\n\nWe currently do not use indexes to handle ORDER BY because it is slower,\nbut for queries returning only a few rows, we could use the index and\nskip the ORDER BY. Not sure if this is done yet, or if it is important.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 20:52:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "On Wed, 12 Jan 2000, Bruce Momjian wrote:\n\n> > On Wed, 12 Jan 2000, Bruce Momjian wrote:\n> > \n> > > OK, now we have to decide if we are going to require this be done as\n> > > part of initdb. I am inclined to say the user _has_ to be _prompted_ in\n> > > a secure matter for the password as part of initdb. Have a command-line\n> > > switch for the password is not secure, IMHO, though it is better than\n> > > nothing.\n> > \n> > If we do a 'CREATE USER <user> WITH PASSWORD <pass>', its no more secure\n> > then using a command line switch for password ... \n> \n> Why is that? ps shows command args, righ?\n\nPoint. You won me over :)\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 12 Jan 2000 21:54:02 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n>\n> > Hmmm,who solved ????\n> > * -spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n>\n> I thought you or Tatsuo fixed that. I will remove the mark.\n>\n\nI have had a fix for it for 3 months but not committed because I don't\nknow how WAL would change it.\nOK I would commit it after some checking.\n\n> >\n> > And I have felt that the followings are almost same.\n> > * Allow LIMIT ability on single-table queries that have no\n> ORDER BY to use\n> > a matching index [limit]\n> > * Improve LIMIT processing by using index to limit rows\n> processed [limit]\n> > * Have optimizer take LIMIT into account when considering index scans\n> > [limit]\n> >\n> > And Isn't it preferable to omit 'in ORDER BY' from\n> > * Use indexes in ORDER BY for restrictive data sets, min(), max()\n> > ?\n>\n> I have now made it two items:\n>\n> \t* Use indexes in ORDER BY for restrictive data sets\n> \t* Use indexes in ORDER BY for min(), max()\n>\n> We currently do not use indexes to handle ORDER BY because it is slower,\n> but for queries returning only a few rows, we could use the index and\n> skip the ORDER BY. Not sure if this is done yet, or if it is important.\n>\n\nTom has changed to take IndexScan into account even when no qual exists.\n* -Allow optimizer to prefer plans that match ORDER BY(Tom)\nCurrently optimizer is too eager to use index scan. He is planning to take\nlimit into account AFAIK, He has mentioned it many times and I have been\nlooking forward to his change.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Thu, 13 Jan 2000 11:19:00 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] TODO list updated" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> >\n> > > Hmmm,who solved ????\n> > > * -spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n> >\n> > I thought you or Tatsuo fixed that. I will remove the mark.\n> >\n> \n> I have had a fix for it for 3 months but not committed because I don't\n> know how WAL would change it.\n> OK I would commit it after some checking.\n\nAh, so my memory isn't that bad. WAL is not going into 7.0, so it\nshould be fine.\n\n> > We currently do not use indexes to handle ORDER BY because it is slower,\n> > but for queries returning only a few rows, we could use the index and\n> > skip the ORDER BY. Not sure if this is done yet, or if it is important.\n> >\n> \n> Tom has changed to take IndexScan into account even when no qual exists.\n> * -Allow optimizer to prefer plans that match ORDER BY(Tom)\n> Currently optimizer is too eager to use index scan. He is planning to take\n> limit into account AFAIK, He has mentioned it many times and I have been\n> looking forward to his change.\n\nOK, TODO updated.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 21:34:15 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "TODO item comments:\n\n* -SELECT foo UNION SELECT foo is incorrectly simplified to SELECT foo\n\nThe above is NOT done.\n\n* prevent primary key that exceeds max index columns [primary]\n\nThe above is done as of yesterday.\n\n* Fix memory leak for expressions[memory](Tom?) \n\nThis isn't going to happen for 7.0, looks like :-(\n\n* -Allow compression of large fields or a compressed field type\n\nThis has to be marked not-done again, unless Jan manages to squeeze\nit back in via the toaster before Feb.\n\n* Pull requested data directly from indexes, bypassing heap data\n\nI doubt this is ever going to happen --- to make it possible, we'd\nhave to store tuple-commit status in index entries as well as in the\ntuples themselves. That would be a substantial space and speed penalty;\nis the potential gain really worth it?\n\n* -Convert function(constant) into a constant for index use(Tom)\n\nBernard Frankpitt should get the bulk of the credit for that one, not me.\n\n* Allow LIMIT ability on single-table queries that have no ORDER BY to use\n a matching index [limit]\n* Improve LIMIT processing by using index to limit rows processed [limit]\n* Have optimizer take LIMIT into account when considering index scans [limit]\n\nI agree with Hiroshi that these entries are redundant.\n\n* -Make index creation use psort code, because it is now faster(Vadim)\n\nI did that, not Vadim.\n\n* -elog() flushes cache, try invalidating just entries from current xact,\n perhaps using invalidation cache\n\nI don't think this is done?\n\n* -Process const = const parts of OR clause in separate pass(Tom)\n\nAgain, mostly Frankpitt.\n\n\nSome other things I did that aren't mentioned in TODO, but perhaps\ndeserve to be shown as 7.0 fixes:\n\n* Interlock to prevent DROP DATABASE on a database with running backends\n\n* Buffer reference counting bugfixes\n\n* Fix libpq bug that causes it to drop backend error message sent\n just before connection closure (ie, any FATAL error message :-().\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 21:41:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> We currently do not use indexes to handle ORDER BY because it is slower,\n\nEr, actually, we *do* use indexes for ORDER BY currently:\n\nregression=# explain select * from tenk1 order by unique1;\nNOTICE: QUERY PLAN:\nIndex Scan using tenk1_unique1 on tenk1 (cost=760.00 rows=10000 width=148)\n\nIf you start psql with PGOPTIONS=\"-fi\" you can see that the optimizer\nbelieves an explicit sort would be much slower:\n\nregression=# explain select * from tenk1 order by unique1;\nNOTICE: QUERY PLAN:\nSort (cost=3233.91 rows=10000 width=148)\n -> Seq Scan on tenk1 (cost=563.00 rows=10000 width=148)\n\nbut (at least on my machine) the explicit sort is marginally faster.\nEvidently, the cost estimate for an explicit sort is *way* too high.\n\nI have been poking at this and am currently thinking that the CPU-vs-\ndisk scaling constants (_cpu_page_weight_ and cpu_index_page_weight_)\nmay be drastically off for modern hardware. 
This is one of the\noptimizer issues that I'm hoping to resolve for 7.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 21:55:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "> TODO item comments:\n> \n> * -SELECT foo UNION SELECT foo is incorrectly simplified to SELECT foo\n> \n> The above is NOT done.\n\nFixed.\n\n> \n> * prevent primary key that exceeds max index columns [primary]\n> \n> The above is done as of yesterday.\n\nOK.\n\n> \n> * Fix memory leak for expressions[memory](Tom?) \n> \n> This isn't going to happen for 7.0, looks like :-(\n\nI figured.\n\n> \n> * -Allow compression of large fields or a compressed field type\n> \n> This has to be marked not-done again, unless Jan manages to squeeze\n> it back in via the toaster before Feb.\n\nI was optimistic. I will take it off mark.\n\n> \n> * Pull requested data directly from indexes, bypassing heap data\n> \n> I doubt this is ever going to happen --- to make it possible, we'd\n> have to store tuple-commit status in index entries as well as in the\n> tuples themselves. That would be a substantial space and speed penalty;\n> is the potential gain really worth it?\n\nIngres does this. Not sure if it worth it. Comments?\n\n> \n> * -Convert function(constant) into a constant for index use(Tom)\n> \n> Bernard Frankpitt should get the bulk of the credit for that one, not me.\n\nUpdated.\n\n> \n> * Allow LIMIT ability on single-table queries that have no ORDER BY to use\n> a matching index [limit]\n> * Improve LIMIT processing by using index to limit rows processed [limit]\n> * Have optimizer take LIMIT into account when considering index scans [limit]\n> \n> I agree with Hiroshi that these entries are redundant.\n\nOnly one remains now.\n\n> \n> * -Make index creation use psort code, because it is now faster(Vadim)\n> \n> I did that, not Vadim.\n\nVadim had claimed it. You did it. Updated.\n\n> \n> * -elog() flushes cache, try invalidating just entries from current xact,\n> perhaps using invalidation cache\n> \n> I don't think this is done?\n\nI thought we fixed this. Hiroshi? I could swear this came in the past\nfew weeks.\n\n> \n> * -Process const = const parts of OR clause in separate pass(Tom)\n> \n> Again, mostly Frankpitt.\n\nUpdated.\n\n> \n> \n> Some other things I did that aren't mentioned in TODO, but perhaps\n> deserve to be shown as 7.0 fixes:\n> \n> * Interlock to prevent DROP DATABASE on a database with running backends\n> \n> * Buffer reference counting bugfixes\n> \n> * Fix libpq bug that causes it to drop backend error message sent\n> just before connection closure (ie, any FATAL error message :-().\n\nAll added to reliability section.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 22:01:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > We currently do not use indexes to handle ORDER BY because it is slower,\n> \n> Er, actually, we *do* use indexes for ORDER BY currently:\n> \n> regression=# explain select * from tenk1 order by unique1;\n> NOTICE: QUERY PLAN:\n> Index Scan using tenk1_unique1 on tenk1 (cost=760.00 rows=10000 width=148)\n> \n> If you start psql with PGOPTIONS=\"-fi\" you can see that the optimizer\n> believes an explicit sort would be much slower:\n> \n> regression=# explain select * from tenk1 order by unique1;\n> NOTICE: QUERY PLAN:\n> Sort (cost=3233.91 rows=10000 width=148)\n> -> Seq Scan on tenk1 (cost=563.00 rows=10000 width=148)\n> \n> but (at least on my machine) the explicit sort is marginally faster.\n> Evidently, the cost estimate for an explicit sort is *way* too high.\n\nBut it shouldn't be using the ORDER BY, except when the number of rows\nprocessed is less than the full table, right?\n\n> \n> I have been poking at this and am currently thinking that the CPU-vs-\n> disk scaling constants (_cpu_page_weight_ and cpu_index_page_weight_)\n> may be drastically off for modern hardware. This is one of the\n> optimizer issues that I'm hoping to resolve for 7.0.\n\nMakes sense. CPU's have gotten much faster than disk.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 22:02:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> \n> TODO item comments:\n> \n> * Pull requested data directly from indexes, bypassing heap data\n> \n> I doubt this is ever going to happen --- to make it possible, we'd\n> have to store tuple-commit status in index entries as well as in the\n> tuples themselves. That would be a substantial space and speed penalty;\n> is the potential gain really worth it?\n>\n\nI agree with Tom. We could omit rows using indexes but cound't\npull data from indexes without time qualification of heap tuples now.\n \n> * -elog() flushes cache, try invalidating just entries from current xact,\n> perhaps using invalidation cache\n> \n> I don't think this is done?\n>\n\nIf I recognize correctly this item,this was fixed by my recent changes\nfor cache invalidation though I had changed it without knowing this item.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n\n", "msg_date": "Thu, 13 Jan 2000 12:14:54 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] TODO list updated " }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> but (at least on my machine) the explicit sort is marginally faster.\n>> Evidently, the cost estimate for an explicit sort is *way* too high.\n\n> But it shouldn't be using the ORDER BY,\n\nRight, if the cost estimates were in line with reality it would be\nchoosing the explicit sort.\n\n> ... 
except when the number of rows\n> processed is less than the full table, right?\n\nNow if there were *also* a LIMIT clause then the tradeoffs change again\n--- the index scan wins for a small LIMIT because of its much lower\nstartup cost. But the optimizer knows nothing of this and will still\nestimate on the basis that all of the tuples are going to be processed.\nAs Hiroshi just remarked, we really need to teach the optimizer about\nLIMIT. Another thing I'm hoping to get done before 7.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 22:19:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Tom Lane\n> > \n> > TODO item comments:\n> > \n> > * Pull requested data directly from indexes, bypassing heap data\n> > \n> > I doubt this is ever going to happen --- to make it possible, we'd\n> > have to store tuple-commit status in index entries as well as in the\n> > tuples themselves. That would be a substantial space and speed penalty;\n> > is the potential gain really worth it?\n> >\n> \n> I agree with Tom. We could omit rows using indexes but cound't\n> pull data from indexes without time qualification of heap tuples now.\n\nRemoved.\n\n> \n> > * -elog() flushes cache, try invalidating just entries from current xact,\n> > perhaps using invalidation cache\n> > \n> > I don't think this is done?\n> >\n> \n> If I recognize correctly this item,this was fixed by my recent changes\n> for cache invalidation though I had changed it without knowing this item.\n\nGreat. I thought so. I remember some CVS messages saying this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 22:34:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "On Wed, 12 Jan 2000, Bruce Momjian wrote:\n\n> > Wow, we're at 32% done!\n> \n> Actually, there are tons of _done_ items on the list. I mentioned only\n> the big undone ones.\n\nI just do a\necho $(( `grep '^* -' TODO | wc -l` * 100 / `grep '^*' TODO | wc -l` ))\n<grin>\n\n> > * Make postgres user have a password by default\n> > \n> > There's an initdb switch.\n> \n> OK, now we have to decide if we are going to require this be done as\n> part of initdb. I am inclined to say the user _has_ to be _prompted_ in\n> a secure matter for the password as part of initdb. Have a command-line\n> switch for the password is not secure, IMHO, though it is better than\n> nothing.\n\nOkay, a prompt it shall be. 
But not mandatory, since in my environment we\ndon't even use passwords.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 12:12:27 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "On Wed, 12 Jan 2000, Tom Lane wrote:\n\n> Note that if initdb is a shell script, then it still has to be very\n> careful what it does with the password; put it in any command line\n> for a program invoked by the script, and the leak is back with you.\n> A C-program version of initdb would be a lot safer. But in theory you\n> can pass the password to the backend without exposing it in any command\n> line (put it in a data file instead, say).\n\nWhat is does is some sort of sed s/genericpassword/realpassword/ so I\nguess this is not completely safe either. But something like this you'd\nhave to do. Can I count you in on beating Bruce into submission for an\ninitdb in C? ;)\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 12:16:09 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "On Wed, 12 Jan 2000, The Hermit Hacker wrote:\n\n> On Wed, 12 Jan 2000, Bruce Momjian wrote:\n> \n> > > If we do a 'CREATE USER <user> WITH PASSWORD <pass>', its no more secure\n> > > then using a command line switch for password ... \n> > \n> > Why is that? ps shows command args, righ?\n> \n> Point. You won me over :)\n\nBut it doesn't show the complete command line, only SELECT or UPDATE, etc.\nI'm not sure if it also shows create, I haven't been able to simulate\nthat.\n\nWhat's the whole point of access control if you can happily scan your ps\noutput for all selects, inserts, updates, etc. going through and keep\nrecord of it?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 12:21:27 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "On Thu, 13 Jan 2000, Peter Eisentraut wrote:\n\n> On Wed, 12 Jan 2000, Tom Lane wrote:\n> \n> > Note that if initdb is a shell script, then it still has to be very\n> > careful what it does with the password; put it in any command line\n> > for a program invoked by the script, and the leak is back with you.\n> > A C-program version of initdb would be a lot safer. But in theory you\n> > can pass the password to the backend without exposing it in any command\n> > line (put it in a data file instead, say).\n> \n> What is does is some sort of sed s/genericpassword/realpassword/ so I\n> guess this is not completely safe either. But something like this you'd\n> have to do. Can I count you in on beating Bruce into submission for an\n> initdb in C? ;)\n\nJust a thought...since its a script, why not put the password into an\nenvironment variable and read it from that? \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 13 Jan 2000 08:40:30 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "On Thu, 13 Jan 2000, Peter Eisentraut wrote:\n\n> On Wed, 12 Jan 2000, The Hermit Hacker wrote:\n> \n> > On Wed, 12 Jan 2000, Bruce Momjian wrote:\n> > \n> > > > If we do a 'CREATE USER <user> WITH PASSWORD <pass>', its no more secure\n> > > > then using a command line switch for password ... \n> > > \n> > > Why is that? ps shows command args, righ?\n> > \n> > Point. You won me over :)\n> \n> But it doesn't show the complete command line, only SELECT or UPDATE, etc.\n> I'm not sure if it also shows create, I haven't been able to simulate\n> that.\n\nNo, that isn't the problem...the problem is that initdb, if you run it\nwith command line arguments, will show up in a ps listing with those\ncommand line arguments...\n\nif you type 'initdb --pgpasswd=passwd' it will show up in pas as exactly\nthat ...\n\nits not the SELECT/UPDATE/etc that we are worried about...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 13 Jan 2000 08:41:48 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "On Thu, 13 Jan 2000, The Hermit Hacker wrote:\n\n> > What is does is some sort of sed s/genericpassword/realpassword/ so I\n> > guess this is not completely safe either. But something like this you'd\n> > have to do. Can I count you in on beating Bruce into submission for an\n> > initdb in C? ;)\n> \n> Just a thought...since its a script, why not put the password into an\n> environment variable and read it from that? \n\nThat won't solve the problem. The password has to be substituted into the\ncatalog template and sed is the way to go for that. I guess it's a long\nshot to worry about that now. 
And option --pwprompt should be relatively\nsafe until initdb is a C program.\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 13:53:50 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "On Thu, 13 Jan 2000, The Hermit Hacker wrote:\n\n> No, that isn't the problem...the problem is that initdb, if you run it\n> with command line arguments, will show up in a ps listing with those\n> command line arguments...\n> \n> if you type 'initdb --pgpasswd=passwd' it will show up in pas as exactly\n> that ...\n\nNot to mention the world readable shell history files which would make\nthis even more convenient ...\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 13:55:08 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> On Wed, 12 Jan 2000, Tom Lane wrote:\n> \n> > Note that if initdb is a shell script, then it still has to be very\n> > careful what it does with the password; put it in any command line\n> > for a program invoked by the script, and the leak is back with you.\n> > A C-program version of initdb would be a lot safer. But in theory you\n> > can pass the password to the backend without exposing it in any command\n> > line (put it in a data file instead, say).\n> \n> What is does is some sort of sed s/genericpassword/realpassword/ so I\n> guess this is not completely safe either. But something like this you'd\n> have to do. Can I count you in on beating Bruce into submission for an\n> initdb in C? ;)\n\nI will be responsible to make sure the password doesn't get into a\ncommand as an argument. sed has a -f command that will take it's regex\ninput from a file. That is the solution, though the umask has to be set\nto make sure the temp file is not readable by anyone else.\n\nMost OS vendors use shell scripts for this type of thing because it\ndoesn't have to be fast, and it is changed often.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jan 2000 08:15:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> On Wed, 12 Jan 2000, The Hermit Hacker wrote:\n> > Point. You won me over :)\n> \n> But it doesn't show the complete command line, only SELECT or UPDATE, etc.\n> I'm not sure if it also shows create, I haven't been able to simulate\n> that.\n> \n> What's the whole point of access control if you can happily scan your ps\n> output for all selects, inserts, updates, etc. going through and keep\n> record of it?\n\nIt only shows the command, not the table involved or the parameters.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jan 2000 08:16:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> On Thu, 13 Jan 2000, The Hermit Hacker wrote:\n> \n> > No, that isn't the problem...the problem is that initdb, if you run it\n> > with command line arguments, will show up in a ps listing with those\n> > command line arguments...\n> > \n> > if you type 'initdb --pgpasswd=passwd' it will show up in pas as exactly\n> > that ...\n> \n> Not to mention the world readable shell history files which would make\n> this even more convenient ...\n\nMan, why is my bash shell history world-readable. Who's idea was that?\n\nAlso, Peter, I got you the sed -f option to use files as sed\nparameters, which gets us out of this problem. Another day, another\nescape from recoding it in C... :-)\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jan 2000 08:22:47 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "On Thu, 13 Jan 2000, Bruce Momjian wrote:\n\n> > What is does is some sort of sed s/genericpassword/realpassword/ so I\n> > guess this is not completely safe either. But something like this you'd\n> > have to do. Can I count you in on beating Bruce into submission for an\n> > initdb in C? ;)\n> \n> I will be responsible to make sure the password doesn't get into a\n> command as an argument. sed has a -f command that will take it's regex\n> input from a file. That is the solution, though the umask has to be set\n> to make sure the temp file is not readable by anyone else.\n\nThat's one more file to find and to erase! Sounds very ugly to me. Better\nleave off this option altogether and user alter user. Can end users\ncomment on this at all?\n\n> Most OS vendors use shell scripts for this type of thing because it\n> doesn't have to be fast, and it is changed often.\n\nSo we can do it better! Also besides actual code changes (as recently),\ninitdb itself hardly ever changes. When I get some time I'll develop a\nprototype to convince you. :)\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 14:30:35 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> On Thu, 13 Jan 2000, Bruce Momjian wrote:\n> \n> > > What is does is some sort of sed s/genericpassword/realpassword/ so I\n> > > guess this is not completely safe either. But something like this you'd\n> > > have to do. Can I count you in on beating Bruce into submission for an\n> > > initdb in C? ;)\n> > \n> > I will be responsible to make sure the password doesn't get into a\n> > command as an argument. sed has a -f command that will take it's regex\n> > input from a file. That is the solution, though the umask has to be set\n> > to make sure the temp file is not readable by anyone else.\n> \n> That's one more file to find and to erase! Sounds very ugly to me. Better\n> leave off this option altogether and user alter user. Can end users\n> comment on this at all?\n\nHuh. 
Use trap and have it automatically removed on exit:\n\n\ttrap \"rm -f /tmp/pgpass.$$\" 0 1 2 3 15\n\n> \n> > Most OS vendors use shell scripts for this type of thing because it\n> > doesn't have to be fast, and it is changed often.\n> \n> So we can do it better! Also besides actual code changes (as recently),\n> initdb itself hardly ever changes. When I get some time I'll develop a\n> prototype to convince you. :)\n\nOK.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jan 2000 08:36:01 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "On Thu, 13 Jan 2000, Peter Eisentraut wrote:\n\n> On Thu, 13 Jan 2000, Bruce Momjian wrote:\n> \n> > > What it does is some sort of sed s/genericpassword/realpassword/ so I\n> > > guess this is not completely safe either. But something like this you'd\n> > > have to do. Can I count you in on beating Bruce into submission for an\n> > > initdb in C? ;)\n> > \n> > I will be responsible for making sure the password doesn't get into a\n> > command as an argument. sed has a -f option that will take its regex\n> > input from a file. That is the solution, though the umask has to be set\n> > to make sure the temp file is not readable by anyone else.\n> \n> That's one more file to find and to erase! Sounds very ugly to me. Better\n> leave off this option altogether and use ALTER USER. Can end users\n> comment on this at all?\n> \n> > Most OS vendors use shell scripts for this type of thing because it\n> > doesn't have to be fast, and it is changed often.\n> \n> So we can do it better! Also besides actual code changes (as recently),\n> initdb itself hardly ever changes. When I get some time I'll develop a\n> prototype to convince you. :)\n\nI could be wrong here, but I don't think anyone *really* cares whether it's\nin script or C...just nobody wants to do the coding... :)\n\nI personally think there have been enough solutions to the problem\nprovided that a C version isn't required, but if someone wants to go\nthrough the trouble of doing it (when suitable solutions are present to\nnot require it), who am I to argue?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 13 Jan 2000 09:38:21 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "\n> That's one more file to find and to erase! Sounds very ugly to me. Better\n> leave off this option altogether and use ALTER USER. Can end users\n> comment on this at all?\n\nAs an end user, an initdb in C sounds like the best option. \n\nI don't really like the temp file idea - I have too many temp files\nalready. Nor will the average user immediately understand that the\nenvironment variable should be set without leaving a trace in their\nhistory.\n\nI suppose any of the options could be added to initdb for the novice\nor lazy user. 
If there is no other solution, I'd prefer a note in\ninitdb's output pointing to `psql template1` and `ALTER USER ...`\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n", "msg_date": "Thu, 13 Jan 2000 09:12:12 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I will be responsible for making sure the password doesn't get into a\n> command as an argument. sed has a -f option that will take its regex\n> input from a file. That is the solution, though the umask has to be set\n> to make sure the temp file is not readable by anyone else.\n\nAnother possibility is not to try to 'sed' the password into the initial\ndatabase contents, but to run an ALTER USER command (using a standalone\nbackend) after we've done the initial setup of template1. As long as\nthis is done before a postmaster is started, it's perfectly safe ---\nno one other than the postgres user will have been able to connect to\nthe database yet.\n\nDoing it this way, the password would need to appear in the stdin input\nof that standalone backend, but not anyplace else.\n\nAfter thinking about it a little more, I wonder if I was too optimistic\nto say that an initdb script could transfer the password securely.\nConsider: we can get the password with\n\n\techo \"Please enter password for postgres superuser: \"\n\tread PASSWORD\n\nand now the password is in a shell variable of the shell running initdb,\nand hasn't been exposed anywhere else. So far so good, but now what?\nYou can't securely do\n\n\techo $PASSWORD | backend\n\nor\n\techo $PASSWORD > allegedly-secure-temp-file\n\nor even\n\tbackend <<EOF\n\t\tALTER USER ... PASSWORD $PASSWORD\n\tEOF\n\n(the latter *looks* good, but way too many shells implement\nhere-documents by creating a temp file to put the data in;\ndo you want to trust the shell to make the here-doc secure?)\n\nWhat I am starting to think is that we do need a C program. However,\nit could be very small; it shouldn't try to do all of what initdb does.\nAll it needs to do is fetch the password from stdin and then echo it\nto stdout in an ALTER USER command. 
The invocation in initdb would\nlook something like\n\n\tsecurepassword $SUPERUSERNAME | standalone-backend ...\n\nand the code would be on the order of\n\n\tfprintf(stderr, \"Please enter password for %s: \", argv[1]);\n\tfgets(password, sizeof(password), stdin);\n\tprintf(\"ALTER USER %s PASSWORD '%s'\\n\", argv[1], password);\n\n(Actually, you'd want it to do a few more pushups: turn off tty\nechoing before prompting for password, read password twice and\ncheck it was entered the same both times, retry if not, etc.\nAnother reason that a pure shell script isn't really up to the\njob is that AFAIR it can't easily turn off tty echoing.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2000 10:45:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "> After thinking about it a little more, I wonder if I was too optimistic\n> to say that an initdb script could transfer the password securely.\n> Consider: we can get the password with\n> \n> \techo \"Please enter password for postgres superuser: \"\n> \tread PASSWORD\n> \n> and now the password is in a shell variable of the shell running initdb,\n> and hasn't been exposed anywhere else. So far so good, but now what?\n> You can't securely do\n> \n> \techo $PASSWORD | backend\n> \n> or\n> \techo $PASSWORD > allegedly-secure-temp-file\n\nThis is secure. echo is a shell builtin, and does not invoke a separate\nprocess with arguments.\n\n> (Actually, you'd want it to do a few more pushups: turn off tty\n> echoing before prompting for password, read password twice and\n> check it was entered the same both times, retry if not, etc.\n> Another reason that a pure shell script isn't really up to the\n> job is that AFAIR it can't easily turn off tty echoing.)\n\nThat is the part that is hard to do in a shell, except I think there are\nstty settings for this.\n\nI just did:\n\t\n\tstty -echo\n\tread PASS \n\tstty echo\n\techo $PASS\n\nand it worked perfectly:\n\n\t#$ /bjm/x \n\t\t\t<- typed test here\n\ttest \n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jan 2000 10:57:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "On Thu, 13 Jan 2000, Tom Lane wrote:\n\n> What I am starting to think is that we do need a C program. However,\n> it could be very small; it shouldn't try to do all of what initdb does.\n> All it needs to do is fetch the password from stdin and then echo it\n> to stdout in an ALTER USER command. The invocation in initdb would\n\nOne more little utility lying around, not my favourite.\n\nWhat I had been fantasizing about is an initdb completely in C that\na) eliminates all shell incompatibilities\nb) doesn't depend on the grace of external utilities\nc) doesn't need any external files\n\nThe implementation idea behind c) was to include all the catalog/*.h files\ndirectly, having changed the DATA() and DESC() macros beforehand, thus\neliminating the need for .bki files, genbki.sh (which fortunately hadn't\nhad any compatibility problems), and another set of files being installed\nthat you don't really need at runtime.\n\nAlso you wouldn't need pg_version or pg_encoding which implies you don't\nneed libpq, which means you don't need to set LD_LIBRARY_PATH. 
The idea is\nthat initdb should run right out of the box after make install.\n\nI'm going to see if I can get something like this together before this\nthing goes out the door. But I urge you to give the potential advantages\nof this careful consideration.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 17:02:14 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "initdb (Re: [HACKERS] TODO list updated)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> You can't securely do\n>> echo $PASSWORD | backend\n>> or\n>> echo $PASSWORD > allegedly-secure-temp-file\n\n> This is secure. echo is a shell builtin, and does not invoke a separate\n> process with arguments.\n\necho is a builtin in ksh and derivatives, but I don't think it's safe\nto assume it is a builtin everywhere...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2000 11:47:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "\n> What I am starting to think is that we do need a C program. However,\n> it could be very small; it shouldn't try to do all of what initdb does.\n> All it needs to do is fetch the password from stdin and then echo it\n> to stdout in an ALTER USER command. The invocation in initdb would\n> look something like\n> \n>\t securepassword $SUPERUSERNAME | standalone-backend ...\n> \n> and the code would be on the order of\n> \n>\t fprintf(stderr, \"Please enter password for %s: \", argv[1]);\n>\t fgets(password, sizeof(password), stdin);\n> \t printf(\"ALTER USER %s PASSWORD '%s'\\n\", argv[1], password);\n \nWhy not something like:\n\n#include <stdio.h>\n#include <libpq-fe.h>\n char *pghost = NULL; /* host of the backend server */\n char *pgport = NULL; /* port of the backend server */\n char *pgoptions = NULL; /* special options to start up the backend server */\n char *pgtty = NULL; /* debugging tty for the backend server */\n char *dbName = \"template1\";\n char password[100];\n char query[200];\n PGconn *conn;\n PGresult *res;\n\n fprintf(stderr, \"Please enter password for %s: \", argv[1]);\n fgets(password, sizeof(password), stdin);\n conn = PQsetdb(pghost, pgport, pgoptions, pgtty, dbName);\n sprintf(query, \"ALTER USER postgres WITH PASSWORD '%s'\", password);\n res = PQexec(conn, query);\n PQclear(res);\n PQfinish(conn);\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n", "msg_date": "Thu, 13 Jan 2000 11:58:26 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> What I had been fantasizing about is an initdb completely in C that\n> a) eliminates all shell incompatibilities\n> b) doesn't depend on the grace of external utilities\n\nThese apparent advantages won't really be realized unless you propose\nto replace *all* our shell-scripts with C; so I'm not persuaded by those\narguments. 
However\n\n> c) doesn't need any external files\n\n> The implementation idea behind c) was to include all the catalog/*.h files\n> directly, having changed the DATA() and DESC() macros beforehand, thus\n> eliminating the need for .bki files, genbki.sh (which fortunately hadn't\n> had any compatibility problems), and another set of files being installed\n> that you don't really need at runtime.\n\nis very attractive indeed --- it'd eliminate the risk of incompatibility\nbetween genbki's interpretation of the catalog .h files and the C\ncompiler's interpretation thereof, as well as give us more flexibility\nin what we put in the .h files. (For example, I just finished hacking\nup genbki.sh to interpret \"INDEX_MAX_KEYS*2\" and \"INDEX_MAX_KEYS*4\"\ncorrectly. If we ever go to 8-byte oids, that code will need to be fixed\nagain. Whole problem goes away if the tables are processed by the C\ncompiler...)\n\nWhat I'd be inclined to think about is a compromise: leave initdb as\nmostly a shell script, but replace genbki.sh and the lib template files\nwith something that compiles up tables equivalent to the template files\nand when invoked spits out bootstrapping commands on its stdout. It'd\nbe very easy to test: diff its output against the existing template files.\n\n> Also you wouldn't need pg_version or pg_encoding which implies you don't\n> need libpq, which means you don't need to set LD_LIBRARY_PATH.\n\nAgain, not very interesting, since you won't get far until you have\nmade libpq.so accessible...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2000 11:58:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb (Re: [HACKERS] TODO list updated) " }, { "msg_contents": "Karl DeBisschop <[email protected]> writes:\n>> What I am starting to think is that we do need a C program. However,\n>> it could be very small; it shouldn't try to do all of what initdb does.\n \n> Why not something like:\n\n> [ fire up a postmaster and send it an ALTER USER command ]\n\nThat's got a race condition: at the time you start the postmaster,\nthe postgres superuser hasn't got a password. A bad guy could get\nin there and set the password the way *he* wanted it, or less\ndetectably: just connect as postgres, wait for you to set the password,\nthen read it out (he's still connected as postgres and still has\nsuperuser rights...)\n\nIf we thought that was acceptable, the whole issue of setting the\npassword in initdb (rather than doing it manually later on) wouldn't\nbe on the table. The idea is to have a password in place *before*\nopening the store.\n\nIf Bruce is correct that 'echo' is a shell builtin on all shells,\nthen\n\techo \"ALTER USER ...\" | standalone-backend\nseems like a sufficient solution. I am a little concerned about\nthat \"if\", but it may be a close-enough answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2000 12:10:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "On Thu, 13 Jan 2000, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> >> You can't securely do\n> >> echo $PASSWORD | backend\n> >> or\n> >> echo $PASSWORD > allegedly-secure-temp-file\n> \n> > This is secure. echo is a shell builtin, and does not invoke a separate\n> > process with arguments.\n> \n> echo is a builtin in ksh and derivatives, but I don't think it's safe\n> to assume it is a builtin everywhere...\n\nbash-2.03$ which echo\n/usr/slocal/bin/echo\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 13 Jan 2000 13:18:11 -0400 (AST)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> You can't securely do\n> >> echo $PASSWORD | backend\n> >> or\n> >> echo $PASSWORD > allegedly-secure-temp-file\n> \n> > This is secure. echo is a shell builtin, and does not invoke a separate\n> > process with arguments.\n> \n> echo is a builtin in ksh and derivatives, but I don't think it's safe\n> to assume it is a builtin everywhere...\n\nI believe it is safe. csh and sh have it built in. Does anyone know of\na shell that does not have echo builtin? How do you tell? Not sure.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jan 2000 12:22:23 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "> On Thu, 13 Jan 2000, Tom Lane wrote:\n> \n> > Bruce Momjian <[email protected]> writes:\n> > >> You can't securely do\n> > >> echo $PASSWORD | backend\n> > >> or\n> > >> echo $PASSWORD > allegedly-secure-temp-file\n> > \n> > > This is secure. echo is a shell builtin, and does not invoke a separate\n> > > process with arguments.\n> > \n> > echo is a builtin in ksh and derivatives, but I don't think it's safe\n> > to assume it is a builtin everywhere...\n> \n> bash-2.03$ which echo\n> /usr/slocal/bin/echo\n> \n\nwhich is an external program looking for another external program. From\nbash:\n\n\t#$ type echo\n\techo is a shell builtin\n\t#$ which which\n\t/usr/bin/which\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jan 2000 12:25:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "\n>That's got a race condition: at the time you start the postmaster,\n>the postgres superuser hasn't got a password. A bad guy could get\n>in there and set the password the way *he* wanted it\n\nOr he could `echo \"ALTER USER ...\" | standalone-backend` himself\n-- isn't that still a race condition?\n\n>or less detectably: just connect as postgres, wait for you to set the\n>password, then read it out (he's still connected as postgres and\n>still has superuser rights...)\n\nOr connect to the standalone backend, and create a trigger on ALTER\nUSER... to print the command to a file. Seems like echo doesn't solve\nthis vulnerability either.\n\nObviously I'm pretty naive here, so I'll just shut up after this. But\nfrom what I know of how these parts all work together, the echo\napproach has the same problems, but maybe to a somewhat smaller degree.\n\nAnd even if echo is a builtin in all shells, an alias will override\nthe builtin, at least in bash. 
So if your machine has been penetrated\nto the point where the above race condition comes into play, you also\ncannot trust echo.\n\nJust my $0.02 worth.\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n", "msg_date": "Thu, 13 Jan 2000 12:33:17 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "\n> bash-2.03$ which echo\n> /usr/slocal/bin/echo\n\nI don't think that test has bearing on whether echo is builtin.\nConsider the following:\n\nskillet.infoplease.com:/u/kdebisschop> export PATH=.:$PATH\nskillet.infoplease.com:/u/kdebisschop> which echo\n/usr/bin/echo\nskillet.infoplease.com:/u/kdebisschop> echo '#!/bin/echo trap door'>./echo\nskillet.infoplease.com:/u/kdebisschop> chmod +x echo \nskillet.infoplease.com:/u/kdebisschop> which echo\n/disk/1/home/kdebisschop/echo\nskillet.infoplease.com:/u/kdebisschop> ./echo foo\ntrap door ./echo foo\nskillet.infoplease.com:/u/kdebisschop> echo foo\nfoo\n\nSo bash is using the builtin, but which shows the script.\n\nBUT, for aliases (this is a totally separate shell, BTW):\n\nskillet.infoplease.com:/u/kdebisschop> alias echo='echo tarp door'\nskillet.infoplease.com:/u/kdebisschop> echo foo\ntarp door foo\nskillet.infoplease.com:/u/kdebisschop> which echo\n/usr/bin/echo\n\n-- \nKarl DeBisschop <[email protected]>\n617.832.0332 (Fax: 617.956.2696)\n\nInformation Please - your source for FREE online reference\nhttp://www.infoplease.com - Your Ultimate Fact Finder\nhttp://kids.infoplease.com - The Great Homework Helper\n\nNetsaint Plugins Development\nhttp://netsaintplug.sourceforge.net\n", "msg_date": "Thu, 13 Jan 2000 12:42:16 -0500", "msg_from": "Karl DeBisschop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> echo is a builtin in ksh and derivatives, but I don't think it's safe\n>> to assume it is a builtin everywhere...\n\n> I believe it is safe. csh and sh have it built in. Does anyone know of\n> a shell that does not have echo builtin? How do you tell? Not sure.\n\nI looked at the man pages for plain old Bourne shell on the oldest\nsystems I have access to (SunOS 4.1.4 and HPUX 9). They all say that\necho is a builtin. So I guess it's probably safe enough. There may\nbe a few hoary old machines where\n\techo \"ALTER USER ... $password ...\" | backend\nis a security risk, but it seems like it should be a very minimal\nproblem. (Especially since even a non-builtin echo should be a live\nprocess for only a *really* short interval, even if the backend takes\nlonger to process the command.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2000 13:29:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "Karl DeBisschop <[email protected]> writes:\n>> That's got a race condition: at the time you start the postmaster,\n>> the postgres superuser hasn't got a password. 
A bad guy could get\n>> in there and set the password the way *he* wanted it\n\n> Or he could `echo \"ALTER USER ...\" | standalone-backend` himself\n> -- isn't that still a race condition?\n\nNo, not unless he's already either root or postgres. Ordinary other\nusers can't run a standalone backend in your database (that's one reason\nwhy the toplevel data directory must always have permissions 700).\n\n> And even if echo is a builtin in all shells, an alias will override\n> the builtin, at least in bash. So if your machine has been penetrated\n> to the point where the above race condition comes into play, you also\n> cannot trust echo.\n\nAgain, if the attacker has already managed to modify your .profile,\nthen you've lost the game. What we're concerned about here is other\nusers on your machine or any of the machines that your pg_hba file\nallows connections from. Running ps while you are doing initdb, for\nexample, doesn't require any special preconditions beyond a regular\nuser account on the same machine you are on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2000 13:38:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated " }, { "msg_contents": "Bruce Momjian wrote:\n> > * Make postgres user have a password by default\n\n> > There's an initdb switch.\n \n> OK, now we have to decide if we are going to require this be done as\n> part of initdb. I am inclined to say the user _has_ to be _prompted_ in\n> a secure manner for the password as part of initdb. Having a command-line\n> switch for the password is not secure, IMHO, though it is better than\n> nothing.\n \n> Let's get people's opinions on this, and we can mark it as done.\n\nAs a packager, and a user, I would like the _option_ of setting a\ndefault password using a --prompt-for-password switch.\n\nBy all means don't make it default to prompting for a password -- there\nare those who do not need a password on the database superuser account,\ndue to other security measures and connection models (i.e., backing a\nwebserver that is handling authentication and pooling connections under\na single (nonprivileged) user).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 13 Jan 2000 14:47:54 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] TODO list updated" } ]
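For concreteness, here is a minimal sketch of the small C helper discussed in the thread above. It is not code from the PostgreSQL tree: the name securepassword is just the placeholder used in the messages, and the prompts and buffer sizes are invented for illustration. It does the "pushups" Tom lists (turning off tty echoing via termios, reading the password twice with a retry loop) and emits the ALTER USER command on stdout, so initdb can pipe it straight into a standalone backend without the password ever appearing in an argv or a temp file.

	/* securepassword.c -- hypothetical helper sketched in the thread above */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>
	#include <termios.h>

	/* Prompt on stderr and read a line from stdin with tty echo disabled. */
	static void read_password(const char *prompt, char *buf, size_t len)
	{
		struct termios t;

		tcgetattr(STDIN_FILENO, &t);
		t.c_lflag &= ~ECHO;					/* echo off */
		tcsetattr(STDIN_FILENO, TCSAFLUSH, &t);

		fprintf(stderr, "%s", prompt);
		if (fgets(buf, (int) len, stdin) == NULL)
			exit(1);
		buf[strcspn(buf, "\n")] = '\0';		/* strip trailing newline */

		t.c_lflag |= ECHO;					/* echo back on */
		tcsetattr(STDIN_FILENO, TCSAFLUSH, &t);
		fprintf(stderr, "\n");
	}

	int main(int argc, char **argv)
	{
		char pass1[100], pass2[100];

		if (argc != 2)
		{
			fprintf(stderr, "usage: %s username\n", argv[0]);
			return 1;
		}
		for (;;)
		{
			read_password("Enter password: ", pass1, sizeof(pass1));
			read_password("Re-enter password: ", pass2, sizeof(pass2));
			if (strcmp(pass1, pass2) == 0)
				break;
			fprintf(stderr, "Passwords do not match, try again.\n");
		}
		/* The password appears only on our stdout, never in an argv. */
		printf("ALTER USER %s WITH PASSWORD '%s';\n", argv[1], pass1);
		return 0;
	}

Run as `securepassword $SUPERUSERNAME | standalone-backend ...`, the password is visible neither to ps nor in shell history, which is the property the whole thread is after.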
[ { "msg_contents": "> I'm Korean. and I can't speak English well.\n> I want to know PostgreSQL 6.5.3 's int8(array)\n> My System is Redhat 6.1, Pentium.\n> and I make array int8 data type.\n> so I make int8 by ./contrib/int8 package.\n\nint8 is now a built-in type, but...\n\n> create table lr(val int8[][][]);\n> ERROR: Unable to locate type name '_int8' in catalog\n> Why ERROR: Unable to locate type name '_int8' in catalog\n> but Not use array ---> Ok.\n\nIt looks like you have uncovered a bug in Postgres; there is no\ncatalog entry for the int8 array type. It should be fixed in the next\nrelease.\n\nI am sending you a patch, but my Postgres installation is torn apart\nat the moment so it is not tested!! Perhaps one of the other\ndevelopers will be able to test and commit a patch to fix this problem\nsooner than I.\n\nThanks for the report, and sorry that it currently does not work for\nyou.\n\nThe patch should be applied to the source code as follows:\n\n1) cd src/include/catalog\n2) patch < pg_type.h.patch\n3) cd ../.. # (to src directory)\n4) make clean\n5) make install\n6) rm -rf ../data # or wherever your postgres data directory is\n7) initdb\n8) restart postmaster\n\nGood luck.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California", "msg_date": "Wed, 12 Jan 2000 15:50:19 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: Help Me]" } ]
[ { "msg_contents": "\n> > It would be:\n> > \n> > \tSELECT *\n> > \tFROM tab1, OUTER tab2\n> > \tWHERE tab1.col1 = tab2.col2\n> \n> What about >2 table joins? Wish I had my book here, but I though tyou\n> could do multiple OUTER joins, no?\n\nselect * from tab1, OUTER tab2, OUTER (tab3, tab4), tab5, \n\tOUTER tab6, OUTER (tab7, OUTER tab8) where ......\n\nimho understandable syntax.\n\nAndreas\n", "msg_date": "Wed, 12 Jan 2000 18:20:46 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Enhancing PGSQL to be compatible with Informix SQL" } ]
[ { "msg_contents": "I set my time zone to GMT+8 today (because the \"official\" timezone prescribed\nby the FreeBSD timezone database for my location is \"CST\", which was causing\nall sorts of other problems elsewhere).\n\nTonight, when I did my nightly data transfer (consisting of \"copy to\" a\nbunch of tables and concatenating them together into a \"pg_dump\" type of\nscript file, copying the file over, and then loading with psql), the \nbackend was very unhappy.\n\nEvery \"copy from\" block that contained a date crashed. This was particularly\nunpleasant, because the script is bracketed within a single transaction block,\nand each table is emptied (\"delete from\") before new data is copied in. As\na result of the crashes, the transaction aborted, but psql kept on processing\naway, emptying tables, crashing, and repeat.\n\nThis was on a soon-to-be production e-commerce server.\n\nI was able to recover in a few minutes by manually editing the script file\nto replace all \"GMT+8\" with \"+0800\". Had this happened during an automated\ntransfer on a live system, however, the problem could have been severe.\n\nI assume that my backups are similarly corrupted.\n\nI looked through dt.c, and ParseDateTime appears to assume that timezones\nare either strictly alphabetic or of the form \"+0000\". EncodeDateTime,\non the other hand, blindly spits out whatever the operating system gives it\nfrom localtime().\n\nIt seems to me there are two separate problems:\n 1. x == datetime_in(datetime_out(x)) should always be true for all valid x.\n 2. psql should exit with an error status if it receives a fatal error\n from the backend and isatty(0) is false.\n\n\t-Michael Robinson\n\n", "msg_date": "Thu, 13 Jan 2000 02:21:15 +0800 (CST)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Copy from/to asymmetry" }, { "msg_contents": "> I set my time zone to GMT+8 today (because the \"official\" timezone prescribed\n> by the FreeBSD timezone database for my location is \"CST\", which was causing\n> all sorts of other problems elsewhere).\n...\n> I was able to recover in a few minutes by manually editing the script file\n> to replace all \"GMT+8\" with \"+0800\".\n...\n> I assume that my backups are similarly corrupted.\n\nSure, if you were running with a similarly unusual timezone format.\n\n> I looked through dt.c, and ParseDateTime appears to assume that timezones\n> are either strictly alphabetic or of the form \"+0000\".\n\nRight, those are the forms we have seen or heard about (the minutes\nfield in the second form is optional). Yours is a new one for me.\n\n> EncodeDateTime,\n> on the other hand, blindly spits out whatever the operating system gives it\n> from localtime().\n\nYup. afaik this is the only way to get daylight savings time info\nsince there is no api to do so otherwise. Since this is the very first\nreport of this style of timezone, I don't feel too guilty, and it will\nbe easy to fix (I hope anyway).\n\n> 1. x == datetime_in(datetime_out(x)) should always be true for all valid x.\n\nImpossible to do apriori, given that we rely on the system to provide\ntimezone info for output. However, we try to fix all unusual cases,\nand afaik there are no reasonable formats we have rejected for\nsupport. 
I'm leaving town for a couple of days, but will look at it\nwhen I return.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 13 Jan 2000 04:51:31 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Copy from/to asymmetry" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> EncodeDateTime,\n>> on the other hand, blindly spits out whatever the operating system gives it\n>> from localtime().\n>\n>Yup. afaik this is the only way to get daylight savings time info\n>since there is no api to do so otherwise. Since this is the very first\n>report of this style of timezone, I don't feel too guilty, and it will\n>be easy to fix (I hope anyway).\n\nThe GMT+8 format is part of the POSIX standard (at least according to \nthe zoneinfo source file). In the meantime, I've created a new zoneinfo \nfile with ISO \"+0800\" format, as a workaround. (To make matters worse, I\ndiscovered that POSIX GMT+8 == ISO -0800 ; in other words, the semantics of\nthe sign character are reversed in the two standards.)\n\n>> 1. x == datetime_in(datetime_out(x)) should always be true for all valid x.\n\n>Impossible to do apriori, given that we rely on the system to provide\n>timezone info for output. However, we try to fix all unusual cases,\n>and afaik there are no reasonable formats we have rejected for\n>support.\n\nPerhaps, if the system supports strptime(), this function could be used as\na last-ditch effort by ParseDateTime before returning an error. That would\nsolve all cases where the datetime_in timezone equals the system timezone\nsetting.\n\nOr, maybe just use strptime() outright. I don't know, it's just a suggestion.\n\n\t-Michael Robinson\n\n", "msg_date": "Thu, 13 Jan 2000 13:51:52 +0800 (+0800)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Copy from/to asymmetry" }, { "msg_contents": "On 2000-01-13, Michael Robinson mentioned:\n\n> 2. psql should exit with an error status if it receives a fatal error\n> from the backend and isatty(0) is false.\n\nThis is already accomplished in the current sources.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Thu, 13 Jan 2000 19:30:06 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Copy from/to asymmetry" }, { "msg_contents": "> The GMT+8 format is part of the POSIX standard (at least according to\n> the zoneinfo source file). In the meantime, I've created a new zoneinfo\n> file with ISO \"+0800\" format, as a workaround. (To make matters worse, I\n> discovered that POSIX GMT+8 == ISO -0800 ; in other words, the semantics of\n> the sign character are reversed in the two standards.)\n\nYuck.\n\n> Perhaps, if the system supports strptime(), this function could be used as\n> a last-ditch effort by ParseDateTime before returning an error. That would\n> solve all cases where the datetime_in timezone equals the system timezone\n> setting.\n\nHow? strptime() needs a formatting string, so you would somehow need\nto set it beforehand to *exactly* the correct value. And...\n\n> Or, maybe just use strptime() outright. I don't know, it's just a suggestion.\n\nThe other problem with using system-supplied routines for this is that\nthey invariably fail for years outside the Unix system time range. 
So\nwe need to do enough parsing to figure out what the year might be, and\nby that time we may as well finish it ourselves...\n\nAnyway, I'll be looking at it sometime soon.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 14 Jan 2000 15:12:06 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Copy from/to asymmetry" } ]
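To make the parsing gap concrete: a recognizer for the numeric POSIX-style suffix has to accept "GMT+8" and flip the sign, since, as Michael found, POSIX offsets count positive going west while ISO offsets count positive going east. The sketch below is not the actual dt.c code; the function name and the minutes-east-of-Greenwich convention are invented for illustration:

	#include <ctype.h>
	#include <stdlib.h>
	#include <string.h>

	/*
	 * Recognize a POSIX-style "GMT+h[:mm]" / "UTC-h[:mm]" suffix and
	 * return 0 with *offset set in minutes east of Greenwich, else -1.
	 * Note the sign flip: POSIX GMT+8 means 8 hours west, i.e. ISO -0800.
	 */
	static int
	posix_tz_offset(const char *tz, int *offset)
	{
		int		sign, hours, minutes = 0;
		char   *p;

		if (strncmp(tz, "GMT", 3) != 0 && strncmp(tz, "UTC", 3) != 0)
			return -1;
		tz += 3;
		if (*tz == '+')
			sign = -1;			/* POSIX '+' is west of Greenwich */
		else if (*tz == '-')
			sign = +1;
		else
			return -1;
		tz++;
		if (!isdigit((unsigned char) *tz))
			return -1;
		hours = (int) strtol(tz, &p, 10);
		if (*p == ':')
			minutes = (int) strtol(p + 1, &p, 10);
		if (*p != '\0' || hours > 24 || minutes > 59)
			return -1;
		*offset = sign * (hours * 60 + minutes);
		return 0;
	}

With something like this in front of the existing alphabetic and "+0000" cases, a dump written under a GMT+8 zone would reload cleanly, restoring the x == datetime_in(datetime_out(x)) property for that format.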
[ { "msg_contents": "subscribe\n\n", "msg_date": "Thu, 13 Jan 2000 04:23:35 +0900 (JST)", "msg_from": "Hideyuki Kawashima <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "I hope to have fixed all psql bugs that came up in the last month of my\nabsence. (The array syntax issue itself is not included.) I'd particularly\nbe interested whether the readline related compilation problem is gone,\nsince by readline's CHANGELOG I cannot decode when or where the problem\nwas introduced or removed. In addition, the frequent end user problem \"I\ndid \\dt and all my tables were gone\" has been eliminated.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Wed, 12 Jan 2000 20:44:25 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "psql updates" }, { "msg_contents": "> was introduced or removed. In addition, the frequent end user problem \"I\n> did \\dt and all my tables were gone\" has been eliminated.\n\nHuge fix for us. Great.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 16:02:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql updates" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> ... I'd particularly be interested whether the readline related\n> compilation problem is gone, since by readline's CHANGELOG I cannot\n> decode when or where the problem was introduced or removed.\n\nIt was still there, but I fixed it. Given the lack of any clear version\ninfo for libreadline, adding a configure-time test seems to be the\nway to go.\n\nThe particular problem I saw was that the exported variable\nrl_completion_append_character doesn't exist in old versions of\nlibreadline. (How old? I dunno, but a RedHat 4.2 box I have access to\nhas a libreadline that's like that.) I arranged to #ifdef out psql's\nattempt to set the variable unless configure sees that the variable\nis declared in <libreadline.h>. With that change, psql builds\nsuccessfully against that libreadline version. The tab-completion\nbehavior seems a little flaky (if you press tab when you don't have\na partial keyword typed, it wipes out whatever word you do have typed)\nbut I doubt it is worth trying to fix that. I'm satisfied if psql\nbuilds and is usable --- anyone who complains about the tab behavior\ncan be told they need a newer libreadline.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Jan 2000 11:37:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql updates " }, { "msg_contents": "On 2000-01-22, Tom Lane mentioned:\n\n> The tab-completion\n> behavior seems a little flaky (if you press tab when you don't have\n> a partial keyword typed, it wipes out whatever word you do have typed)\n\nIn some cases this might be intentional. For example, when you enter\n=> insert xx<tab>\nthen the xx gets replaced by INTO because it's the only valid thing to put\nthere anyway. 
If you observed something different, then I'd be interested\nin looking at it.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n", "msg_date": "Tue, 25 Jan 2000 00:48:33 +0100 (CET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql updates " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 2000-01-22, Tom Lane mentioned:\n>> The tab-completion\n>> behavior seems a little flaky (if you press tab when you don't have\n>> a partial keyword typed, it wipes out whatever word you do have typed)\n\n> In some cases this might be intentional. For example, when you enter\n> => insert xx<tab>\n> then the xx gets replaced by INTO because it's the only valid thing to put\n> there anyway. If you observed something different, then I'd be interested\n> in looking at it.\n\nWhat I see with this ancient libreadline is\n\n\tSELECT zz<tab>\n\nzz is wiped out and replaced by a single space. However, this does\n*not* happen with more modern readlines; and since I don't even know\nwhere to get source code corresponding to the readline that's on this\nold Linux system, I doubt it's worth worrying about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Jan 2000 18:57:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql updates " } ]
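For anyone chasing the same problem against other readline symbols, the guard Tom describes is just a configure-detected macro around the assignment. A sketch of the pattern follows; HAVE_RL_COMPLETION_APPEND_CHARACTER stands for whatever name the configure test actually defines, and the surrounding function is invented for illustration:

	#include <readline/readline.h>

	void
	initialize_readline(void)
	{
		rl_readline_name = "psql";

	#ifdef HAVE_RL_COMPLETION_APPEND_CHARACTER
		/* this exported variable is missing from old libreadline versions */
		rl_completion_append_character = ' ';
	#endif
	}

With the #ifdef in place, psql still builds against an old libreadline; it merely loses the nicety of the space appended after a completed word.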
[ { "msg_contents": "Looking in the Informix manuals, I now see how they handle complex outer\njoins:\n\n\tSELECT *\t\n\tFROM tab1, OUTER(tab2, tab3)\n\tWHERE tab1.col1 = tab2.col1 AND\n\t tab2.col1 = tab3.col1\n\nIt does the tab2, tab3 join first, then _outer_ joins to tab1. \nInteresting.\n\n\t\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 15:38:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Informix and OUTER join syntax" } ]
[ { "msg_contents": "> Looking in the Informix manuals, I now see how they handle complex outer\n> joins:\n> \n> \tSELECT *\t\n> \tFROM tab1, OUTER(tab2, tab3)\n> \tWHERE tab1.col1 = tab2.col1 AND\n> \t tab2.col1 = tab3.col1\n> \n> It does the tab2, tab3 join first, then _outer_ joins to tab1. \n> Interesting.\n\n\nHere is another example that does a double outer join:\n\n \tSELECT *\t\n \tFROM tab1, OUTER(tab2, OUTER tab3)\n \tWHERE tab1.col1 = tab2.col1 AND\n \t tab2.col1 = tab3.col1\n\nIt does the tab2, tab3 as an _outer_ join first, then _outer_ joins to\ntab1. Even more interesting.\n\nCan someone show me this in ANSI syntax?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 15:43:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Informix and OUTER join syntax" }, { "msg_contents": "At 03:43 PM 1/12/00 -0500, Bruce Momjian wrote:\n\n> \tSELECT *\t\n> \tFROM tab1, OUTER(tab2, OUTER tab3)\n> \tWHERE tab1.col1 = tab2.col1 AND\n> \t tab2.col1 = tab3.col1\n>\n>It does the tab2, tab3 as an _outer_ join first, then _outer_ joins to\n>tab1. Even more interesting.\n>\n>Can someone show me this in ANSI syntax?\n\nAlong the lines of\n\nSELECT *\nFROM tab1 RIGHT JOIN (tab2 RIGHT JOIN tab3 on col1) on col1\n\nmore or less. No where clause is needed, of course.\n\nI left my copy of Date's book back in Boston so can't be\nprecise, guess I'll have to go visit my girlfriend ASAP!\n\nThomas will probably make it clear I'm all wet here, but by\ntrying to generate SQL-92 queries myself I'm hoping I'll learn\nsomething.\n\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Wed, 12 Jan 2000 13:13:08 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Informix and OUTER join syntax" }, { "msg_contents": "> > SELECT *\n> > FROM tab1, OUTER(tab2, OUTER tab3)\n> > WHERE tab1.col1 = tab2.col1 AND\n> > tab2.col1 = tab3.col1\n> >It does the tab2, tab3 as an _outer_ join first, then _outer_ joins to\n> >tab1. Can someone show me this in ANSI syntax?\n> SELECT *\n> FROM tab1 RIGHT JOIN (tab2 RIGHT JOIN tab3 on col1) on col1\n\nPretty sure this is correct (assuming that the Informix syntax is\nshowing a right-side outer join). istm that SQL92 is clearer, in the\nsense that the WHERE clause in the Informix syntax specifies that\ncolumns shall be equal, when in fact there is an implicit \"or no\ncolumn matches\" coming from the OUTER specification. SQL92 uses unique\nsyntax to specify this.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 13 Jan 2000 05:00:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Informix and OUTER join syntax" }, { "msg_contents": "At 05:00 AM 1/13/00 +0000, Thomas Lockhart wrote:\n>> > SELECT *\n>> > FROM tab1, OUTER(tab2, OUTER tab3)\n>> > WHERE tab1.col1 = tab2.col1 AND\n>> > tab2.col1 = tab3.col1\n>> >It does the tab2, tab3 as an _outer_ join first, then _outer_ joins to\n>> >tab1. 
Can someone show me this in ANSI syntax?\n>> SELECT *\n>> FROM tab1 RIGHT JOIN (tab2 RIGHT JOIN tab3 on col1) on col1\n>\n>Pretty sure this is correct (assuming that the Informix syntax is\n>showing a right-side outer join). istm that SQL92 is clearer, in the\n>sense that the WHERE clause in the Informix syntax specifies that\n>columns shall be equal, when in fact there is an implicit \"or no\n>column matches\" coming from the OUTER specification. SQL92 uses unique\n>syntax to specify this.\n\nAnd if I understand SQL92 correctly, if tab1, tab2, and tab3 only\nshare col1 in common, then you can further simplify:\n\nSELECT *\nFROM tab1 NATURAL RIGHT JOIN (tab2 NATURAL RIGHT JOIN tab3)\n\nIs that right? Again, I'm missing my Date SQL 92 primer...and some\nmight argue this is less clear than explicitly listing the column(s)\nto join on.\n\nAnyway, thanks for the verification of my first stab at this, I think\nI'm getting a feel for the notation.\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Thu, 13 Jan 2000 06:55:14 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Informix and OUTER join syntax" }, { "msg_contents": "> And if I understand SQL92 correctly, if tab1, tab2, and tab3 only\n> share col1 in common, then you can further simplify:\n> SELECT *\n> FROM tab1 NATURAL RIGHT JOIN (tab2 NATURAL RIGHT JOIN tab3)\n> Is that right? ...and some\n> might argue this is less clear than explicitly listing the column(s)\n> to join on.\n\nBut this is \"natural\", right? ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 14 Jan 2000 15:14:22 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Informix and OUTER join syntax" }, { "msg_contents": "At 03:14 PM 1/14/00 +0000, Thomas Lockhart wrote:\n>> And if I understand SQL92 correctly, if tab1, tab2, and tab3 only\n>> share col1 in common, then you can further simplify:\n>> SELECT *\n>> FROM tab1 NATURAL RIGHT JOIN (tab2 NATURAL RIGHT JOIN tab3)\n>> Is that right? ...and some\n>> might argue this is less clear than explicitly listing the column(s)\n>> to join on.\n>\n>But this is \"natural\", right? ;)\n\nCute! I have no experience trying to read and understand other\npeople's queries using SQL 92 outer joins so can't really say\nwhether the \"natural\" style is more clear than the more cumbersome\nexplicit notation.\n\nI think both forms are quite readable, though.\n\n\n\n\n- Don Baccus, Portland OR <[email protected]>\n Nature photos, on-line guides, Pacific Northwest\n Rare Bird Alert Service and other goodies at\n http://donb.photo.net.\n", "msg_date": "Fri, 14 Jan 2000 08:33:41 -0800", "msg_from": "Don Baccus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Informix and OUTER join syntax" } ]
[ { "msg_contents": "We are about 3 weeks from the scheduled 7.0 beta.\n\nI am interested in hearing a status on our open items:\n\n\tforeign keys/activity queue - Jan\n\tlong tuples/TOAST - Jan\n\touter joins - Thomas\n Date/Time types - Thomas\n\nPostponed:\n\n WAL - Vadim\n Function args - Tom\n\nIf people need more time, let us know.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 17:41:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Status request for 7.0" }, { "msg_contents": "\nOn Wed, 12 Jan 2000, Bruce Momjian wrote:\n\n> We are about 3 weeks from the scheduled 7.0 beta.\n> \n> I am interested in hearing a status on our open items:\n> \n> \tforeign keys/activity queue - Jan\n> \tlong tuples/TOAST - Jan\n> \touter joins - Thomas\n> Date/Time types - Thomas\n> \n\n I'm finishing to_char()'s family routines now (it is 8 routines). \nI'd like remove it to main tree next week (I will send patch). \nAgree Thomas?\n\n\t\t\t\t\t\t\tKarel \n\n", "msg_date": "Thu, 13 Jan 2000 11:11:51 +0100 (CET)", "msg_from": "Karel Zak - Zakkr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Status request for 7.0" }, { "msg_contents": "> I'm finishing to_char()'s family routines now (it is 8 routines).\n> I'd like remove it to main tree next week (I will send patch).\n> Agree Thomas?\n\nShould be fine.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 14 Jan 2000 15:15:01 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Status request for 7.0" } ]
[ { "msg_contents": "Hi,\n\nI have a plan to clean up the usage of putenv(), getenv() in libpq+MB\nconfiguration. This needs some interface changes with libpq in the\nfrontend side. I'm not sure this is visible to end users or not, and I\nwould like to hear from hackes.\n\nFirst of all, I would like to explain the current implementation.\n\n(1) While establishing a connection, if the environment variable\nPGCLIENTENCODING is not set, libpq asks the backend what the encoding\nfor the database is. This is done by sending a query \"select\ngetdatabaseencoding()\". In this case, both the backend and the\nfrontend uses same encoding and its name is set to the\nPGCLIENTENCODING environment variable for the later use.\n(fe-connect.c: PQsetenvPoll())\n\n(2) When libpq prints the result of a query, it needs to determine the\nlength of a multi-byte letter. For this purpose, getenv() is called to\nknow the encoding name. (fe-print.c)\n\nAbove implementation has a design flaw since it is not multithread-safe.\nTo fix the problem, I would like to make changes as follows:\n\n(1) Add a new member \"int client_encoding\" to struct pg_conn.\n\n(2) Add an argument which is a pointer to PGconn to PQsetenvPoll() so\nthat the client encoding can be set in (1) above.\n\n(3) Add a new function PQclientencoding() to extract client_encoding\nfrom PGconn.\n\n(4) Change PQmblen() so that it extracts encoding info using\nPQclientencoding() rather than calling getenv(). This also requires\nadd an argument which is a pointer to PGconn.\n\n(5) Change fe-print.c:do_filed() to add an argument which is a pointer to\nPGconn.\n\nComments and suggestions are welcome.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 13 Jan 2000 10:58:45 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "libpq+MB/putenv(), getenv() clean up" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I have a plan to clean up the usage of putenv(), getenv() in libpq+MB\n> configuration. This needs some interface changes with libpq in the\n> frontend side. I'm not sure this is visible to end users or not, and I\n> would like to hear from hackes.\n\nI think it is a very good idea to remove getenv() from PQmblen().\ngetenv() is rather slow, at least on the machines I use, and having\nto do it for each character processed is a huge performance hit.\n\nPQmblen is exported by libpq (psql is an example of an application\nthat uses it). So very possibly, changing it would break a few client\napplications. A possible answer is to leave PQmblen alone, and invent\na new routine with a different name that looks at PGconn. We could\ndeprecate PQmblen and delete it after a few releases. I'm not sure\nif this is worth the trouble or not --- maybe it's OK to make a non-\ncompatible change to PQmblen.\n\n> (1) While establishing a connection, if the environment variable\n> PGCLIENTENCODING is not set, libpq asks the backend what the encoding\n> for the database is.\n\n> Above implementation has a design flaw since it is not multithread-safe.\n\nYou would still do one getenv() during connection setup, right, to see\nif PGCLIENTENCODING is set? 
If you don't, that would be a significant\nchange in behavior that might make a lot of people unhappy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2000 22:05:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpq+MB/putenv(), getenv() clean up " }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > I have a plan to clean up the usage of putenv(), getenv() in libpq+MB\n> > configuration. This needs some interface changes with libpq in the\n> > frontend side. I'm not sure whether this is visible to end users or not,\n> > and I would like to hear from hackers.\n> \n> I think it is a very good idea to remove getenv() from PQmblen().\n> getenv() is rather slow, at least on the machines I use, and having\n> to do it for each character processed is a huge performance hit.\n\nYikes, we are calling it for every character. I think it munges through\nthe entire process environment space looking for a value.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 12 Jan 2000 22:36:04 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpq+MB/putenv(), getenv() clean up" }, { "msg_contents": "> I think it is a very good idea to remove getenv() from PQmblen().\n> getenv() is rather slow, at least on the machines I use, and having\n> to do it for each character processed is a huge performance hit.\n\nI didn't notice that. Thanks for the point.\n\n> PQmblen is exported by libpq (psql is an example of an application\n> that uses it). So very possibly, changing it would break a few client\n> applications. A possible answer is to leave PQmblen alone, and invent\n> a new routine with a different name that looks at PGconn. We could\n> deprecate PQmblen and delete it after a few releases. I'm not sure\n> if this is worth the trouble or not --- maybe it's OK to make a non-\n> compatible change to PQmblen.\n\nWith the changes I propose, PQmblen() would not work anymore\nanyway. I'll post to the interfaces list about the incompatible changes.\n\n> You would still do one getenv() during connection setup, right, to see\n> if PGCLIENTENCODING is set?\n\nYes. \n\n>If you don't, that would be a significant\n> change in behavior that might make a lot of people unhappy.\n\nSo the behavior won't be changed.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 13 Jan 2000 12:52:12 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] libpq+MB/putenv(), getenv() clean up " }, { "msg_contents": "Hi,\n\nTatsuo Ishii <[email protected]> wrote:\n> \n> I have a plan to clean up the usage of putenv(), getenv() in libpq+MB\n> configuration. This needs some interface changes with libpq in the\n> frontend side. 
I'm not sure whether this is visible to end users or not, and I\n> would like to hear from hackers.\n\n This plan is welcome!\n\n\n> The above implementation has a design flaw since it is not multithread-safe.\n> To fix the problem, I would like to make changes as follows:\n> \n> (1) Add a new member \"int client_encoding\" to struct pg_conn.\n> \n> (2) Add an argument which is a pointer to PGconn to PQsetenvPoll() so\n> that the client encoding can be set in (1) above.\n> \n> (3) Add a new function PQclientencoding() to extract client_encoding\n> from PGconn.\n> \n> (4) Change PQmblen() so that it extracts encoding info using\n> PQclientencoding() rather than calling getenv(). This also requires\n> adding an argument which is a pointer to PGconn.\n> \n> (5) Change fe-print.c:do_field() to add an argument which is a pointer to\n> PGconn.\n> \n> Comments and suggestions are welcome.\n\n Do those changes mean that libpq (or psql) always has the MULTIBYTE\nfunction?\n\n Now, libpq's MULTIBYTE function is a compile option (--with-mb).\nBut, I always want the MULTIBYTE function, even if PostgreSQL has\n*not* been built with the \"--with-mb\" option. Because I want to access\ntwo kinds of PostgreSQL servers (named A and B) by using B's psql.\n(Here, A server is \"--with-mb\" and B server is not \"--with-mb\".)\n\nRegards,\nSAKAIDA Masaaki -- Osaka, Japan\n\n\n", "msg_date": "Thu, 13 Jan 2000 13:18:56 +0900", "msg_from": "SAKAIDA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpq+MB/putenv(), getenv() clean up" }, { "msg_contents": "On Thu, 13 Jan 2000, Tatsuo Ishii wrote:\n\n> Hi,\n> \n> I have a plan to clean up the usage of putenv(), getenv() in libpq+MB\n> configuration. This needs some interface changes with libpq in the\n> frontend side. I'm not sure whether this is visible to end users or not,\n> and I would like to hear from hackers.\n> \n> First of all, I would like to explain the current implementation.\n> \n> (1) While establishing a connection, if the environment variable\n> PGCLIENTENCODING is not set, libpq asks the backend what the encoding\n> for the database is. This is done by sending a query \"select\n> getdatabaseencoding()\". In this case, both the backend and the\n> frontend use the same encoding and its name is set to the\n> PGCLIENTENCODING environment variable for later use.\n> (fe-connect.c: PQsetenvPoll())\n\nWhatever you do, please do not set any environment variables from within\nprograms. It's evil. Consider the user leaving the database and connecting\nto another, but then PGCLIENTENCODING is already set to what would be\ninterpreted as his \"preference\", but maybe he wants the backend to decide.\nI saw you had some hacks in psql for working around this, but psql is not\nevery application. I think what you are suggesting below would incorporate\nthis change, I just want to express it explicitly.\n\n> \n> (2) When libpq prints the result of a query, it needs to determine the\n> length of a multi-byte letter. For this purpose, getenv() is called to\n> know the encoding name. 
(fe-print.c)\n> \n> Above implementation has a design flaw since it is not multithread-safe.\n> To fix the problem, I would like to make changes as follows:\n> \n> (1) Add a new member \"int client_encoding\" to struct pg_conn.\n> \n> (2) Add an argument which is a pointer to PGconn to PQsetenvPoll() so\n> that the client encoding can be set in (1) above.\n> \n> (3) Add a new function PQclientencoding() to extract client_encoding\n> from PGconn.\n\nHow about PQencoding()?\n\n> \n> (4) Change PQmblen() so that it extracts encoding info using\n> PQclientencoding() rather than calling getenv(). This also requires\n> adding an argument which is a pointer to PGconn.\n\nHow about PQmblen(character, encoding)? Then you could call it PQmblen(c,\nPQclientencoding(conn)) or PQmblen(c, other_encoding). That makes it more\ngeneral.\n\n> \n> (5) Change fe-print.c:do_field() to add an argument which is a pointer to\n> PGconn.\n> \n> Comments and suggestions are welcome.\n> --\n> Tatsuo Ishii\n> \n> ************\n> \n> \n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 12:00:28 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpq+MB/putenv(), getenv() clean up" }, { "msg_contents": "On Thu, 13 Jan 2000, SAKAIDA wrote:\n\n> Do those changes mean that libpq (or psql) always has the MULTIBYTE\n> function?\n> \n> Now, libpq's MULTIBYTE function is a compile option (--with-mb).\n> But, I always want the MULTIBYTE function, even if PostgreSQL has\n> *not* been built with the \"--with-mb\" option. Because I want to access\n> two kinds of PostgreSQL servers (named A and B) by using B's psql.\n> (Here, A server is \"--with-mb\" and B server is not \"--with-mb\".)\n\nAah, that's a good point, but an always-on psql and libpq are much slower.\nBut you could use the psql that stems from the multibyte compile, or not?\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 12:23:49 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpq+MB/putenv(), getenv() clean up" }, { "msg_contents": "> Whatever you do, please do not set any environment variables from within\n> programs. It's evil. \n\nOf course it's in my plan. I will eliminate them.\n\n> > (3) Add a new function PQclientencoding() to extract client_encoding\n> > from PGconn.\n> \n> How about PQencoding()?\n\nHumm... someday we may have PQdatabaseencoding(). So I think it's\nbetter to have \"client\" in it so the two are not confused.\n\n> How about PQmblen(character, encoding)? Then you could call it PQmblen(c,\n> PQclientencoding(conn)) or PQmblen(c, other_encoding). That makes it more\n> general.\n\nGood idea.
Agreed.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 15 Jan 2000 14:31:31 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] libpq+MB/putenv(), getenv() clean up" }, { "msg_contents": "I have committed the changes below.\n\n> (1) Add a new member \"int client_encoding\" to struct pg_conn.\n\ndone.\n\n> (2) Add an argument which is a pointer to PGconn to PQsetenvPoll() so\n> that the client encoding can be set in (1) above.\n\nRather than adding a new parameter, I changed the argument to PGconn *.\n\n> (3) Add a new function PQclientencoding() to extract client_encoding\n> from PGconn.\n\ndone.\n\n> (4) Change PQmblen() so that it extracts encoding info using\n> PQclientencoding() rather than calling getenv(). This also requires\n> adding an argument which is a pointer to PGconn.\n\nNow,\n\nextern int\tPQmblen(const unsigned char *s, int encoding);\n\n(Thanks go to Peter for the suggestion)\n\n> (5) Change fe-print.c:do_field() to add an argument which is a pointer to\n> PGconn.\n\nI found the argument PGresult *res of do_field() has a pointer to\nPGconn. So I did not need to change the interface.\n\n(6) Lots of changes have been made to psql to adapt to the changes above.\n\nThough I have run the regression test with/without multibyte and did\nnot find any particular problems, please let me know if you find anything\nwrong with those changes.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 15 Jan 2000 14:52:42 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] libpq+MB/putenv(), getenv() clean up" } ]
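As a concrete illustration of the committed interface, here is a minimal, hypothetical usage sketch. It assumes nothing beyond the two calls named in this thread -- PQclientencoding(), which returns the integer encoding id now stored in PGconn, and the two-argument PQmblen() -- plus standard libpq connection handling; error handling is pared down to the essentials.

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Walk a string one character at a time, using the per-connection
     * encoding instead of a getenv() lookup per character. */
    static void walk_multibyte(PGconn *conn, const unsigned char *s)
    {
        int encoding = PQclientencoding(conn);  /* one lookup, up front */

        while (*s)
        {
            int len = PQmblen(s, encoding);     /* bytes in this character */

            printf("character of %d byte(s)\n", len);
            s += len;
        }
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=regression");

        if (PQstatus(conn) == CONNECTION_OK)
            walk_multibyte(conn, (const unsigned char *) "abc");
        PQfinish(conn);
        return 0;
    }

Because the encoding is read once from the PGconn rather than from the environment, the loop stays multithread-safe and avoids the per-character getenv() cost discussed above.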
[ { "msg_contents": "\n> If I'm going to hack around in that code, one related question: what\n> should the deal be regarding variable interpolation into quoted\n> strings? Yes/No/Maybe?\n\nDefinitely no. If you want to specify a string with an embedded variable,\nthe expected syntax would, IMHO, be:\n\n'The table ' || :tabname || ' is empty'\n\nOf course that has the problem that psql would have to quote the :tabname\ncontent.\n\nAndreas\n", "msg_date": "Thu, 13 Jan 2000 10:21:57 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: Regress tests reveal *serious* psql bug " } ]
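Quoting the interpolated value is the crux of Andreas' objection. A minimal sketch of that step (not psql's actual code) -- doubling any embedded single quotes and wrapping the result in delimiters -- might look like this in C:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return a freshly malloc'd quoted literal: it's -> 'it''s' */
    static char *quote_literal(const char *val)
    {
        /* worst case: every byte doubled, plus two quotes and a NUL */
        char *out = malloc(strlen(val) * 2 + 3);
        char *p = out;

        if (out == NULL)
            return NULL;
        *p++ = '\'';
        for (; *val; val++)
        {
            if (*val == '\'')
                *p++ = '\'';            /* double embedded quotes */
            *p++ = *val;
        }
        *p++ = '\'';
        *p = '\0';
        return out;
    }

    int main(void)
    {
        char *lit = quote_literal("it's empty");

        printf("SELECT %s;\n", lit);    /* prints: SELECT 'it''s empty'; */
        free(lit);
        return 0;
    }

With a helper like this, the value of :tabname could be spliced safely into 'The table ' || :tabname || ' is empty'; without it, a value containing a quote would break the statement.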
[ { "msg_contents": "\n> In fact, I think it should be an error to reference a variable that is\n> not defined. This will catch accidental references too. If you\n> reference a variable that does not exist like :myvar, it passes the\n> literal :myvar to the backend.\n\nThat is what I would expect also.\n\nAndreas\n", "msg_date": "Thu, 13 Jan 2000 10:26:17 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: Regress tests reveal *serious* psql bug" } ]
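A hedged sketch of the behavior being agreed on here -- all names invented for illustration -- would be a lookup that fails loudly instead of passing the literal text through to the backend:

    #include <stdio.h>
    #include <string.h>

    struct variable { const char *name; const char *value; };

    /* toy symbol table standing in for psql's \set variables */
    static struct variable vars[] = { { "myvar", "42" }, { NULL, NULL } };

    static const char *get_variable(const char *name)
    {
        struct variable *v;

        for (v = vars; v->name; v++)
            if (strcmp(v->name, name) == 0)
                return v->value;
        return NULL;
    }

    int main(void)
    {
        const char *val = get_variable("nosuchvar");

        if (val == NULL)
        {
            /* error out instead of sending ":nosuchvar" to the backend */
            fprintf(stderr, "ERROR: variable :nosuchvar is not defined\n");
            return 1;
        }
        printf("%s\n", val);
        return 0;
    }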
[ { "msg_contents": "> SELECT arrtest.a[1:3],\n> arrtest.b[1:1][1:2][1:2],\n> arrtest.c[1:2], \n> arrtest.d[1:1][1:2]\n> FROM arrtest;\n\nSorry for the stupid question, but can somebody enlighten me?\nWhy was the \":\" used in the first place? I would expect to use a ','\nfor an array slice. No?\n\nAs in: select arrtest.a[1,1][1,2]\n(This is also what others use for array slices)\n\nAndreas\n", "msg_date": "Thu, 13 Jan 2000 10:33:43 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Re: Regress tests reveal *serious* psql bug " }, { "msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> Why was the \":\" used in the first place? I would expect to use a ','\n> for an array slice. No?\n\nI'd think a comma is a separator between subscripts for different\ndimensions, ie, a[1,2] would be equivalent to a[1][2]. It'd\ncertainly surprise *me* if it were interpreted as a slice indicator.\n\nIn any case, we're stuck with the notation now, I fear.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2000 11:40:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: Regress tests reveal *serious* psql bug " }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n\n> > SELECT arrtest.a[1:3],\n> > arrtest.b[1:1][1:2][1:2],\n> > arrtest.c[1:2],\n> > arrtest.d[1:1][1:2]\n> > FROM arrtest;\n>\n> Sorry for the stupid question, but can somebody enlighten me?\n> Why was the \":\" used in the first place? I would expect to use a ','\n> for an array slice. No?\n>\n> As in: select arrtest.a[1,1][1,2]\n> (This is also what others use for array slices)\n\nIn Fortran 90 (one of the few languages that has true arrays with a size\nas well as a shape) arrays are indexed as\n\n A(1:8:2, -2:10)\n\nwhich would mean the 2D array defined by rows (1,3,5,7) and columns\n(-2,...,10). So ',' is commonly used to separate dimensions, and it\nwould be confusing to suddenly use commas to define a range. And as\nFortran is pretty much the grand-daddy of all programming languages we\ncan't really go and change that ;-) Putting indexes in separate ['s is\njust a modern C'ism, because C has no real multi-dimensional arrays,\nonly pointer dereferencing.\n\nAdriaan (a Fortran programmer)\n\n", "msg_date": "Thu, 13 Jan 2000 16:42:31 +0000", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Re: Regress tests reveal *serious* psql bug" } ]
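For what it's worth, a toy C parser for one subscript makes the notation concrete: inside [...], ':' separates the bounds of a slice, while stacked [..][..] (or, in languages like Fortran, ',') separate dimensions. This is illustrative only, not the backend's parsing code.

    #include <stdio.h>
    #include <stdlib.h>

    /* Parse "lo" or "lo:hi"; returns 1 on success. */
    static int parse_subscript(const char *spec, int *lower, int *upper)
    {
        char *end;

        *lower = (int) strtol(spec, &end, 10);
        if (*end == ':')                    /* slice: lower:upper */
            *upper = (int) strtol(end + 1, &end, 10);
        else                                /* plain index */
            *upper = *lower;
        return *end == '\0';
    }

    int main(void)
    {
        int lo, hi;

        if (parse_subscript("1:3", &lo, &hi))
            printf("slice from %d to %d\n", lo, hi);    /* 1 to 3 */
        if (parse_subscript("2", &lo, &hi))
            printf("single index %d\n", lo);
        return 0;
    }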
[ { "msg_contents": "\n> Looking in the Informix manuals, I now see how they handle \n> complex outer\n> joins:\n> \n> \tSELECT *\t\n> \tFROM tab1, OUTER(tab2, tab3)\n> \tWHERE tab1.col1 = tab2.col1 AND\n> \t tab2.col1 = tab3.col1\n> \n> It does the tab2, tab3 join first, then _outer_ joins to tab1. \n> Interesting.\n\nOk, just to clarify:\n\nThis select gives at least one row for every row in tab1;\nif an inner join on tab2, tab3 does not give a match (tab1.col1 = tab2.col1),\nall columns of tab2 and tab3 are set to null for that row.\n\nIf that is what you said, I didn't understand it.\n\nAndreas\n", "msg_date": "Thu, 13 Jan 2000 10:52:49 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Informix and OUTER join syntax" } ]
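Andreas' clarification is exactly the semantics of a left outer join, which a toy nested loop over made-up data can spell out: every tab1 row is emitted at least once, and the tab2 side comes out as NULL when nothing matches.

    #include <stdio.h>

    int main(void)
    {
        int tab1[] = { 1, 2, 4 };       /* made-up col1 values */
        int tab2[] = { 2, 3, 4 };
        int i, j, matched;

        for (i = 0; i < 3; i++)
        {
            matched = 0;
            for (j = 0; j < 3; j++)
                if (tab1[i] == tab2[j])
                {
                    printf("tab1.col1=%d tab2.col1=%d\n", tab1[i], tab2[j]);
                    matched = 1;
                }
            if (!matched)       /* no inner match: emit NULLs */
                printf("tab1.col1=%d tab2.col1=NULL\n", tab1[i]);
        }
        return 0;
    }

The tab1 row with col1 = 1 has no partner in tab2, so it still comes out, with the tab2 columns null -- at least one output row per tab1 row, as described above.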
[ { "msg_contents": "\n> > Bruce Momjian <[email protected]> writes:\n> > > We currently do not use indexes to handle ORDER BY \n> because it is slower,\n> > \n> > Er, actually, we *do* use indexes for ORDER BY currently:\n> > \n> > regression=# explain select * from tenk1 order by unique1;\n> > NOTICE: QUERY PLAN:\n> > Index Scan using tenk1_unique1 on tenk1 (cost=760.00 \n> rows=10000 width=148)\n> > \n> > If you start psql with PGOPTIONS=\"-fi\" you can see that the \n> optimizer\n> > believes an explicit sort would be much slower:\n> > \n> > regression=# explain select * from tenk1 order by unique1;\n> > NOTICE: QUERY PLAN:\n> > Sort (cost=3233.91 rows=10000 width=148)\n> > -> Seq Scan on tenk1 (cost=563.00 rows=10000 width=148)\n> > \n> > but (at least on my machine) the explicit sort is marginally faster.\n> > Evidently, the cost estimate for an explicit sort is *way* too high.\n\nDoing the sort or the index access is always a tradeoff.\nFor interactive access the index is faster,\nfor batch mode the sort is faster.\n\nI would try to avoid a sort that would need more than a few\nhundred MB of sort disk space, even if I would eventually get my last\nrow faster. \nThe tradeoff is that you wait an hour before you get the first row,\nand block all those resources until you finish.\n\nThe index access gives the first rows fast, and does not block \nresources.\n\nIn mathematical terms I would give the sort an exponential cost\ncurve regarding sort size\n(probably also dependent on ~16 * available sort memory), \nand the index access a linear cost curve.\n\nAndreas\n", "msg_date": "Thu, 13 Jan 2000 11:38:10 +0100", "msg_from": "Zeugswetter Andreas SB <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] TODO list updated" } ]
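To make the tradeoff tangible, here is a toy cost model along the lines Andreas sketches -- the constants are mine, not the optimizer's: index access charged linearly per row fetched, and an explicit sort charged an up-front cost that climbs steeply once the input outgrows sort memory.

    #include <stdio.h>
    #include <math.h>

    static double index_cost(double rows)
    {
        return 4.0 * rows;                  /* linear; first rows come cheap */
    }

    static double sort_cost(double rows, double mem_rows)
    {
        double c = 0.1 * rows * log(rows + 1.0);    /* in-memory comparisons */

        if (rows > mem_rows)                /* spilling to disk hurts */
            c *= rows / mem_rows;
        return c;
    }

    int main(void)
    {
        double n;

        for (n = 1e3; n <= 1e7; n *= 10)
            printf("%10.0f rows: index %14.0f  sort %14.0f\n",
                   n, index_cost(n), sort_cost(n, 1e5));
        return 0;
    }

Under this toy model the sort wins while everything fits in sort memory, but once the input far exceeds it the spill penalty dominates and the linear index curve wins -- and the index delivers its first row immediately either way.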
[ { "msg_contents": "Bruce, could you please add\n\n* fix array handling for ECPG\n\nto our TODO list for 7.0?\n\nThanks.\n\nRight now you can insert an array of integers as single integers but not as an\narray. And you cannot insert an array of structs as single structs.\n\nThis has to be cleaned up.\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Thu, 13 Jan 2000 12:26:18 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "Another item for TODO list" }, { "msg_contents": "> Bruce, could you please add\n> \n> * fix array handling for ECPG\n> \n> to our TODO list for 7.0?\n> \n> Thanks.\n> \n> Right now you can insert an array of integers as single integers but not as an\n> array. And you cannot insert an array of structs as single structs.\n> \n\nAdded to TODO under CLIENTS.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 13 Jan 2000 08:18:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Another item for TODO list" } ]
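One concrete piece of what "insert as an array" means: the backend expects the whole array as a single literal such as {1,2,3}. A small, illustrative helper (not part of ECPG) that formats a C int array into that form:

    #include <stdio.h>

    /* Format a[0..n-1] into buf as a PostgreSQL array literal: {1,2,3} */
    static void format_int_array(char *buf, int bufsize, const int *a, int n)
    {
        int used = snprintf(buf, bufsize, "{");
        int i;

        for (i = 0; i < n && used < bufsize; i++)
            used += snprintf(buf + used, bufsize - used,
                             i ? ",%d" : "%d", a[i]);
        if (used < bufsize)
            snprintf(buf + used, bufsize - used, "}");
    }

    int main(void)
    {
        int a[] = { 1, 2, 3 };
        char buf[64];

        format_int_array(buf, sizeof buf, a, 3);
        printf("INSERT INTO arrtest (a) VALUES ('%s');\n", buf);
        return 0;
    }

A fixed ECPG would presumably do this marshalling itself when a host array is bound to an array column, instead of forcing element-by-element inserts.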
[ { "msg_contents": "We had a discussion about this already, but I think I came up with a\nsolution that works for everyone.\n\n[proposal]\n\n* If you configure with --enable-multibyte then you build a server and\nclients that are multibyte enabled (as defined by the MULTIBYTE symbol).\nIf you don't use multibyte functionality then nothing will change, so this\nwould be the preferred option for package maintainers.\n\n* If you configure with --enable-multibyte=XXX then XXX will be initdb's\ndefault encoding, but XXX won't be used anywhere else. This option is for\nthose who insist on it; the preferred way to go about this would be the\nrespective option for initdb itself.\n\nI'd leave the --with-mb option in there but hidden and make the new option\nthe published interface. Any protests?\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 12:30:40 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "--enable-multibyte" }, { "msg_contents": "Peter,\n\n> * If you configure with --enable-multibyte then you build a server and\n> clients that are multibyte enabled (as defined by the MULTIBYTE symbol).\n> If you don't use multibyte functionality then nothing will change, so this\n> would be the preferred option for package maintainers.\n\nThis actually would mean that the default encoding is set to\nSQL_ASCII, I guess. Since the encoding column in pg_database must be\nfilled with some value, we need to pick a default value for it\nanyway. Any encoding could be a candidate for the default; probably\nSQL_ASCII is a good choice, IMHO. \n--\nTatsuo Ishii\n", "msg_date": "Fri, 14 Jan 2000 10:57:26 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] --enable-multibyte" } ]
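A hedged sketch of how the proposal could look at the code level -- the macro names here are illustrative; only the MULTIBYTE symbol and the SQL_ASCII fallback come from the thread itself:

    #include <stdio.h>

    #define MULTIBYTE                       /* --enable-multibyte */
    /* #define DEFAULT_ENCODING "EUC_JP" */ /* --enable-multibyte=EUC_JP */

    /* What initdb would record in pg_database's encoding column. */
    static const char *initdb_encoding(const char *initdb_opt)
    {
    #ifdef MULTIBYTE
        if (initdb_opt)                 /* initdb's own option wins */
            return initdb_opt;
    #ifdef DEFAULT_ENCODING
        return DEFAULT_ENCODING;        /* compiled-in default */
    #else
        return "SQL_ASCII";             /* the column still needs a value */
    #endif
    #else
        return "SQL_ASCII";
    #endif
    }

    int main(void)
    {
        printf("default encoding: %s\n", initdb_encoding(NULL));
        return 0;
    }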
[ { "msg_contents": "Dear [HACKERS] and [INTERFACES]\n\n\nWe are having some trouble getting an applet that accesses PostgreSQL to\nrun under Netscape. The following is a present status summary:\n\nIf anyone can help it would be really terrific!\n\nWhen executing the applet, the Netscape web server logs show\nconnections or calls being made to different classes and jar files. The\npostgresql.jar file is found correctly through the classpath (identified\nin the web server start file in /Netscape/Suitespot/https-default/start\nref the classpath), but it then seems to look for errors.class,\nerrors_EN.class, and errors_EN_US.class, none of which can be found. We\nexpanded the postgresql.jar file, and those classes weren't part of the\ntree either.\n\n This appears to be either -\n For some reason normal classes (not part of the postgres\npackage) are being looked for as sub-classes in the postgres set\n OR\n Those classes are meant to be postgres error classes that\nweren't compiled or included in the postgresql.jar file. Then again,\nafter looking at the README and Makefiles for the postgresql.jar file,\nwe would tend to agree that the file is being built with all the classes\nnormally needed. We also looked at http://www.retep.org.uk/postgres/\n(faq), and it seems to suggest that building the jar file with the right\nclasses for postgresql is fairly straightforward.\n Hmmmm.....\n\n The applet runs correctly until it tries to execute the following\nline:\n==> Class.classDriver = ClassforName(\"postgresql.Driver\");\n\nIt then enters that try/catch error process and appears to try to find\nthe missing classes.\n\n\n When compiling the applet no error messages like \"class not found\"\nare flagged. But, there are execution errors when we compiled with\njdbc6_5-1_2:\n\n==>getConnection ERROR 1: The postgresql.jar file does not contain the\ncorrect JDBC classes for this JVM. Try rebuilding. If that fails, try\nforcing the version supplying it to the command line using the argument\n-Djava.version=1.1 or -Djava.version=1.2\n==>Exception thrown was java.lang.ClassNotFoundException:\npostgresql.jdbc1.Connection.\n\nSounds a lot like the problems that we are having trying to run under\nNetscape!\n\nAllan tried to use the command line argument they talked about but can't\nfind the command line\nin the Visual Cafe environment even after searching the help files......\n\nPerhaps postgresql.jar is a 1.2 Java driver.\n\nWe've got a business trip and we are leaving for 2.5 to 3 weeks.\nShould be fun -- a few eastern European countries like Uzbekistan, Kyrghyz\nRepublic, and Kazakhstan.\n\nSincerely,\n\nAllan in Belgium\nand\nScott in Germany\n\n\n\n", "msg_date": "Thu, 13 Jan 2000 14:16:42 +0100", "msg_from": "\"Allan Huffman\" <[email protected]>", "msg_from_op": true, "msg_subject": "[HACK]-[INTERFACE] jdbc/postgresql.jar execution errors" } ]
[ { "msg_contents": "Replies prefixed with PM:\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Allan Huffman [mailto:[email protected]]\nSent: Thursday, January 13, 2000 1:17 PM\nTo: [email protected]; [email protected];\[email protected]\nSubject: [HACKERS] [HACK]-[INTERFACE] jdbc/postgresql.jar execution\nerrors\n\n\nDear [HACKERS] and [INTERFACES]\n\n\nWe are having some trouble getting an applet that accesses PostgreSQL to\nrun under Netscape. The following is a present status summary:\n\nIf anyone can help it would be really terrific!\n\nWhen executing the applet, the Netscape web server logs show\nconnections or calls being made to different classes and jar files. The\npostgresql.jar file is found correctly through the classpath (identified\nin the web server start file in /Netscape/Suitespot/https-default/start\nref the classpath), but it then seems to look for errors.class,\nerrors_EN.class, and errors_EN_US.class, none of which can be found. We\nexpanded the postgresql.jar file, and those classes weren't part of the\ntree either.\n\nPM: It should be looking for errors.properties, errors_en.properties\netc... This sounds like a bug in Netscape's VM...\n\n This appears to be either -\n For some reason normal classes (not part of the postgres\npackage) are being looked for as sub-classes in the postgres set\n OR\n Those classes are meant to be postgres error classes that\nweren't compiled or included in the postgresql.jar file. Then again,\nafter looking at the README and Makefiles for the postgresql.jar file,\nwe would tend to agree that the file is being built with all the classes\nnormally needed.\n\nPM: The files in question are actually plain text files. The standard\nEnglish one is errors.properties, which should be in the postgresql\ndirectory. This should be the _last_ one looked for. The others are done\nin reverse order, and should be parsed as errors_en_us.properties then\nerrors_en.properties. This allows us to have foreign language support in\nthe error messages (we have English & French, and 7.0 has Dutch).\n\n We also looked at http://www.retep.org.uk/postgres/\n(faq), and it seems to suggest that building the jar file with the right\nclasses for postgresql is fairly straightforward.\n Hmmmm.....\n\n The applet runs correctly until it tries to execute the following\nline:\n==> Class.classDriver = ClassforName(\"postgresql.Driver\");\n\nPM: It should be Class.forName but I presume this is a typo...\n\nIt then enters that try/catch error process and appears to try to find\nthe missing classes.\n\nPM: What version of Netscape are you using? So far, only IE has had\nproblems, but the applet users contacting me haven't seen this\nproblem.\n\n When compiling the applet no error messages like \"class not found\"\nare flagged. But, there are execution errors when we compiled with\njdbc6_5-1_2:\n\n==>getConnection ERROR 1: The postgresql.jar file does not contain the\ncorrect JDBC classes for this JVM. Try rebuilding. If that fails, try\nforcing the version supplying it to the command line using the argument\n-Djava.version=1.1 or -Djava.version=1.2\n\nPM: Compile the postgresql.jar file using JDK1.1.x (x=6 or 7) and not\n1.2.
Your copy of Netscape is using the earlier JVM, and the class file\nformat is slightly different, as is the JDBC API.\n\n==>Exception thrown was java.lang.ClassNotFoundException:\npostgresql.jdbc1.Connection.\n\nPM: Yes, jdbc1 is for JDK1.1.x, while jdbc2 is for JDK1.2 (aka Java2)\nand later\n\nSounds a lot like the problems that we are having trying to run under\nNetscape!\n\nAllan tried to use the command line argument they talked about but can't\nfind the command line\nin the Visual Cafe environment even after searching the help files......\n\nPerhaps postgresql.jar is a 1.2 Java driver.\n\nPM: It depends on how you compile it. There are actually two drivers in\nthere.\n\nWe've got a business trip and we are leaving for 2.5 to 3 weeks.\nShould be fun -- a few eastern European countries like Uzbekistan, Kyrghyz\nRepublic, and Kazakhstan.\n\nSincerely,\n\nAllan in Belgium\nand\nScott in Germany\n\n\n\n\n************\n", "msg_date": "Thu, 13 Jan 2000 14:34:05 -0000", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] [HACK]-[INTERFACE] jdbc/postgresql.jar execution errors" } ]
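The lookup order Peter describes can be sketched in plain C -- the file names match the ones in the driver, but the driver itself resolves them as resources inside the jar rather than with fopen():

    #include <stdio.h>

    /* Try the most specific locale file first, plain errors.properties last. */
    static FILE *open_error_properties(const char *lang, const char *country)
    {
        char name[128];
        FILE *fp;

        if (lang && country)
        {
            snprintf(name, sizeof name, "errors_%s_%s.properties",
                     lang, country);
            if ((fp = fopen(name, "r")) != NULL)
                return fp;          /* e.g. errors_en_us.properties */
        }
        if (lang)
        {
            snprintf(name, sizeof name, "errors_%s.properties", lang);
            if ((fp = fopen(name, "r")) != NULL)
                return fp;          /* e.g. errors_en.properties */
        }
        return fopen("errors.properties", "r");     /* final fallback */
    }

    int main(void)
    {
        FILE *fp = open_error_properties("en", "us");

        if (fp)
            fclose(fp);
        return 0;
    }

A JVM that instead tries errors_EN.class and errors_EN_US.class, as in the report above, appears to be skipping the .properties step of that fallback chain.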
[ { "msg_contents": "I just got a reply from the original author of the patch I was talking\nabout:\n\n----- Forwarded message from Rene Hogendoorn <[email protected]> -----\n\nFrom: Rene Hogendoorn <[email protected]>\nDate: Thu, 13 Jan 2000 14:23:08 +0100 (MET)\nTo: Michael Meskes <[email protected]>\nSubject: Re: ECPG patches\n\n TL> Michael Meskes <[email protected]> writes:\n >> <fetch statement> ::= FETCH [ [ <fetch orientation> ] FROM ]\n >> <cursor name> INTO <fetch target list>\n\n >> To me this seems to say that FROM is just optional. Okay, if I\n >> make it optional in our parser?\n\n TL> Careful --- notice that FROM is only optional if you *also*\n TL> omit all the preceding optional clauses. Otherwise there will\n TL> be a reduce conflict that you could only resolve by removing\n TL> all of FETCH's secondary keywords from the ColId list. I\n TL> don't think that would be an acceptable tradeoff.\n\nThe reduce conflict is caused by the /* EMPTY */ alternatives of\n'opt_direction', 'fetch_how_many' and 'opt_portal_name'. Considering\nthe SQL92 syntax, 'opt_portalname' is wrong; the portalname is not\noptional, but required. Requiring a portalname also solves the problem\nof 'EXEC SQL FETCH;' being a valid statement.\nFurthermore, at least INFORMIX supports 'FETCH NEXT t1;'. So I strongly\nsuggest NOT requiring 'FROM'.\n...\nRegards\nRene\n-- \n\nR. A. Hogendoorn E-mail: [email protected]\nInformation and Communication Technology Division Tel. +31-527-24-8367 \nNational Aerospace Laboratory, The Netherlands Fax. +31-527-24-8210 \n\n----- End forwarded message -----\n\nMichael\n-- \nMichael Meskes | Go SF 49ers!\nTh.-Heuss-Str. 61, D-41812 Erkelenz | Go Rhein Fire!\nTel.: (+49) 2431/72651 | Use Debian GNU/Linux!\nEmail: [email protected] | Use PostgreSQL!\n", "msg_date": "Thu, 13 Jan 2000 15:39:16 +0100", "msg_from": "Michael Meskes <[email protected]>", "msg_from_op": true, "msg_subject": "FETCH statement again" } ]
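To see why requiring the portal name resolves the conflict, consider a toy recursive-descent version of the rule Rene proposes -- FETCH [ direction ] [ FROM ] cursor_name -- with the count clause omitted and token handling simplified; this is an illustration, not the actual yacc grammar:

    #include <stdio.h>
    #include <string.h>

    static const char *toks[8];
    static int pos, ntoks;

    static int peek_is(const char *kw)
    {
        return pos < ntoks && strcmp(toks[pos], kw) == 0;
    }

    static int parse_fetch(void)
    {
        if (!peek_is("FETCH"))
            return 0;
        pos++;
        if (peek_is("NEXT") || peek_is("PRIOR"))    /* optional direction */
            pos++;
        if (peek_is("FROM"))                        /* FROM is optional */
            pos++;
        if (pos >= ntoks)                           /* name is required, */
            return 0;                               /* so 'FETCH;' is rejected */
        printf("fetch from cursor '%s'\n", toks[pos++]);
        return 1;
    }

    int main(void)
    {
        static const char *stmt[] = { "FETCH", "NEXT", "t1" }; /* Informix style */
        int i;

        for (i = 0; i < 3; i++)
            toks[i] = stmt[i];
        ntoks = 3;
        pos = 0;
        return parse_fetch() ? 0 : 1;
    }

Because something must always follow the optional clauses, the parser never has to decide whether an empty tail is a missing portal name -- which is exactly the /* EMPTY */ ambiguity described above.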
[ { "msg_contents": "This message was sent from Geocrawler.com by \"Adam Walczykiewicz\" <[email protected]>\nBe sure to reply to that address.\n\nHi!\nI've compiled and installed PostgreSQL v.6.5.2.\n(I use SuSe Linux 6.2.). \nNow I want to replace it with v.6.5.3.\nThere is \"make clean\" statement in Makefile.\nBut where to find \"make uninstall\"?\nIs PostgreSQL installed only in /usr/local/pgsql\n(I used default Dir) and it's enough to \nrm -f from that directory(?!).\n\n(I would like to make PostgreSQL.RPM for SuSe \nLinux. That's why I have to know how to \nuninstall...)\nThanks for any help!\nAdam([email protected])\n\n\n\n\nGeocrawler.com - The Knowledge Archive\n", "msg_date": "Thu, 13 Jan 2000 07:16:41 -0800", "msg_from": "\"Adam Walczykiewicz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Uninstalling PostgreSQL ??!!" }, { "msg_contents": "On Thu, 13 Jan 2000, Adam Walczykiewicz wrote:\n\n> This message was sent from Geocrawler.com by \"Adam Walczykiewicz\" <[email protected]>\n> Be sure to reply to that address.\n> \n> Hi!\n> I've compiled and installed PostgreSQL v.6.5.2.\n> (I use SuSe Linux 6.2.). \n> Now I want to replace it with v.6.5.3.\n> There is \"make clean\" statement in Makefile.\n> But where to find \"make uninstall\"?\n\nOne more reason to move to automake, ey?\n\n> Is PostgreSQL installed only in /usr/local/pgsql\n> (I used default Dir) and it's enough to \n> rm -f from that directory(?!).\n\nYes.\n\n> \n> (I would like to make PostgreSQL.RPM for SuSe \n> Linux. That's why I have to know how to \n> uninstall...)\n\nI think you have a misconception about RPMs. Your easiest bet would be to\nstart with the RedHat RPMs and adjust what needs adjusting.\n\n\n-- \nPeter Eisentraut Sernanders vaeg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n", "msg_date": "Thu, 13 Jan 2000 16:38:17 +0100 (MET)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Uninstalling PostgreSQL ??!!" }, { "msg_contents": "Does the PostgreSQL project officially only support Red Hat - I sure would like to see SuSE supported and\nmentioned instead of just Red Hat. I'd be happy to generate SuSE RPMs for distribution via the PostgreSQL\nsite.\n\nI thought I saw some discussion about this issue earlier.\n\nWho generates the SuSE rpms distributed with the SuSE release. These are of no value to me because I wish\nto use /usr/local/pgsql for PostgreSQL.\n\nSteve\n\n\nPeter Eisentraut wrote:\n\n> On Thu, 13 Jan 2000, Adam Walczykiewicz wrote:\n>\n> > This message was sent from Geocrawler.com by \"Adam Walczykiewicz\" <[email protected]>\n> > Be sure to reply to that address.\n> >\n> > Hi!\n> > I've compiled and installed PostgreSQL v.6.5.2.\n> > (I use SuSe Linux 6.2.).\n> > Now I want to replace it with v.6.5.3.\n> > There is \"make clean\" statement in Makefile.\n> > But where to find \"make uninstall\"?\n>\n> One more reason to move to automake, ey?\n>\n> > Is PostgreSQL installed only in /usr/local/pgsql\n> > (I used default Dir) and it's enough to\n> > rm -f from that directory(?!).\n>\n> Yes.\n>\n> >\n> > (I would like to make PostgreSQL.RPM for SuSe\n> > Linux. That's why I have to know how to\n> > uninstall...)\n>\n> I think you have a misconception about RPMs.
Your easiest bet would be to\n> start with the RedHat RPMs and adjust what needs adjusting.\n>\n> --\n> Peter Eisentraut Sernanders vaeg 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n>\n> ************\n\n", "msg_date": "Fri, 14 Jan 2000 11:20:30 -0800", "msg_from": "Stephen Birch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Uninstalling PostgreSQL ??!!" }, { "msg_contents": "Hi Steve,\n\nThe PostgreSQL project is willing to support any OS it\ncan. We have Redhat RPMS on our site, because someone\n(Lamar Owen) took the time to do it, then submitted them.\n\nYou may do the same. Ideally we'd like to have a package/bin\nfor each OS.\n\nJeff\n\n===================================================================\n So long as the Universe had a beginning, we can suppose it had a \ncreator, but if the Universe is completely self-contained, having \nno boundary or edge, it would neither be created nor destroyed.\n It would simply be.\n===================================================================\n\n\n", "msg_date": "Fri, 14 Jan 2000 16:57:52 -0400 (AST)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Uninstalling PostgreSQL ??!!" }, { "msg_contents": "Stephen Birch wrote:\n >Does the PostgreSQL project officially only support Red Hat - I sure would\n >like to see SuSE supported and\n >mentioned instead of just Red Hat. I'd be happy to generate SuSE RPMs for\n >distribution via the PostgreSQL\n >site.\n >\n >I thought I saw some discussion about this issue earlier.\n >\n >Who generates the SuSE rpms distributed with the SuSE release. These are of\n > no value to me because I wish\n >to use /usr/local/pgsql for PostgreSQL.\n \nIf you generate RPMs for SUSE, rather than just for your own use, you will\nsurely need to conform to their policy. I should be most surprised if\nthat allowed their RPMs to use /usr/local, which should be for the local\nadministrator rather than for the distribution.\n\nThat is, of course, why Red Hat's RPMs and Debian's packages relocate\nPostgreSQL into directories that conform to those distributions'\npolicies.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"For the LORD is good; his mercy is everlasting; and \n his truth endureth to all generations.\" \n Psalms 100:5 \n\n\n", "msg_date": "Fri, 14 Jan 2000 21:56:18 +0000", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Uninstalling PostgreSQL ??!! " }, { "msg_contents": "Oliver Elphick wrote:\n> \n> Stephen Birch wrote:\n> >Does the PostgreSQL project officially only support Red Hat - I sure would\n> >like to see SuSE supported and\n> >mentioned instead of just Red Hat. I'd be happy to generate SuSE RPMs for\n> >distribution via the PostgreSQL\n> >site.\n> >\n> >I thought I saw some discussion about this issue earlier.\n> >\n> >Who generates the SuSE rpms distributed with the SuSE release. These are of\n\nSuSE does their own -- according to rpmfind.net, they've split things up\nworse than RedHat ever did. They also put things in different places\nthan RedHat does, which, of course, they are certainly free to do.\n\n> If you generate RPMs for SUSE, rather than just for your own use, you will\n> surely need to conform to their policy.
I should be most surprised if\n> that allowed their RPMs to use /usr/local, which should be for the local\n> administrator rather than for the distribution.\n> \n> That is, of course, why Red Hat's RPMs and Debian's packages relocate\n> PostgreSQL into directories that conform to those distributions'\n> policies.\n\nThank you, Oliver, for explaining that. I really wish that the various\nLinux distributions could standardize some things -- it is really\naggravating as a packager of RPMS -- SuSE has one way and place to store\nthings, RedHat has another, Caldera has yet another. Which is why I have\nlabeled the RPMS I have built as _RedHat_ -- that's what I've got, so\nthat's what I am able to support.\n\nI just received documentation from a nice chap who has successfully\ninstalled and gotten the JDBC client in the RPM distribution working --\nman, I'm very grateful for documentation!\n\nI know that there have been a couple of users who have gotten the RedHat\nRPM's to be usable under SuSE -- I just need more information -- in\nparticular, where does SuSE like to put things? What environment\nvariables and RPM macros are defined under SuSE during the build, so\nthat conditional logic can be put in the source RPM -- having a single\nsource RPM is a big plus, because then everybody can build from a common\nknowledge base.\n\nAccording to rpmfind.net, the SuSE RPM's are very different from the\nRedHat ones -- which I regret. However, I picked up the maintenance of\nan existing RPM so that existing users wouldn't be drastically surprised\nat the changes, rather than me building a whole new set of RPM's.\n\nFeel free to look at the source RPM for RedHat, and look carefully in my\nREADME.rpm as to package rationale. Feel free to take what I've done\nand modify it for SuSE. And, if you're going to do the above, please\ndocument it so that others can understand it. I would like to see a\nsingle RPM base that worked for all the RPM-based distributions -- sure\nwould make support easier!\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 14 Jan 2000 17:52:02 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Uninstalling PostgreSQL ??!!" }, { "msg_contents": "Jeff MacDonald wrote:\n> The PostgreSQL project is willing to support any OS it\n> can. We have Redhat RPMS on our site, because someone\n> (Lamar Owen) took the time to do it, then submitted them.\n\nThanks, Jeff.\n\n> You may do the same. Ideally we'd like to have a package/bin\n> for each OS.\n\nIf anyone wants to pick up RPM's for SuSE or Caldera (or any other\nRPM-based distribution), I'll be perfectly happy to help them get\nstarted.\n\nI would like any other RPM's to be as close as possible to the RedHat\nones to simplify support, but that's not absolutely necessary.
I'll\nalso offer any of the spec file stuff and upgrading scripts to them to\nexpedite their efforts -- after all, I was able to take Oliver's Debian\nupgrade scripts and modify them to work very nicely under RedHat without\na great deal of effort (once I understood them, that is).\n\nThe documentation of the packaging differences in the various\ndistributions would be very helpful to have on the PostgreSQL site --\nI've put the RPM readme on the ftp site under bindist/RPM (Jeff, if\nSuSE and Caldera folks get this running, we might want to rename the\nbindist/RPM dir to bindist/RedHat, as those RPM's and that documentation\nare pretty RedHat-specific), and I'd like to see the Debian documentation\n(simple docs -- where is PGDATA, what is the name and parameters of the\nstartup script, etc) placed on the PostgreSQL site as well -- Oliver has\nalready mentioned that the standard download site for the Debian\npackages is the Debian distribution's site, so ftp.postgresql.org may\nnot need the .deb's online. Just a thought.\n\nOf course, those are just the Linux packages. There are many other\nsystems that have package managers for which it would be useful to have\nprebuilt binaries available for download.\n\nI am bothered more than a little by the thought that the RPM-based Linux\ndistributions are so different that a single RPM set can't work with all\nof them equally well. Of course, that can happen under open source --\nthe distributors are certainly free to do it their own way.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 14 Jan 2000 18:18:08 -0500", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Uninstalling PostgreSQL ??!!" } ]