[
{
"msg_contents": "> Bruce - \n> Did you see the message about the CVS repository moving? It's changed\n> from /usr/local/cvsroot/pgsql to /home/projects/pgsql/cvsroot/pgsql\n> \n> I was just looking for the magic find/sed script I know I've seen\n> somewhere for changing all the CVS/Repository files in a checked out\n> tree...\n\nOK, now I am getting:\n\n#$ pn aspg cvs -q -z 3 -d :pserver:[email protected]:/usr/local/cvsroot login \n(Logging in to [email protected])\nCVS password: \ncvs [login aborted]: authorization failed: server hub.org rejected access\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 19 May 2000 14:02:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CVS commit broken"
}
]
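The first message mentions "the magic find/sed script" for pointing an already-checked-out tree at the moved repository. A minimal sketch of the same idea in Python, assuming the old and new prefixes quoted in the thread (the tree layout and helper name are illustrative, not from the list):

```python
import os

# Old and new repository prefixes, as announced in the thread (assumed literal).
OLD = "/usr/local/cvsroot/pgsql"
NEW = "/home/projects/pgsql/cvsroot/pgsql"

def rewrite_repository_files(tree):
    """Walk a checked-out tree and rewrite every CVS/Repository file
    that still points at the old repository path."""
    for dirpath, dirnames, filenames in os.walk(tree):
        if os.path.basename(dirpath) == "CVS" and "Repository" in filenames:
            path = os.path.join(dirpath, "Repository")
            with open(path) as f:
                contents = f.read()
            if OLD in contents:
                with open(path, "w") as f:
                    f.write(contents.replace(OLD, NEW))
```

This only rewrites `CVS/Repository`; the authorization failure in the message is a separate server-side issue the script cannot fix.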
[
{
"msg_contents": "\nJust a heads up, in case anyone is interested ...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n---------- Forwarded message ----------\nDate: Fri, 19 May 2000 11:56:59 -0700\nFrom: Joe Hellerstein <[email protected]>\nTo: [email protected], [email protected], [email protected]\nSubject: Re: GiST, PostgreSQL, etc.\n\nPS: Andy Dong's n-dimensional R-tree opclasses are available at\nhttp://best.me.berkeley.edu/~adong/rtree/.\n\nJoe Hellerstein\n\nJoe Hellerstein wrote:\n\n> Hi folks:\n> Tim Keitt contacted my group and brought your open source GIS\n> project to our attention. Sounds great.\n>\n> A couple of notes that may be of use:\n>\n> - PostgreSQL ships with GiST as a built-in access method, though I\n> doubt anybody has tested it much. I wrote that code, and with the help\n> of Marc Fournier got it patched into the PostgreSQL release. As\n> somebody mentioned, at http://gist.cs.berkeley.edu/pggist/ you can\n> download code for various initial prototype GiST \"functions\" (as in\n> \"create function\"), as well as DDL scripts. To be honest, this was the\n> very first attempt at delivering a GiST implementation, and it's not\n> wonderfully pretty. I don't think any PostgreSQL users have exercised\n> it either, because it's kind of exotic to try out new index types. I\n> also haven't tested the latest versions of PostgreSQL to see if this\n> stuff still works with it.\n>\n> - Yes, R-trees generalize trivially to >2 dimensions. Andy Dong, a\n> student in my grad DB class some years ago, wrote \"opclass\" code for\n> Illustra (commercial version of postgres available from Informix) that\n> extended its R-trees to multiple dimensions. \"Back-porting\" these\n> opclasses to Postgres should be pretty easy. 
This is my recommendation\n> to you as the simplest -- if not the highest-performance -- way to go\n> for your project.\n>\n> - My students did a much more involved standalone C++ implementation\n> called libgist, which is available at\n> http://gist.cs.berkeley.edu/libgist-2.0/index.html. This does not\n> integrate with PostgreSQL -- it's a standalone gist library that runs\n> over a file system, and can be linked into applications. It includes\n> many new features though, including better interfaces, support for\n> near-neighbor searches, as well as more efficient variants of R-trees\n> (e.g. R*-trees, and better variants) that generalize to however many\n> dimensions you want. It also comes with a powerful, graphical index\n> tuning tool called amdb.\n>\n> - None of us here have the bandwidth to follow your discussion list,\n> and we do not support the PostgreSQL gist implementation. I'm willing to\n> answer general design questions about it, but since I haven't touched\n> the code in about 4 years, I won't be able to help with bugs.\n>\n> - We do support libgist and amdb.\n>\n> - We've done a bunch of research on indexing multiple dimensions.\n> Please see http://gist.cs.berkeley.edu for papers, discussion, the\n> freeware, etc. Please email [email protected] with specific\n> questions.\n>\n> Regards,\n>\n> Joe Hellerstein\n>\n> --\n>\n> Joseph M. Hellerstein\n> Associate Professor\n> Computer Science Division\n> UC Berkeley\n> http://www.cs.berkeley.edu/~jmh\n\n\n",
"msg_date": "Fri, 19 May 2000 16:01:28 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "n-dimensional r-tree opclasses ..."
}
]
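Hellerstein's point that "R-trees generalize trivially to >2 dimensions" rests on the fact that the bounding-rectangle predicates an R-tree needs are per-axis interval tests, so the same code works for any number of axes. A small illustrative sketch (not taken from libgist or Andy Dong's opclasses):

```python
def overlaps(a, b):
    """True when two n-dimensional bounding boxes intersect.
    A box is a list of (lo, hi) intervals, one per dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def contains(outer, inner):
    """True when `outer` fully contains `inner`, axis by axis."""
    return all(lo1 <= lo2 and hi2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(outer, inner))
```

An n-dimensional R-tree opclass is essentially these predicates plus a penalty/split policy; the interval comparisons themselves never need to know the dimensionality.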
[
{
"msg_contents": "If any user-created index is on a system column (eg, OID),\n7.0 pg_dump fails with \"parseNumericArray: bogus number\".\n\nMea culpa, mea maxima culpa --- some well-intentioned error checking\ncode was a bit too tight. (But how'd this get through beta with no\none noticing? It's been broken since January...)\n\nThe attached patch is committed for 7.0.1, but you will need to apply\nit by hand if you have such indexes and you want to make a dump before\n7.0.1 comes out. (Alternatively, drop the indexes and remake them\nby hand later.)\n\nThanks to Kyle Bateman for the bug report.\n\n\t\t\tregards, tom lane\n\n*** src/bin/pg_dump/common.c.orig\tWed Apr 12 13:16:14 2000\n--- src/bin/pg_dump/common.c\tFri May 19 19:00:00 2000\n***************\n*** 190,196 ****\n \t\t}\n \t\telse\n \t\t{\n! \t\t\tif (!isdigit(s) || j >= sizeof(temp) - 1)\n \t\t\t{\n \t\t\t\tfprintf(stderr, \"parseNumericArray: bogus number\\n\");\n \t\t\t\texit(2);\n--- 190,196 ----\n \t\t}\n \t\telse\n \t\t{\n! \t\t\tif (!(isdigit(s) || s == '-') || j >= sizeof(temp) - 1)\n \t\t\t{\n \t\t\t\tfprintf(stderr, \"parseNumericArray: bogus number\\n\");\n \t\t\t\texit(2);\n",
"msg_date": "Fri, 19 May 2000 19:16:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sigh: 7.0 pg_dump fails if user table has index on OID"
}
]
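The one-character patch above widens the digit test to accept `-` because system columns carry negative attribute numbers (OID historically -2), so an index on OID puts a negative number into the dumped array and the too-strict check rejected it. A loose Python model of the before/after behavior (illustrative, not the actual pg_dump code):

```python
def parse_numeric_array(s, allow_minus=True):
    """Split a space-separated numeric array string like "1 -2 3" into
    integers, rejecting any character that is not a digit (or, with the
    fix applied, a leading minus sign)."""
    out, temp = [], ""
    for ch in s:
        if ch in " \t":
            if temp:
                out.append(int(temp))
                temp = ""
        elif ch.isdigit() or (allow_minus and ch == "-"):
            temp += ch
        else:
            raise ValueError("parseNumericArray: bogus number")
    if temp:
        out.append(int(temp))
    return out
```

With `allow_minus=False` (the pre-patch check) the negative attribute number of a system column trips the "bogus number" error, which is exactly the reported failure.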
[
{
"msg_contents": "Hello guys,\n\nAs a personal project, I have written an Excel-compatible\nspreadsheet as a component of a 3D charting and graphing\napplication. It uses a Phong-style renderer with texture mapping,\nbump mapping, Z-buffer shadow mapping, non-refractive\ntransparency and antialiasing to produce some very nice looking\nimages. Anyway, a component of the data analysis tool is, as\nmentioned, a spreadsheet. I've been able to implement around 300\nof the 320 Excel-style functions and was wondering if they would\nbe useful to PostgreSQL. Here's the list:\n\nABS\nACCRINT\nACCRINTM\nACOS\nACOSH\nADDRESS\nAMORDEGRC\nAMORLINC\nAND\nAREAS\nASIN\nASINH\nATAN\nATAN2\nATANH\nAVEDEV\nAVERAGE\nBESSELI\nBESSELJ\nBESSELK\nBESSELY\nBETADIST\nBETAINV\nBIN2DEC\nBIN2HEX\nBIN2OCT\nBINOMDIST\nCALL\nCEILING\nCELL\nCHAR\nCHIDIST\nCHIINV\nCHITEST\nCHOOSE\nCLEAN\nCODE\nCOLUMN\nCOLUMNS\nCOMBIN\nCOMPLEX\nCONCATENATE\nCONFIDENCE\nCONVERT\nCORREL\nCOS\nCOSH\nCOUNT\nCOUNTA\nCOUNTBLANK\nCOUNTIF\nCOUPDAYS\nCOUPDAYSBS\nCOUPDAYSNC\nCOUPNCD\nCOUPNUM\nCOUPPCD\nCOVAR\nCRITBINOM\nCUMIPMT\nCUMPRINC\nDATE\nDATEVALUE\nDAVERAGE\nDAY\nDAYS360\nDB\nDCOUNT\nDCOUNTA\nDDB\nDEC2BIN\nDEC2HEX\nDEC2OCT\nDEGREES\nDELTA\nDEVSQ\nDGET\nDISC\nDMAX\nDMIN\nDOLLAR\nDOLLARDE\nDOLLARFR\nDPRODUCT\nDSTDEV\nDSTDEVP\nDSUM\nDURATION\nDVAR\nDVARP\nEDATE\nEFFECT\nEOMONTH\nERF\nERFC\nERRORTYPE\nEVEN\nEXACT\nEXP\nEXPONDIST\nFACT\nFACTDOUBLE\nFALSE\nFDIST\nFIND\nFINV\nFISHER\nFISHERINV\nFIXED\nFLOOR\nFORECAST\nFREQUENCY\nFTEST\nFV\nFVSCHEDULE\nGAMMADIST\nGAMMAINV\nGAMMALN\nGCD\nGEOMEAN\nGESTEP\nGROWTH\nHARMEAN\nHEX2BIN\nHEX2DEC\nHEX2OCT\nHLOOKUP\nHOUR\nHYPGEOMDIST\nIF\nIMABS\nIMAGINARY\nIMARGUMENT\nIMCONJUGATE\nIMCOS\nIMDIV\nIMEXP\nIMLN\nIMLOG10\nIMLOG2\nIMPOWER\nIMPRODUCT\nIMREAL\nIMSIN\nIMSQRT\nIMSUB\nIMSUM\nINDEX\nINDIRECT\nINFO\nINT\nINTERCEPT\nINTRATE\nIPMT\nIRR\nISBLANK\nISERR\nISERROR\nISEVEN\nISLOGICAL\nISNA\nISNONTEXT\nISNUMBER\nISODD\nISREF\nISTEXT\nKURT\nLARGE\nLCM\nLEFT\nLEN\nLINEST\nLN\nLOG\nLOG10\nLOGEST\nLO
GINV\nLOGNORMDIST\nLOOKUP\nLSOLVE\nLOWER\nMATCH\nMAX\nMDETERM\nMDURATION\nMEDIAN\nMID\nMIN\nMINUTE\nMINVERSE\nMIRR\nMMULT\nMOD\nMODE\nMONTH\nMROUND\nMULTINOMIAL\nN\nNA\nNEGBINOMDIST\nNETWORKDAYS\nNOMINAL\nNORMDIST\nNORMINV\nNORMSDIST\nNORMSINV\nNOT\nNOW\nNPER\nNPV\nOCT2BIN\nOCT2DEC\nOCT2HEX\nODD\nODDFPPRICE\nODDFYIELD\nODDLPRICE\nODDLYIELD\nOFFSET\nOR\nPEARSON\nPERCENTILE\nPERCENTRANK\nPERMUT\nPI\nPMT\nPOISSON\nPOWER\nPPMT\nPRICE\nPRICEDISC\nPRICEMAT\nPROB\nPRODUCT\nPROPER\nPV\nQUARTILE\nQUOTIENT\nRADIANS\nRAND\nRANDBETWEEN\nRANK\nRATE\nRECEIVED\nREGISTERID\nREPLACE\nREPT\nRIGHT\nROMAN\nROUND\nROUNDDOWN\nROUNDUP\nROW\nROWS\nRSQ\nSEARCH\nSECOND\nSERIESSUM\nSIGN\nSIN\nSINH\nSKEW\nSLN\nSLOPE\nSMALL\nSQLREQUEST\nSQRT\nSQRTPI\nSTANDARDIZE\nSTDEV\nSTDEVP\nSTEYX\nSUBSTITUTE\nSUBTOTAL\nSUM\nSUMIF\nSUMPRODUCT\nSUMSQ\nSUMX2MY2\nSUMX2PY2\nSUMXMY2\nSYD\nT\nTAN\nTANH\nTBILLEQ\nTBILLPRICE\nTBILLYIELD\nTDIST\nTEXT\nTIME\nTIMEVALUE\nTINV\nTODAY\nTRANSPOSE\nTREND\nTRIM\nTRIMMEAN\nTRUE\nTRUNC\nTTEST\nTYPE\nUPPER\nVALUE\nVAR\nVARP\nVDB\nVLOOKUP\nWEEKDAY\nWEIBULL\nWORKDAY\nXIRR\nXNPV\nYEAR\nYEARFRAC\nYIELD\nYIELDDISC\nYIELDMAT\n\nNaturally, some of these functions already map to existing\nPostgreSQL functions and some are irrelevant to PostgreSQL (such\nas ADDRESS, CALL, HLOOKUP for example). Others map with different\nSQL names. All of them have been written using double precision\nnumbers (like Excel) and therefore would take some time to use\nnumeric -- if that's what is desired. The cumulative distribution\nfunctions and their inverses I derived solely from recurrence\nrelations, polynomial approximations, and evalutation of various\ninfinite series to n-terms from the \"Handbook of Mathematical\nFunctions\" and so, like Excel, only realize 32-bit float\nprecision. 
I suspect there may be some BSD licensed libraries out\nthere that do a better job (hopefully with arbitrary precision).\n\nSome of the functions, it would seem to me, would be very useful,\nbut I'm not sure of how the semantics would map. For example,\nTREND() performs multi-variate linear regression analysis (i.e.:\ngood for picking stocks), but, not only would it require the new\nfunction manager interface to return a result set, but it takes\nseveral 'sets' as input as well:\n\nTREND(known y's, known x1's, known x2's, ..., new x's)\n\nand so it would act almost like an aggregate except that it would\nyield a result set, not just a single-valued result. I visualize\nsomething like:\n\nSELECT TREND(quotes.prices, quotes.date1, quotes.date2) FROM\nquotes;\n\nAnyways, I guess I'm like Chris Bitmead. What do you want me to\ndo with this crap? I could port the non-set returning functions\nand single-pass aggregates now (such as DDB(), RATE(), PMT(),\netc.) and wait on the others until the rewrite. Or I could just\nwait until the rewrite if the function call interface is going to\nchange dramatically for this such as NULL handling etc.\n\nAny comments?\n\nMike Mascari\n",
"msg_date": "Fri, 19 May 2000 23:53:51 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres Analysis Tool-Pak"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> I've been able to implement around 300 of the 320 Excel-style\n> functions and was wondering if they would be useful to\n> PostgreSQL.\n\nSeems like we could certainly stick these into a contrib directory.\nI'd be a little hesitant to cram so many names into the default\ninstallation, for fear of conflicting with existing user setups ---\nbut a contrib distribution has no such constraint. (Also, aren't\nsome of these the same as ODBC-standard functions? I believe Thomas\nhas been working on including all the ODBC functions into the backend,\nso you could save him some work there.)\n\n> Or I could just wait until the rewrite if the function call interface\n> is going to change dramatically for this such as NULL handling etc.\n\nWhat I have done so far cleans up NULL handling, but I have not tried to\ndo anything about functions accepting or returning sets. I think the\nplan is to see if we can support set functions better as part of the\nquerytree redesign scheduled for 7.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 May 2000 02:30:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Analysis Tool-Pak "
}
]
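The TREND() semantics Mike describes (fit known y's against one or more sets of known x's, then evaluate at new x's) is ordinary multivariate least squares. A self-contained sketch of that computation in pure Python, using the normal equations; the function name and signature mirror the Excel call but are hypothetical, not Mike's code:

```python
def trend(ys, xss, new_xs):
    """Least-squares fit of ys against the predictor columns in xss
    (with an intercept term), evaluated at the new predictor values.
    Solves the normal equations X'X c = X'y by Gauss-Jordan elimination."""
    rows = [[1.0] + [xs[i] for xs in xss] for i in range(len(ys))]
    n = len(rows[0])
    # Augmented matrix [X'X | X'y].
    a = [[sum(r[i] * r[j] for r in rows) for j in range(n)]
         + [sum(r[i] * y for r, y in zip(rows, ys))]
         for i in range(n)]
    for col in range(n):
        # Partial pivoting, then clear the column from every other row.
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col and a[r][col]:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    coef = [a[i][n] / a[i][i] for i in range(n)]
    return coef[0] + sum(c * x for c, x in zip(coef[1:], new_xs))
```

This also makes Mike's interface problem concrete: the inputs are several parallel sets, and the natural SQL mapping would be an aggregate-like function taking multiple set arguments, which the function manager of the day could not express.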
[
{
"msg_contents": "\nDoes the PostgreSQL have some features enabling to make contextual search\nin fulltexts (something like Oracle Context Option)?\n\n--\nZhenzheruha Kate\n\n\n\n",
"msg_date": "Sat, 20 May 2000 11:28:19 +0400 (MSD)",
"msg_from": "Kate <[email protected]>",
"msg_from_op": true,
"msg_subject": "contextual search"
},
{
"msg_contents": "> \n> Does the PostgreSQL have some features enabling to make contextual search\n> in fulltexts (something like Oracle Context Option)?\n\nYes, see contrib/fulltextindex.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 May 2000 07:28:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: contextual search"
},
{
"msg_contents": "If you use this and have a decent amount of text to index, be prepared to\nspend a few days.. I had about 2000 files of Text (varying size) -- it\nturned out to be about 600 megs, 33 million rows in the indexed column...\nCreating indexes on that and VACUUM 'ing takes hours -- literally. :-)\n\nTowards the end of the README it describes several ways of clustering your\ntables on the disk. Use method two if you have the disk space, it's much\nfaster than CLUSTER!\n\nGood luck!\n\n-Mitch\n\n----- Original Message -----\nFrom: Bruce Momjian <[email protected]>\nTo: Kate <[email protected]>\nCc: <[email protected]>\nSent: Saturday, May 20, 2000 7:28 AM\nSubject: Re: [HACKERS] contextual search\n\n\n> >\n> > Does the PostgreSQL have some features enabling to make contextual\nsearch\n> > in fulltexts (something like Oracle Context Option)?\n>\n> Yes, see contrib/fulltextindex.\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Sat, 20 May 2000 12:25:11 -0400",
"msg_from": "\"Mitch Vincent\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: contextual search"
}
]
[
{
"msg_contents": "\n From the create_rule man page this example is offered:\n\n CREATE RULE example_5 AS\n ON INERT TO emp WHERE new.salary > 5000\n DO\n UPDATE NEWSET SET salary = 5000;\n\nBut what is \"NEWSET\"? Is it a keyword?\n\nMy problem is that on an insert with an invalid amount I try to perform\nan update with a corrected amount, but the action part of the rule\ndoesn't affect or \"see\" the newly inserted row (or so it seems).\n\nI tried: CREATE RULE ON INSERT TO bid WHERE new.price > limit\n DO UPDATE bid SET price = 0.1;\n\nand all price columns in the bid table would be set to 0.1 _except_ the\nnewly inserted row.\n\nAm I missing something obvious?\n\nTIA\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n",
"msg_date": "Sat, 20 May 2000 11:00:56 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "Although not exactly what you were asking about, it might be easier to get\nthe effect with a before insert trigger written in plpgsql.\n\n(only minimally tested -- and against a 6.5 db - and replace the 100 and 0.1\nwith real values)\ncreate function checktriggerfunc() returns opaque as '\nbegin\n if (NEW.price>100) then\n NEW.price=0.1;\nend if;\nreturn NEW;\nend;\n' language 'plpgsql';\n\ncreate trigger checktrigger before insert on bid for each row\nexecute procedure checktriggerfunc();\n\n----- Original Message -----\nFrom: \"Louis-David Mitterrand\" <[email protected]>\nTo: <[email protected]>\nSent: Saturday, May 20, 2000 2:00 AM\nSubject: [GENERAL] rules on INSERT can't UPDATE new instance?\n\n\n>\n> From the create_rule man page this example is offered:\n>\n> CREATE RULE example_5 AS\n> ON INERT TO emp WHERE new.salary > 5000\n> DO\n> UPDATE NEWSET SET salary = 5000;\n>\n> But what is \"NEWSET\"? Is it a keyword?\n>\n> My problem is that on an insert with an invalid amount I try to perform\n> an update with a corrected amount, but the action part of the rule\n> doesn't affect or \"see\" the newly inserted row (or so it seems).\n>\n> I tried: CREATE RULE ON INSERT TO bid WHERE new.price > limit\n> DO UPDATE bid SET price = 0.1;\n>\n> and all price columns in the bid table would be set to 0.1 _except_ the\n> newly inserted row.\n>\n> Am I missing something obvious?\n\n\n",
"msg_date": "Sat, 20 May 2000 02:24:23 -0700",
"msg_from": "\"Stephan Szabo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "> \n> From the create_rule man page this example is offered:\n> \n> CREATE RULE example_5 AS\n> ON INERT TO emp WHERE new.salary > 5000\n> DO\n> UPDATE NEWSET SET salary = 5000;\n> \n> But what is \"NEWSET\"? Is it a keyword?\n\nIt should be:\n\nCREATE RULE example_5 AS\n ON INERT TO emp WHERE new.salary > 5000\n DO \n UPDATE emp SET salary = 5000\n WHERE emp.oid = new.oid;\n\nFixing now.\n\n> \n> My problem is that on an insert with an invalid amount I try to perform\n> an update with a corrected amount, but the action part of the rule\n> doesn't affect or \"see\" the newly inserted row (or so it seems).\n> \n> I tried: CREATE RULE ON INSERT TO bid WHERE new.price > limit\n> DO UPDATE bid SET price = 0.1;\n> \n> and all price columns in the bid table would be set to 0.1 _except_ the\n> newly inserted row.\n> \n> Am I missing something obvious?\n\nNo, buggy documentation. My book has a section on rules too, but you\nshould be fine now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 May 2000 07:35:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "On Sat, May 20, 2000 at 07:35:38AM -0400, Bruce Momjian wrote:\n> > From the create_rule man page this example is offered:\n> > \n> > CREATE RULE example_5 AS\n> > ON INERT TO emp WHERE new.salary > 5000\n> > DO\n> > UPDATE NEWSET SET salary = 5000;\n> > \n> > But what is \"NEWSET\"? Is it a keyword?\n> \n> It should be:\n> \n> CREATE RULE example_5 AS\n> ON INERT TO emp WHERE new.salary > 5000\n> DO \n> UPDATE emp SET salary = 5000\n> WHERE emp.oid = new.oid;\n> \n> Fixing now.\n\nBut this doesn't work in PG 7.0:\n\nauction=> create table test (price float);\nCREATE\nauction=> create rule price_control AS ON INSERT TO test WHERE new.price > 100 DO UPDATE test SET price = 100 where test.oid = new.oid;\nCREATE 27913 1\nauction=> INSERT INTO test VALUES (101);\nINSERT 27914 1\nauction=> SELECT test.*;\n price \n-------\n 101\n(1 row)\n\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\nMACINTOSH == Most Applications Crash If Not The Operatings System Hangs\n",
"msg_date": "Sat, 20 May 2000 15:44:17 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "> On Sat, May 20, 2000 at 07:35:38AM -0400, Bruce Momjian wrote:\n> > > From the create_rule man page this example is offered:\n> > > \n> > > CREATE RULE example_5 AS\n> > > ON INERT TO emp WHERE new.salary > 5000\n> > > DO\n> > > UPDATE NEWSET SET salary = 5000;\n> > > \n> > > But what is \"NEWSET\"? Is it a keyword?\n> > \n> > It should be:\n> > \n> > CREATE RULE example_5 AS\n> > ON INERT TO emp WHERE new.salary > 5000\n> > DO \n> > UPDATE emp SET salary = 5000\n> > WHERE emp.oid = new.oid;\n> > \n> > Fixing now.\n> \n> But this doesn't work in PG 7.0:\n> \n> auction=> create table test (price float);\n> CREATE\n> auction=> create rule price_control AS ON INSERT TO test WHERE new.price > 100 DO UPDATE test SET price = 100 where test.oid = new.oid;\n> CREATE 27913 1\n> auction=> INSERT INTO test VALUES (101);\n> INSERT 27914 1\n> auction=> SELECT test.*;\n> price \n> -------\n> 101\n> (1 row)\n\nYes, I see it failing too. I tried old.oid, and that failed too.\n\nI know there is a recursive problem with rules acting on their own\ntable, where if you have an INSERT rule that performs an INSERT on the\nsame table, the rules keep firing in a loop.\n\nI thought an INSERT rule with an UPDATE action would work on the same\ntable, but that fails. Seems the rule is firing before the INSERT\nhappens.\n\nI am not really sure what to recommend. The INSERT rule clearly doesn't\nfix cases where someone UPDATE's the row to != 100. A CHECK constraint\ncould be used to force the column to contain 100, but that doesn't\nsilently fix non-100 values, which seemed to be your goal. A trigger\nwill allow this kind of action, on INSERT and UPDATE, though they are a\nlittle more complicated than rules.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 May 2000 10:41:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "On Sat, May 20, 2000 at 10:41:53AM -0400, Bruce Momjian wrote:\n> > But this doesn't work in PG 7.0:\n> > \n> > auction=> create table test (price float);\n> > CREATE\n> > auction=> create rule price_control AS ON INSERT TO test WHERE new.price > 100 DO UPDATE test SET price = 100 where test.oid = new.oid;\n> > CREATE 27913 1\n> > auction=> INSERT INTO test VALUES (101);\n> > INSERT 27914 1\n> > auction=> SELECT test.*;\n> > price \n> > -------\n> > 101\n> > (1 row)\n> \n> Yes, I see it failing too. I tried old.oid, and that failed too.\n> \n> I know there is a recursive problem with rules acting on their own\n> table, where if you have an INSERT rule that performs an INSERT on the\n> same table, the rules keep firing in a loop.\n> \n> I thought an INSERT rule with an UPDATE action would work on the same\n> table, but that fails. Seems the rule is firing before the INSERT\n> happens.\n> \n> I am not really sure what to recommend. The INSERT rule clearly doesn't\n> fix cases where someone UPDATE's the row to != 100. A CHECK constraint\n> could be used to force the column to contain 100, but that doesn't\n> silently fix non-100 values, which seemed to be your goal. A trigger\n> will allow this kind of action, on INSERT and UPDATE, though they are a\n> little more complicated than rules.\n\nThanks for all your help. You are right: this seems more like the job of\na trigger and I am exploring that topic in depth right now.\n\nCheers,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\n Parkinson's Law: Work expands to fill the time alloted it.\n",
"msg_date": "Sat, 20 May 2000 18:06:35 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I thought an INSERT rule with an UPDATE action would work on the same\n> table, but that fails. Seems the rule is firing before the INSERT\n> happens.\n\nYes, a trigger is the right way to do surgery on a tuple before it is\nstored. Rules are good for generating additional SQL queries that will\ninsert/update/delete other tuples (usually, but not necessarily, in\nother tables). Even if it worked, a rule would be a horribly\ninefficient way to handle modification of the about-to-be-inserted\ntuple, because (being an independent query) it'd have to scan the table\nto find the tuple you are talking about!\n\nThe reason the additional queries are done before the original command\nis explained thus in the source code:\n\n\t * The original query is appended last if not instead\n\t * because update and delete rule actions might not do\n\t * anything if they are invoked after the update or\n\t * delete is performed. The command counter increment\n\t * between the query execution makes the deleted (and\n\t * maybe the updated) tuples disappear so the scans\n\t * for them in the rule actions cannot find them.\n\nThis seems to make sense for UPDATE/DELETE, but I wonder whether\nthe ordering should be different for the INSERT case: perhaps it\nshould be original-query-first in that case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 May 2000 12:19:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rules on INSERT can't UPDATE new instance? "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I thought an INSERT rule with an UPDATE action would work on the same\n> > table, but that fails. Seems the rule is firing before the INSERT\n> > happens.\n> \n> Yes, a trigger is the right way to do surgery on a tuple before it is\n> stored. Rules are good for generating additional SQL queries that will\n> insert/update/delete other tuples (usually, but not necessarily, in\n> other tables). Even if it worked, a rule would be a horribly\n> inefficient way to handle modification of the about-to-be-inserted\n> tuple, because (being an independent query) it'd have to scan the table\n> to find the tuple you are talking about!\n> \n> The reason the additional queries are done before the original command\n> is explained thus in the source code:\n> \n> \t * The original query is appended last if not instead\n> \t * because update and delete rule actions might not do\n> \t * anything if they are invoked after the update or\n> \t * delete is performed. The command counter increment\n> \t * between the query execution makes the deleted (and\n> \t * maybe the updated) tuples disappear so the scans\n> \t * for them in the rule actions cannot find them.\n> \n> This seems to make sense for UPDATE/DELETE, but I wonder whether\n> the ordering should be different for the INSERT case: perhaps it\n> should be original-query-first in that case.\n> \n\nThanks, Tom. I was writing the Trigger section of my book the past few\ndays, and this helped me define when to use rules and when to use\ntriggers.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 21 May 2000 20:23:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "Is the INSERT rule re-ordering mentioned a TODO item?\n\n> Bruce Momjian <[email protected]> writes:\n> > I thought an INSERT rule with an UPDATE action would work on the same\n> > table, but that fails. Seems the rule is firing before the INSERT\n> > happens.\n> \n> Yes, a trigger is the right way to do surgery on a tuple before it is\n> stored. Rules are good for generating additional SQL queries that will\n> insert/update/delete other tuples (usually, but not necessarily, in\n> other tables). Even if it worked, a rule would be a horribly\n> inefficient way to handle modification of the about-to-be-inserted\n> tuple, because (being an independent query) it'd have to scan the table\n> to find the tuple you are talking about!\n> \n> The reason the additional queries are done before the original command\n> is explained thus in the source code:\n> \n> \t * The original query is appended last if not instead\n> \t * because update and delete rule actions might not do\n> \t * anything if they are invoked after the update or\n> \t * delete is performed. The command counter increment\n> \t * between the query execution makes the deleted (and\n> \t * maybe the updated) tuples disappear so the scans\n> \t * for them in the rule actions cannot find them.\n> \n> This seems to make sense for UPDATE/DELETE, but I wonder whether\n> the ordering should be different for the INSERT case: perhaps it\n> should be original-query-first in that case.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 13 Jun 2000 03:50:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Is the INSERT rule re-ordering mentioned a TODO item?\n\nDarn if I know. I threw the thought out for discussion, but didn't\nsee any comments. I'm not in a hurry to change it, unless there's\nconsensus that we should.\n\n\t\t\tregards, tom lane\n\n\n>> Bruce Momjian <[email protected]> writes:\n>>>> I thought an INSERT rule with an UPDATE action would work on the same\n>>>> table, but that fails. Seems the rule is firing before the INSERT\n>>>> happens.\n>> \n>> Yes, a trigger is the right way to do surgery on a tuple before it is\n>> stored. Rules are good for generating additional SQL queries that will\n>> insert/update/delete other tuples (usually, but not necessarily, in\n>> other tables). Even if it worked, a rule would be a horribly\n>> inefficient way to handle modification of the about-to-be-inserted\n>> tuple, because (being an independent query) it'd have to scan the table\n>> to find the tuple you are talking about!\n>> \n>> The reason the additional queries are done before the original command\n>> is explained thus in the source code:\n>> \n>> * The original query is appended last if not instead\n>> * because update and delete rule actions might not do\n>> * anything if they are invoked after the update or\n>> * delete is performed. The command counter increment\n>> * between the query execution makes the deleted (and\n>> * maybe the updated) tuples disappear so the scans\n>> * for them in the rule actions cannot find them.\n>> \n>> This seems to make sense for UPDATE/DELETE, but I wonder whether\n>> the ordering should be different for the INSERT case: perhaps it\n>> should be original-query-first in that case.\n",
"msg_date": "Tue, 13 Jun 2000 04:01:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rules on INSERT can't UPDATE new instance? "
},
{
"msg_contents": "Any comments?\n\n> Bruce Momjian <[email protected]> writes:\n> > Is the INSERT rule re-ordering mentioned a TODO item?\n> \n> Darn if I know. I threw the thought out for discussion, but didn't\n> see any comments. I'm not in a hurry to change it, unless there's\n> consensus that we should.\n> \n> \t\t\tregards, tom lane\n> \n> \n> >> Bruce Momjian <[email protected]> writes:\n> >>>> I thought an INSERT rule with an UPDATE action would work on the same\n> >>>> table, but that fails. Seems the rule is firing before the INSERT\n> >>>> happens.\n> >> \n> >> Yes, a trigger is the right way to do surgery on a tuple before it is\n> >> stored. Rules are good for generating additional SQL queries that will\n> >> insert/update/delete other tuples (usually, but not necessarily, in\n> >> other tables). Even if it worked, a rule would be a horribly\n> >> inefficient way to handle modification of the about-to-be-inserted\n> >> tuple, because (being an independent query) it'd have to scan the table\n> >> to find the tuple you are talking about!\n> >> \n> >> The reason the additional queries are done before the original command\n> >> is explained thus in the source code:\n> >> \n> >> * The original query is appended last if not instead\n> >> * because update and delete rule actions might not do\n> >> * anything if they are invoked after the update or\n> >> * delete is performed. The command counter increment\n> >> * between the query execution makes the deleted (and\n> >> * maybe the updated) tuples disappear so the scans\n> >> * for them in the rule actions cannot find them.\n> >> \n> >> This seems to make sense for UPDATE/DELETE, but I wonder whether\n> >> the ordering should be different for the INSERT case: perhaps it\n> >> should be original-query-first in that case.\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 9 Oct 2000 15:58:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "\nComments on this? Seems INSERT should happen at the end. Is this a\ntrivial change?\n\n\n> Bruce Momjian <[email protected]> writes:\n> > Is the INSERT rule re-ordering mentioned a TODO item?\n> \n> Darn if I know. I threw the thought out for discussion, but didn't\n> see any comments. I'm not in a hurry to change it, unless there's\n> consensus that we should.\n> \n> \t\t\tregards, tom lane\n> \n> \n> >> Bruce Momjian <[email protected]> writes:\n> >>>> I thought an INSERT rule with an UPDATE action would work on the same\n> >>>> table, but that fails. Seems the rule is firing before the INSERT\n> >>>> happens.\n> >> \n> >> Yes, a trigger is the right way to do surgery on a tuple before it is\n> >> stored. Rules are good for generating additional SQL queries that will\n> >> insert/update/delete other tuples (usually, but not necessarily, in\n> >> other tables). Even if it worked, a rule would be a horribly\n> >> inefficient way to handle modification of the about-to-be-inserted\n> >> tuple, because (being an independent query) it'd have to scan the table\n> >> to find the tuple you are talking about!\n> >> \n> >> The reason the additional queries are done before the original command\n> >> is explained thus in the source code:\n> >> \n> >> * The original query is appended last if not instead\n> >> * because update and delete rule actions might not do\n> >> * anything if they are invoked after the update or\n> >> * delete is performed. 
The command counter increment\n> >> * between the query execution makes the deleted (and\n> >> * maybe the updated) tuples disappear so the scans\n> >> * for them in the rule actions cannot find them.\n> >> \n> >> This seems to make sense for UPDATE/DELETE, but I wonder whether\n> >> the ordering should be different for the INSERT case: perhaps it\n> >> should be original-query-first in that case.\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 08:45:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] rules on INSERT can't UPDATE new instance?"
},
{
"msg_contents": "\nJan, can you handle this TODO item?\n\n\t* Evaluate INSERT rules at end of query, rather than beginning\n\n\n> Bruce Momjian <[email protected]> writes:\n> > Is the INSERT rule re-ordering mentioned a TODO item?\n> \n> Darn if I know. I threw the thought out for discussion, but didn't\n> see any comments. I'm not in a hurry to change it, unless there's\n> consensus that we should.\n> \n> \t\t\tregards, tom lane\n> \n> \n> >> Bruce Momjian <[email protected]> writes:\n> >>>> I thought an INSERT rule with an UPDATE action would work on the same\n> >>>> table, but that fails. Seems the rule is firing before the INSERT\n> >>>> happens.\n> >> \n> >> Yes, a trigger is the right way to do surgery on a tuple before it is\n> >> stored. Rules are good for generating additional SQL queries that will\n> >> insert/update/delete other tuples (usually, but not necessarily, in\n> >> other tables). Even if it worked, a rule would be a horribly\n> >> inefficient way to handle modification of the about-to-be-inserted\n> >> tuple, because (being an independent query) it'd have to scan the table\n> >> to find the tuple you are talking about!\n> >> \n> >> The reason the additional queries are done before the original command\n> >> is explained thus in the source code:\n> >> \n> >> * The original query is appended last if not instead\n> >> * because update and delete rule actions might not do\n> >> * anything if they are invoked after the update or\n> >> * delete is performed. 
The command counter increment\n> >> * between the query execution makes the deleted (and\n> >> * maybe the updated) tuples disappear so the scans\n> >> * for them in the rule actions cannot find them.\n> >> \n> >> This seems to make sense for UPDATE/DELETE, but I wonder whether\n> >> the ordering should be different for the INSERT case: perhaps it\n> >> should be original-query-first in that case.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 29 Jun 2001 15:54:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] rules on INSERT can't UPDATE new instance?"
}
] |
[
{
"msg_contents": "Can someone tell me why PL/Pgsql is in the User's Guide and not in the\nProgrammer's Guide? I see that SQL functions are in the Programmer's\nGuide.\n\nNow, in my book, I put SQL and PL/PGSQL functions into one chapter, and\nC functions into another. I am not sure I like that separate either,\nbut C functions seem sufficiently more complex to be placed in their own\nchapter.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 May 2000 09:33:11 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "plpgsql chapter in docs"
}
] |
[
{
"msg_contents": "Chris Bitmead writes:\n\n[ONLY]\n> For UPDATE and DELETE it is absolutely correct, and useful, not to\n> mention absolutely essential.\n\nCould you explain how and why, preferably with a concrete example? I am\nstill at a loss.\n\n> > the `SELECT **' syntax (bad idea, IMO), \n> \n> Why is it a bad idea (considering that every ODBMS on the planet does\n> this)?\n\nFirst of all, ODBMS and [O]RDBMS are not necessarily infinitely compatible\nconcepts. An ORDBMS is an RDBMS extended with OO'ish features such as\ntable inheritance and abstract data types to make data modeling easier for\nthose who like it. But below it all there's still relational algebra and\nfriends. An ODBMS is a paradigm shift to get rid of some restrictions in\nrelational databases, both technical and theoretical, the implication of\nwhich is that it's no longer a relational database. Please correct me if\nI'm wrong.\n\nSpecifically, a query on a relational database always returns a table, and\na table is a set of rows with the same number and types of columns. This\nis a pretty fundamental assumption, and even accounting for the\npossibility that it might be broken somehow is going to be a major effort\nthroughout the entire system.\n\nNow a question in particular. I understand that this syntax might\ngive me some rows (a, b, c) and others (a, b, c, d, e) and perhaps others\n(a, b, c, f, g, h). Now what would be the syntax for getting only (b, c),\n(b, c, e) and (b, c, h)?\n\nFinally, it seems that the same effect can be obtained with a UNION query,\npadding with NULLs where necessary and perhaps judicious use of\nCORRESPONDING. What would be wrong with that?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 20 May 2000 15:35:23 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OO Patch"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Chris Bitmead writes:\n> \n> [ONLY]\n> > For UPDATE and DELETE it is absolutely correct, and useful, not to\n> > mention absolutely essential.\n> \n> Could you explain how and why, preferably with a concrete example? I am\n> still at a loss.\n\nThe simple answer is that UPDATE and DELETE should not act different to\nSELECT. If SELECT returns a certain set of records with a particular\nWHERE clause, then DELETE should delete the same set of records with\nidentical WHERE clause and UPDATE should UPDATE the same set of records.\nWhich part of this is tricky?\n\nThe complex answer is in your own SQL3 research. Now of course Postgres\nis not implemented that way that the SQL3 model seems to imply (a good\nthing IMHO). Columns that came from super tables are stored in the most\nspecific table. But the end result has to conform to the description in\nyour other posting. I'll comment a little more on your other posting.\n\n> > > the `SELECT **' syntax (bad idea, IMO),\n> >\n> > Why is it a bad idea (considering that every ODBMS on the planet does\n> > this)?\n> \n> First of all, ODBMS and [O]RDBMS are not necessarily infinitely \n> compatible concepts. \n\nWhy?\n\n> An ORDBMS is an RDBMS extended with OO'ish features such as\n> table inheritance and abstract data types to make data modeling easier \n> for those who like it. \n\nThe custom data type aspect of ORDBMS is a good feature. The inheritance\nfeature of ORDBMS is IMHO half-baked. Take the class\nshape/circle/square example...\nCREATE TABLE SHAPE( ..);\nCREATE TABLE SQUARE(x1, y1, x2, y2) INHERITS(shape);\nCREATE TABLE CIRCLE(x, y, radius) INHERITS(shape);\n\nI can't just go SELECT * FROM SHAPE and call some C++ method to display\nthe shape on the screen. 
If I maintain an attribute in SHAPE called\n\"classname\" manually, then I can SELECT * FROM SHAPE, and then do a\nseparate query on the subclass when I know the type - very inefficient.\nOr I can do 3 separate queries. But then I'm hosed when I add a TRIANGLE\ntype.\n\nWhat I really want is..\n\nResult r = Query<Shape>.select(\"SELECT ** FROM SHAPE\");\nforeach(r, item) {\n\titem->display();\n}\n\nWhich still will work when I add a triangle. I.e. typical polymorphism\ncode maintenance advantage.\n\nWhich is what object databases do or an object relational mapper like\nPersistance do. Without that ability I would argue there's very limited\npoint in having inheritance at all.\n\n> But below it all there's still relational algebra and\n> friends. An ODBMS is a paradigm shift to get rid of some restrictions \n> in relational databases, both technical and theoretical, the \n> implication of\n> which is that it's no longer a relational database. Please correct me \n> if I'm wrong.\n\nIt's no longer a purely relational database true. I think it's always\nbeen a crazy idea that everything should be squeezed into a pure\ntable/column model.\n\n> Specifically, a query on a relational database always returns a table, \n> and a table is a set of rows with the same number and types of columns. \n> This is a pretty fundamental assumption, and even accounting for the\n> possibility that it might be broken somehow is going to be a major \n> effort throughout the entire system.\n\nIt's a pretty fundamentally limiting assumption. If you're saying that\nthis might be a lot of work to fix I think I agree. If you're saying\nthat you can't see the relational and object models being merged into a\ncoherent and useful combination then I disagree. I can see no conflict\nat all between them. Both models are like seeing half the world. Both\nmodels without the other are limiting.\n \n> Now a question in particular. 
I understand that this syntax might\n> give me some rows (a, b, c) and others (a, b, c, d, e) and perhaps others\n> (a, b, c, f, g, h). Now what would be the syntax for getting only (b, c),\n> (b, c, e) and (b, c, h)?\n\nI don't think I understand this question.\n\n> Finally, it seems that the same effect can be obtained with a UNION \n> query,\n> padding with NULLs where necessary and perhaps judicious use of\n> CORRESPONDING. What would be wrong with that?\n\nSeveral things. Firstly, what happens when you introduce TRIANGLE? You\nhave to rewrite every query in your system. Secondly, what if you have\n20 classes in your hierarchy each with 20 different fields. Now you have\na UNION with 400 fields, most of which are NULL.\n",
"msg_date": "Sun, 21 May 2000 10:55:34 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO Patch"
},
{
"msg_contents": "\n> CREATE TABLE SHAPE( ..);\n> CREATE TABLE SQUARE(x1, y1, x2, y2) INHERITS(shape);\n> CREATE TABLE CIRCLE(x, y, radius) INHERITS(shape);\n> \n> I can't just go SELECT * FROM SHAPE and call some C++ method to display\n> the shape on the screen. If I maintain an attribute in SHAPE called\n> \"classname\" manually, then I can SELECT * FROM SHAPE, and then do a\n> separate query on the subclass when I know the type - very inefficient.\n> Or I can do 3 separate queries. But then I'm hosed when I add a TRIANGLE\n> type.\n> \n> What I really want is..\n> \n> Result r = Query<Shape>.select(\"SELECT ** FROM SHAPE\");\n> foreach(r, item) {\n> \titem->display();\n> }\n> \n> Which still will work when I add a triangle. I.e. typical polymorphism\n> code maintenance advantage.\n> \n> Which is what object databases do or an object relational mapper like\n> Persistance do. Without that ability I would argue there's very limited\n> point in having inheritance at all.\n> \n\n I can agree with that. As I wrote a relational mapper for Smalltalk/X\nbased on the libpq API I noticed the same problems, when doing mapping on \ntables.\n\n But some questions/comments about that:\n\n a) How are the indices handled ? If I define an index on an attribute\n defined in TABLE SHAPE all subclasses are also handled by this index\n or do we have an index for the base table and each sub table on \n this attribute ?\n\n b) Please do not make the libpq API too compilcated ! It's a charm how\n small the API is and how easy the initial connection to a psqgl\n database is -- compare it against the ODBC API ....\n\n b.1)\n Despite the ODBC API I rather would like to see to enhance the idea \n of result\n sets supported by the libpq-API. I do not need to query each tuple\n what it delivers to me. I would like to open the result, query\n the structure and then handle the data. 
If the database returns\n multiple different sets (results from different tables): ok: do it the \n same way for each result set.\n\n b.2)\n There were some postings about other delivering methods to retrieve\n the information from each tuple. Today we get an ASCII-representation\n of the result tuple and the client has to convert it.\n\n Some were not very happy about it, but I like it. Some were concerned\n about the fact, that they have to copy the result to the information\n structure within their software. When you use software systems, which \n are based on garbage collection systems, then one ALMOST EVER has\n to do it.\n\n c)\n I would like to see more ideas about the extension of pgsql to become\n an active database. The notification systen is not enough, because\n it does not return the most interesting informations.\n\n I myself would like to see something like the VERSANT event system.\n\n d) \n Please only add basic, language independent, support for \n inheritance - special features can very often better simulated by\n software on the client side. The best example is the introduction\n of sequences.\n\n\n\n Marten\n\n \n\n\n",
"msg_date": "Sun, 21 May 2000 10:12:59 +0200 (CEST)",
"msg_from": "Marten Feldtmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO Patch"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> > > the `SELECT **' syntax (bad idea, IMO),\n> >\n> > Why is it a bad idea (considering that every ODBMS on the planet does\n> > this)?\n> \n> First of all, ODBMS and [O]RDBMS are not necessarily infinitely compatible\n> concepts. An ORDBMS is an RDBMS extended with OO'ish features such as\n> table inheritance and abstract data types to make data modeling easier for\n> those who like it.\n\nAnd which may be ignored by those who don't, just like SELECT ** .\n\n> But below it all there's still relational algebra and\n> friends. An ODBMS is a paradigm shift to get rid of some restrictions in\n> relational databases, both technical and theoretical, the implication of\n> which is that it's no longer a relational database. Please correct me if\n> I'm wrong.\n\nAdding DATE and TIME datatypes or functions to SQL may have also seemed\na \nparadigm shift but seems quite essential once it is done.\n\n> \n> Specifically, a query on a relational database always returns a table, and\n> a table is a set of rows with the same number and types of columns.\n\nSays who ? ;)\n\n> This is a pretty fundamental assumption, and even accounting for the\n> possibility that it might be broken somehow is going to be a major effort\n> throughout the entire system.\n\nIn first round ** could we disallowed in subselects and other tricky\nparts.\n\n> Now a question in particular. I understand that this syntax might\n> give me some rows (a, b, c) and others (a, b, c, d, e) and perhaps others\n> (a, b, c, f, g, h). Now what would be the syntax for getting only (b, c),\n> (b, c, e) and (b, c, h)?\n\nWhat would you need that for ?\n\nIf its really needed we could implement something like\n\nSELECT B,C,E?,H? FROM BASECLASS.\n\nbut as E can be an INT in one subclass and TIMESTAMP or VARBINARY in\nother \nit would perhaps be better to do\n\nSELECT B,C,SUB1.E?,SUB3.H? 
FROM BASECLASS.\n\nwhich means the attribute E defined in subclass SUB1 (an inherited by\nits \ndescendants)\n\nor perhaps\n\nSELECT B,C,E OF SUB1,H OF SUB3 FROM BASECLASS.\n\nto be style-compatible vith general verbosity and english-likeness of\nSQL ;)\n\n> Finally, it seems that the same effect can be obtained with a UNION query,\n> padding with NULLs where necessary and perhaps judicious use of\n> CORRESPONDING. What would be wrong with that?\n\nIt would be overly complex and error-prone and need a rewrite each time\na new\nsub-class is added.\n\n------------\nHannu\n",
"msg_date": "Sun, 21 May 2000 22:19:53 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO Patch"
},
{
"msg_contents": "Marten Feldtmann wrote:\n\n> I can agree with that. As I wrote a relational mapper for Smalltalk/X\n> based on the libpq API I noticed the same problems, when doing mapping on\n> tables.\n> \n> But some questions/comments about that:\n> \n> a) How are the indices handled ? If I define an index on an attribute\n> defined in TABLE SHAPE all subclasses are also handled by this index\n> or do we have an index for the base table and each sub table on\n> this attribute ?\n\nAt the moment there is a separate index for each subclass, but this\nshould probably not be the default.\n\n> b) Please do not make the libpq API too compilcated ! It's a charm how\n> small the API is and how easy the initial connection to a psqgl\n> database is -- compare it against the ODBC API ....\n\nThere will be 2 levels of API. An important part of an object database\nis the existance of the client side cache. The lowest level API would be\nsomething like libpq - that is uncached. Next there would be ODMG\ninterfaces for various languages that incorporate the client side cache.\nI would say the only people who would continue to use libpq level would\nbe people writing higher level interfaces. ODMG is just too convenient.\n\n> b.1)\n> Despite the ODBC API I rather would like to see to enhance the idea\n> of result\n> sets supported by the libpq-API. I do not need to query each tuple\n> what it delivers to me. I would like to open the result, query\n> the structure and then handle the data. If the database returns\n> multiple different sets (results from different tables): ok: do it the\n> same way for each result set.\n\nI'm a bit vague on what you mean here. But if you go the full OO way,\nthe language polymorphism will do all the handling of different result\nsets. The ODMG layer will do the hard work.\n\n> b.2)\n> There were some postings about other delivering methods to retrieve\n> the information from each tuple. 
Today we get an ASCII-representation\n> of the result tuple and the client has to convert it.\n> \n> Some were not very happy about it, but I like it. Some were concerned\n> about the fact, that they have to copy the result to the information\n> structure within their software. When you use software systems, which\n> are based on garbage collection systems, then one ALMOST EVER has\n> to do it.\n\nAgain, using ODMG takes the hard work out of it. You stop caring about\nwhat format the information is delivered in.\n\n> c)\n> I would like to see more ideas about the extension of pgsql to become\n> an active database. The notification systen is not enough, because\n> it does not return the most interesting informations.\n> \n> I myself would like to see something like the VERSANT event system.\n\nOh, a Versant fan. Good, I like Versant. The only flaw in the Versant\nevent system is events can be lost when the receiver is dead, which may\nor may not matter depending on the application.\n\n> d)\n> Please only add basic, language independent, support for\n> inheritance - special features can very often better simulated by\n> software on the client side. The best example is the introduction\n> of sequences.\n",
"msg_date": "Mon, 22 May 2000 09:27:32 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO Patch"
},
{
"msg_contents": "Hannu Krosing wrote:\n >Peter Eisentraut wrote:\n >> Now a question in particular. I understand that this syntax might\n >> give me some rows (a, b, c) and others (a, b, c, d, e) and perhaps others\n >> (a, b, c, f, g, h). Now what would be the syntax for getting only (b, c),\n >> (b, c, e) and (b, c, h)?\n >\n >What would you need that for ?\n \nIn OO terms it should be illegal. In terms of any one class, there is a\ndefined set of columns that can be seen. What Peter is asking for is a union\nof selects on different classes. The ordinary union rules should apply.\n\n >If its really needed we could implement something like\n >\n >SELECT B,C,E?,H? FROM BASECLASS.\n >\n >but as E can be an INT in one subclass and TIMESTAMP or VARBINARY in\n >other \n\nI don't think that should be allowed. It violates inheritance principles,\nsince the types are not compatible.\n\n\nThere is quite a lot right with inheritance as it is. We support multiple\ninheritance, and columns with the same name are merged in the child. What we\nimmediately lack are features to make inheritance properly useful:\n\n* shared index - an index that should point to the correct child class for\n quick recovery of rows from inheritance hierarchies. \n\n* inheritance of constraints, including Primary/Foreign keys (does this\n imply the necessity of turning inheritance off in certain cases?) \n\n* handling of some write operations on a hierarchy: DELETE and UPDATE\n (INSERT must require the exact class to be specified)\n\n* automatic use of inheritance hierarchies (use ONLY to avoid it)\n\n* ALTER ... ADD COLUMN inserting columns in the correct positions in\n child tables; alternatively, have column numbering independent of\n the physical representation, so that columns can be added at the end\n but shown in the correct place by SELECT.\n\n\nThere are further complexities in OO which might be desirable, but would\nrequire a lot of design work. 
One fundamental feature of pure OO is that\nclasses carry their own methods, whereas SELECT (for example) imposes a\nglobal operation on the various classes of the inheritance tree. This\nmakes the following problematic:\n\n* renaming columns in multiple inheritance (to avoid column merging, or to\n allow a child's column to be of a different type) - what would SELECT do\n with them?\n\n* deferred classes - tables that are used only for inheritance rather than\n for storing data rows - how could these be specified and implemented?\n\nNo doubt further research would bring up many more examples.\n\nI'm not sure it is feasible to make PostgreSQL into a proper OO database,\nbut getting those first five features would really be useful.\n\n\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"We are troubled on every side, yet not distressed; we \n are perplexed, but not in despair; persecuted, but not\n forsaken; cast down, but not destroyed; Always bearing\n about in the body the dying of the Lord Jesus, that \n the life also of Jesus might be made manifest in our \n body.\" II Corinthians 4:8-10 \n\n\n",
"msg_date": "Mon, 22 May 2000 10:55:40 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO Patch "
},
{
"msg_contents": "> Hannu Krosing wrote:\n> >\n> >but as E can be an INT in one subclass and TIMESTAMP or VARBINARY in\n> >other \n> \n> I don't think that should be allowed. It violates inheritance principles,\n> since the types are not compatible.\n\n I see ... here's a person who has always programmed with typed\nlanguages and now thinks, that this is the right definition .... it's\nmuch more out there in the world. Open your mind and think about the\nfollowing:\n\n An attribute named \"a\" of \"type\" TIMESTAMP of an instance of a class\ncan be seen as a relation from this class to the class TIMESTAMP and\nthis relation is named \"a\".\n\n And if you're on the way to relations you're not far away to see,\nthat a relation is of course not limited to show to one specific class\n... but perhaps to all subclasses also ... and this is not a\nviolation.\n\n I know, that for many people these are only theoretical questions and\nthey may even be true with that, but \"violation of inheritance\nprinciples\" is simply wrong.\n\n But I also know, that we deal with a relational database and I do not\nexpect, that it will be as good as a pure object-oriented database -\nbut all those great wrapper software in the market work with\nrelational databases and they work pretty well - but I also see, that\nthey only use the basic technology to do their work.\n\n The reason seems to be, that all those nice oo-features within all\nthose databases do not scale very well ... and they're good for a\nsingle implementation. There're other problems out there:\n\n - caching at the client side \n\n - more powerful db desing evolution features. Change the type, the\n length of a typed attribute\n \n\n \n \nMarten\n\n\n",
"msg_date": "Mon, 22 May 2000 19:23:31 +0200 (CEST)",
"msg_from": "Marten Feldtmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO Patch"
},
{
"msg_contents": "Marten Feldtmann wrote:\n >> Hannu Krosing wrote:\n >> >\n >> >but as E can be an INT in one subclass and TIMESTAMP or VARBINARY in\n >> >other \n >> \n >> I don't think that should be allowed. It violates inheritance principles,\n >> since the types are not compatible.\n >\n > I see ... here's a person who has always programmed with typed\n >languages and now thinks, that this is the right definition .... it's\n >much more out there in the world. Open your mind and think about the\n >following:\n >\n > An attribute named \"a\" of \"type\" TIMESTAMP of an instance of a class\n >can be seen as a relation from this class to the class TIMESTAMP and\n >this relation is named \"a\".\n >\n > And if you're on the way to relations you're not far away to see,\n >that a relation is of course not limited to show to one specific class\n >... but perhaps to all subclasses also ... and this is not a\n >violation.\n \nHowever the example I was referring to talked of INT4, TIMESTAMP or VARBINARY.\n\nThese are not subclasses but totally unrelated. Suppose you had \n\n parent (id char(2))\n child1 (a int4)\n child2 (a timestamp)\n\nand someone asks for\n\n select sum(a) from parent*\n\nsince the types are incompatible, the answer would be nonsense.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"We are troubled on every side, yet not distressed; we \n are perplexed, but not in despair; persecuted, but not\n forsaken; cast down, but not destroyed; Always bearing\n about in the body the dying of the Lord Jesus, that \n the life also of Jesus might be made manifest in our \n body.\" II Corinthians 4:8-10 \n\n\n",
"msg_date": "Mon, 22 May 2000 21:58:21 +0100",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO Patch "
},
{
"msg_contents": "Oliver Elphick wrote:\n> \n> Marten Feldtmann wrote:\n> >> Hannu Krosing wrote:\n> >> >\n> >> >but as E can be an INT in one subclass and TIMESTAMP or VARBINARY in\n> >> >other\n> >>\n> >> I don't think that should be allowed. It violates inheritance principles,\n> >> since the types are not compatible.\n> >\n> > I see ... here's a person who has always programmed with typed\n> >languages and now thinks, that this is the right definition .... it's\n> >much more out there in the world. Open your mind and think about the\n> >following:\n> >\n> > An attribute named \"a\" of \"type\" TIMESTAMP of an instance of a class\n> >can be seen as a relation from this class to the class TIMESTAMP and\n> >this relation is named \"a\".\n> >\n> > And if you're on the way to relations you're not far away to see,\n> >that a relation is of course not limited to show to one specific class\n> >... but perhaps to all subclasses also ... and this is not a\n> >violation.\n> \n> However the example I was referring to talked of INT4, TIMESTAMP or VARBINARY.\n> \n> These are not subclasses but totally unrelated. Suppose you had\n> \n> parent (id char(2))\n> child1 (a int4)\n> child2 (a timestamp)\n> \n> and someone asks for\n> \n> select sum(a) from parent*\n> \n> since the types are incompatible, the answer would be nonsense.\n\nMS Excel for example SUMs only things summable, by which logic in this\ncase \nthe sum would/could/should be sum of int4 colums.\n\nIn real world not all things are summable.\n\nOTOH for schema\n parent (id char(2))\n child1 (a orange)\n child2 (a apple)\n\nselect sum(a) from parent*\n\ncould yield \"N apples and M oranges\" or possibly \"X fruits\" if orange\nand \napple were subtypes of fruit. Yes, really ;)\n\nOr depending on how the sum() function is defined it could even be \"Y\nkg\" .\n\n---------\nHannu\n",
"msg_date": "Tue, 23 May 2000 00:07:32 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO Patch"
},
{
"msg_contents": "Oliver Elphick wrote:\n\n> These are not subclasses but totally unrelated. Suppose you had\n> \n> parent (id char(2))\n> child1 (a int4)\n> child2 (a timestamp)\n> \n> and someone asks for\n> \n> select sum(a) from parent*\n> \n> since the types are incompatible, the answer would be nonsense.\n\nThat query would be disallowed, for the reason you note. Ambigous\ncoloumns would need to be specified by class.attribute.\n",
"msg_date": "Tue, 23 May 2000 10:57:12 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OO Patch"
},
{
"msg_contents": "Bruce,\n\nOn Tue, 23 Jan 2001 09:35:49 -0500 (EST)\nBruce Momjian <[email protected]> wrote:\n\n> We used someone elses. Here is a copy. Please submit any patches\n> against this version.\n\nSucks found:\n- doesn't handle mediumint, converts it to mediuminteger.\nThe same for bigint, and probably shorting & tinyint as well.\nI don't know whether 7.1 release has such type but even if yes \nmore preferrable to keep compatibility with old releases (7.0.x doesn't have, right ?)\n- it doesn't handle mysql UNIQUE (that is keyword for unique index) inside CREATE TABLE block\n- better to create indices after data load (it does before)\n- doesn't handle UNSIGNED keyword (should a least skip it, or, at user option, convert to CHECK(field>=0))\n- doesn't convert AUTO_INCREMENT in right way, at least in my production database.\n\nI don't see conversion of MySQL's SET and ENUM types.\n\nWell, before do any improvements on mysql2pgsql, I want to inform you that my \nconverter has all features described above. Maybe it's easier to modify it to fit your requirements ?\nAt least take a look at it.\nI don't like to do the same work twice, and this one promises to be exactly so.\n\nSending you my MySQL db dump which I used to play with it.\n\nMax Rudensky.",
"msg_date": "Tue, 23 Jan 2001 18:25:30 +0200",
"msg_from": "Max Rudensky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] MySQL -> Postgres dump converter"
},
{
"msg_contents": "\nCan some PostgreSQL people comment on this? This person wrote a\nMySQL->PostgreSQL converter too. His version is at:\n\n\thttp://ziet.zhitomir.ua/~fonin/code\n\n\n> Bruce,\n> \n> On Tue, 23 Jan 2001 09:35:49 -0500 (EST)\n> Bruce Momjian <[email protected]> wrote:\n> \n> > We used someone elses. Here is a copy. Please submit any patches\n> > against this version.\n> \n> Sucks found:\n> - doesn't handle mediumint, converts it to mediuminteger.\n> The same for bigint, and probably shorting & tinyint as well.\n> I don't know whether 7.1 release has such type but even if yes \n> more preferrable to keep compatibility with old releases (7.0.x doesn't have, right ?)\n> - it doesn't handle mysql UNIQUE (that is keyword for unique index) inside CREATE TABLE block\n> - better to create indices after data load (it does before)\n> - doesn't handle UNSIGNED keyword (should a least skip it, or, at user option, convert to CHECK(field>=0))\n> - doesn't convert AUTO_INCREMENT in right way, at least in my production database.\n> \n> I don't see conversion of MySQL's SET and ENUM types.\n> \n> Well, before do any improvements on mysql2pgsql, I want to inform you that my \n> converter has all features described above. Maybe it's easier to modify it to fit your requirements ?\n> At least take a look at it.\n> I don't like to do the same work twice, and this one promises to be exactly so.\n> \n> Sending you my MySQL db dump which I used to play with it.\n> \n> Max Rudensky.\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Jan 2001 11:38:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] MySQL -> Postgres dump converter"
},
{
"msg_contents": "> Can some PostgreSQL people comment on this? This person wrote a\n> MySQL->PostgreSQL converter too. His version is at:\n> http://ziet.zhitomir.ua/~fonin/code\n\n-- THIS VERSION IS EXTREMELY BUGSOME ! USE IT ON YOUR OWN RISK !!!\n\nHmm. My version does not have this feature, but it could be added ;)\n\nSeriously, I haven't looked at the differences, but there is a licensing\ndifference (BSD vs GPL). Someone else with experience with MySQL should\nevaluate both packages.\n\nmysql2pgsql has been used to convert SourceForge, with ~90 tables and\nmoderately complicated schema, but that did not include enumerated types\n(done with ints at SF) and \"unique\" keys (done with sequences at SF)\nafaicr.\n\n> Sucks found:...\n\nEach is a one-liner to fix in mysql2pgsql. The (nonstandard) types\nmentioned weren't used in the test cases I had available. I didn't\nrealize that we had *any* reports of troubles or lacking features in the\nexisting converter, but I'll leave it up to y'all to decide if the\nlicensing issues and feature issues are significant.\n\nI'm willing to provide patches to address some of the concerns, but of\ncourse will not be able to look at the GPL'd code for hints and can only\nuse the information posted here to help afaik.\n\nComments?\n\n - Thomas\n",
"msg_date": "Tue, 23 Jan 2001 18:08:17 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] MySQL -> Postgres dump converter"
},
{
"msg_contents": "\nCan someone look at both versions and merge the improvements into our\nversion? Should be pretty easy.\n\n> > Can some PostgreSQL people comment on this? This person wrote a\n> > MySQL->PostgreSQL converter too. His version is at:\n> > http://ziet.zhitomir.ua/~fonin/code\n> \n> -- THIS VERSION IS EXTREMELY BUGSOME ! USE IT ON YOUR OWN RISK !!!\n> \n> Hmm. My version does not have this feature, but it could be added ;)\n> \n> Seriously, I haven't looked at the differences, but there is a licensing\n> difference (BSD vs GPL). Someone else with experience with MySQL should\n> evaluate both packages.\n> \n> mysql2pgsql has been used to convert SourceForge, with ~90 tables and\n> moderately complicated schema, but that did not include enumerated types\n> (done with ints at SF) and \"unique\" keys (done with sequences at SF)\n> afaicr.\n> \n> > Sucks found:...\n> \n> Each is a one-liner to fix in mysql2pgsql. The (nonstandard) types\n> mentioned weren't used in the test cases I had available. I didn't\n> realize that we had *any* reports of troubles or lacking features in the\n> existing converter, but I'll leave it up to y'all to decide if the\n> licensing issues and feature issues are significant.\n> \n> I'm willing to provide patches to address some of the concerns, but of\n> course will not be able to look at the GPL'd code for hints and can only\n> use the information posted here to help afaik.\n> \n> Comments?\n> \n> - Thomas\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 Jan 2001 11:06:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] MySQL -> Postgres dump converter"
},
{
"msg_contents": "Guys,\n\nThomas said he won't look into GPL'ed code for ideas.\nWell, I re-read GPL and found that it's up to author whether is to allow or not \nto use code for using in programs with another open-source license. So this isn't \na problem - you may use my program and include code to your without limitations.\nI even would prefer to change license to BSD or similar \nbut I didn't find in GPL ideas about that.\n\nMax Rudensky.\n\nOn Wed, 24 Jan 2001 11:06:12 -0500 (EST)\nBruce Momjian <[email protected]> wrote:\n\n> \n> Can someone look at both versions and merge the improvements into our\n> version? Should be pretty easy.\n> \n> > > Can some PostgreSQL people comment on this? This person wrote a\n> > > MySQL->PostgreSQL converter too. His version is at:\n> > > http://ziet.zhitomir.ua/~fonin/code\n> > \n> > -- THIS VERSION IS EXTREMELY BUGSOME ! USE IT ON YOUR OWN RISK !!!\n> > \n> > Hmm. My version does not have this feature, but it could be added ;)\n> > \n> > Seriously, I haven't looked at the differences, but there is a licensing\n> > difference (BSD vs GPL). Someone else with experience with MySQL should\n> > evaluate both packages.\n> > \n> > mysql2pgsql has been used to convert SourceForge, with ~90 tables and\n> > moderately complicated schema, but that did not include enumerated types\n> > (done with ints at SF) and \"unique\" keys (done with sequences at SF)\n> > afaicr.\n> > \n> > > Sucks found:...\n> > \n> > Each is a one-liner to fix in mysql2pgsql. The (nonstandard) types\n> > mentioned weren't used in the test cases I had available. I didn't\n> > realize that we had *any* reports of troubles or lacking features in the\n> > existing converter, but I'll leave it up to y'all to decide if the\n> > licensing issues and feature issues are significant.\n> > \n> > I'm willing to provide patches to address some of the concerns, but of\n> > course will not be able to look at the GPL'd code for hints and can only\n> > use the information posted here to help afaik.\n> > \n> > Comments?\n> > \n> > - Thomas\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 Jan 2001 11:59:37 +0200",
"msg_from": "Max Rudensky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] MySQL -> Postgres dump converter"
},
{
"msg_contents": "On 27 Jan 2001 11:25:56 +0100\nAdrian Phillips <[email protected]> wrote:\n\n> >>>>> \"Max\" == Max Rudensky <[email protected]> writes:\n> \n> Max> Guys, Thomas said he won't look into GPL'ed code for ideas.\n> Max> Well, I re-read GPL and found that it's up to author whether\n> Max> is to allow or not to use code for using in programs with\n> Max> another open-source license. So this isn't a problem - you\n> Max> may use my program and include code to your without\n> Max> limitations. I even would prefer to change license to BSD or\n> Max> similar but I didn't find in GPL ideas about that.\n> \n> If you are the original author and have not had large contributions to\n> it then its just a matter of rereleasing as BSD, if on the other hand\n> others have contributed then you'll have to get permission from them.\nYes, I wrote it, and I'd like to re-release it to other from GPL license.\n\n> \n> Sincerely,\n> \n> Adrian Phillips\n> \n> -- \n> Your mouse has moved.\n> Windows NT must be restarted for the change to take effect.\n> Reboot now? [OK]\n",
"msg_date": "Sat, 27 Jan 2001 19:16:03 +0200",
"msg_from": "Max Rudensky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] MySQL -> Postgres dump converter"
},
{
"msg_contents": "Max Rudensky wrote:\n> \n> Guys,\n> \n> Thomas said he won't look into GPL'ed code for ideas.\n> Well, I re-read GPL and found that it's up to author whether is to allow or not\n> to use code for using in programs with another open-source license. So this isn't\n> a problem - you may use my program and include code to your without limitations.\n> I even would prefer to change license to BSD or similar\n> but I didn't find in GPL ideas about that.\n> \n\nIf it's all your code, then you are free to license it under any licence\nyou desire.\n\nYou are completely free to licence it under both GPL and BSD licenses at\nthe same time. \n\nYou can even simultaneously license it under a commercial licence if you\nwant to ;)\n\nWhat you probably can't do is to revoke the GPL license.\n\n-----------\nHannu\n",
"msg_date": "Sun, 28 Jan 2001 03:34:06 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] MySQL -> Postgres dump converter"
},
{
"msg_contents": "Hannu Krosing wrote:\n >If it's all your code, then you are free to license it under any licence\n >you desire.\n...\n >What you probably can't do is to revoke the GPL license.\n\nYou can't revoke it from existing copies \"out there\", but you can from any\ncopies you release from now on, even of the same code..\n\n-- \nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Many are the afflictions of the righteous; but the \n LORD delivereth him out of them all.\" \n Psalm 34:19 \n\n\n",
"msg_date": "Sun, 28 Jan 2001 07:57:43 +0000",
"msg_from": "\"Oliver Elphick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] MySQL -> Postgres dump converter "
},
{
"msg_contents": "OK, I have added this conversion scripts to CVS along with Thomas's. If\nsomeone wants to merge them into one, feel free to submit a patch. I\nhave provided a URL to my2pg.pl to retrieve the most recent version.\n\nThe only license issue was that the bottom of the source code had a\nmention referring to the GPL for more information. I removed that\nbecause it now clearly states it has a BSD-like license.\n\nThanks.\n\n> > Can some PostgreSQL people comment on this? This person wrote a\n> > MySQL->PostgreSQL converter too. His version is at:\n> > http://ziet.zhitomir.ua/~fonin/code\n> \n> -- THIS VERSION IS EXTREMELY BUGSOME ! USE IT ON YOUR OWN RISK !!!\n> \n> Hmm. My version does not have this feature, but it could be added ;)\n> \n> Seriously, I haven't looked at the differences, but there is a licensing\n> difference (BSD vs GPL). Someone else with experience with MySQL should\n> evaluate both packages.\n> \n> mysql2pgsql has been used to convert SourceForge, with ~90 tables and\n> moderately complicated schema, but that did not include enumerated types\n> (done with ints at SF) and \"unique\" keys (done with sequences at SF)\n> afaicr.\n> \n> > Sucks found:...\n> \n> Each is a one-liner to fix in mysql2pgsql. The (nonstandard) types\n> mentioned weren't used in the test cases I had available. I didn't\n> realize that we had *any* reports of troubles or lacking features in the\n> existing converter, but I'll leave it up to y'all to decide if the\n> licensing issues and feature issues are significant.\n> \n> I'm willing to provide patches to address some of the concerns, but of\n> course will not be able to look at the GPL'd code for hints and can only\n> use the information posted here to help afaik.\n> \n> Comments?\n> \n> - Thomas\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 10 Feb 2001 07:03:19 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] MySQL -> Postgres dump converter"
},
{
"msg_contents": "\nJust a long standing curiosity?\n\nFor most web sites MySQL seems to work fine, but overall PostgreSQL offers \nmore capabilites so why build upon a limited base such as MySQL?\n\nDoes anyone here have any idea as to why so many people select MySQL when \nboth systems are open sourced?\n\nMatthew\n\n",
"msg_date": "Sun, 28 Jul 2002 19:08:53 -0400",
"msg_from": "Matthew Tedder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> \n> Just a long standing curiosity?\n> \n> For most web sites MySQL seems to work fine, but overall PostgreSQL offers \n> more capabilites so why build upon a limited base such as MySQL?\n> \n> Does anyone here have any idea as to why so many people select MySQL when \n> both systems are open sourced?\n\nThree likely effects:\n\na) ISP management toolsets include management tools for MySQL, and not \nPostgreSQL.\n\n(CPanel is an example of such a toolset.)\n\nb) Apparently the permissions model for PostgreSQL used to discourage its use \nin shared hosting environments. (Ask Neil Conway more about this.)\n\nc) There was corporate sponsorship of MySQL, and they probably spent money \nmarketing it in the ISP web hosting market.\n\nd) MySQL is GPL-licensed, and some people consider that very important. (And \nare too stupid to grasp that they like XFree86, which _isn't_ licensed under \nthe GPL... Of course, this is d), and I said \"three\" likely effects...)\n\ne) Inertia. MySQL got more popular way back when; the reasons may no longer \napply, but nobody is going to move to PostgreSQL without _compelling_ reason, \nand you'll have to show something _really compelling_.\n--\n(concatenate 'string \"cbbrowne\" \"@acm.org\")\nhttp://cbbrowne.com/info/advocacy.html\nFLORIDA: Where your vote counts and counts and counts.\n\n\n",
"msg_date": "Mon, 29 Jul 2002 08:53:20 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "On Mon, 29 Jul 2002 [email protected] wrote:\n\n[snip]\n\n> e) Inertia. MySQL got more popular way back when; the reasons may no longer \n> apply, but nobody is going to move to PostgreSQL without _compelling_ reason, \n> and you'll have to show something _really compelling_.\n\nI would like to add one other thought. There are many web site designers\nthat get thrust into being a web site programmer. Without an\nunderstanding of database design and a novice programmers (?) view of the\nprocess the benefits of letting the database (RDBMS) do the database work\nisn't recognized. They code it all in the CGI.\n\n\nRod\n-- \n \"Open Source Software - Sometimes you get more than you paid for...\"\n\n",
"msg_date": "Mon, 29 Jul 2002 06:39:59 -0700 (PDT)",
"msg_from": "\"Roderick A. Anderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "Matthew Tedder <[email protected]> wrote:\n> For most web sites MySQL seems to work fine, but overall PostgreSQL offers \n> more capabilites so why build upon a limited base such as MySQL?\n> Does anyone here have any idea as to why so many people select MySQL when \n> both systems are open sourced?\nSome people working on win32 platforms, and mysql easy install on win32.\nJust for starting on use databases in soft.\n\nPgSQL easy-install on *unix-systems (mostly..:)), but on win32 ..it's hard..:(\n\nIMHO.\n\n--\nBest regards, KVN.\n PHP4You (<http://php4you.kiev.ua/>)\n PEAR [ru] (<http://pear.php.net/manual/ru/>)\n mailto:[email protected]\n",
"msg_date": "Mon, 29 Jul 2002 14:32:10 +0000 (UTC)",
"msg_from": "\"Vitaliy N. Kravchenko\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "well that and people tend to drift towards an easy answer,\nlike php... amazing how that combo is so popular... hrrmm...\n\nRoderick A. Anderson writes:\n > On Mon, 29 Jul 2002 [email protected] wrote:\n > \n > [snip]\n > \n > > e) Inertia. MySQL got more popular way back when; the reasons may no longer \n > > apply, but nobody is going to move to PostgreSQL without _compelling_ reason, \n > > and you'll have to show something _really compelling_.\n > \n > I would like to add one other thought. There are many web site designers\n > that get thrust into being a web site programmer. Without an\n > understanding of database design and a novice programmers (?) view of the\n > process the benefits of letting the database (RDBMS) do the database work\n > isn't recognized. They code it all in the CGI.\n > \n > \n > Rod\n > -- \n > \"Open Source Software - Sometimes you get more than you paid for...\"\n > \n > \n > ---------------------------(end of broadcast)---------------------------\n > TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n-- \nChris Humphries\nDevelopment InfoStructure\n540.366.9809 \n",
"msg_date": "Mon, 29 Jul 2002 09:39:58 -0500",
"msg_from": "Chris Humphries <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "On Mon, 29 Jul 2002, Chris Humphries wrote:\n\n> well that and people tend to drift towards an easy answer,\n> like php... amazing how that combo is so popular... hrrmm...\n\nWell people seem to get so ... about php that I didn't want to touch that \ntopic.\n\n\nRod\n-- \n \"Open Source Software - Sometimes you get more than you paid for...\"\n\n",
"msg_date": "Mon, 29 Jul 2002 08:07:08 -0700 (PDT)",
"msg_from": "\"Roderick A. Anderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "On Mon, 29 Jul 2002, Roderick A. Anderson wrote:\n\n> I would like to add one other thought. There are many web site\n> designers that get thrust into being a web site programmer. Without\n> an understanding of database design and a novice programmers (?) view\n> of the process the benefits of letting the database (RDBMS) do the\n> database work isn't recognized. They code it all in the CGI.\n\nWell, I'll add two points to this, then:\n\n1. Often there's a lot more benefit to moving the work from the database\nto the application structure. Database schemas are hard to change, and\nhard to keep under revision control. When I was doing a large website,\nit was much, much easier to say \"everything goes through these Java\nclasses\" than \"everything goes through the database.\" I could change the\ndatabase schema at will and know that my data was safe, because I could\nhave old interfaces running simultaneously with new.\n\n(Though I'll admit, good view support would have mitigated this problem\nquite a lot. But there is *no* database in the world that has really\ngood view support; they all fail on various updates where one can\ntheoretically do the Right Thing, but in practice it's very difficult.\nAnd I don't think that's going to change any time soon.)\n\n2. I expect that even most PostgreSQL--or even database--experts don't\nhave a real understanding of relational theory, anyway. That we still\nhave table inheritance shows that. As far as I can tell, there is\nnothing whatsoever that table inheritance does that the relational model\ndoes not handle; the whole \"OO\" thing is just another, redundant way of\ndoing what we already ought to be able to do within the relational model.\n\nI'm still waiting to find out just what advantage table inheritance\noffers. I've asked a couple of times here, and nobody has even started\nto come up with anything.\n\nAll that said, though, don't take this as any kind of a dismissal of\npostgres. It's in most ways better than MySQL and also some commericial\nsystems, and many of its failures are being addressed. Postgres for some\nreason seems to attract some really, really smart people to work on it.\nIf I could see something better, I'd be there. But I don't.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 30 Jul 2002 00:59:40 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "Curt Sampson wrote:\n> I'm still waiting to find out just what advantage table inheritance\n> offers. I've asked a couple of times here, and nobody has even started\n> to come up with anything.\n\nWe inherited inheritance from Berkeley. I doubt we would have added it\nourselves. It causes too much complexity in other parts of the system.\n\n> All that said, though, don't take this as any kind of a dismissal of\n> postgres. It's in most ways better than MySQL and also some commericial\n> systems, and many of its failures are being addressed. Postgres for some\n> reason seems to attract some really, really smart people to work on it.\n> If I could see something better, I'd be there. But I don't.\n\nInterbase/Firebird maybe? They just came out with a 1.0 release in\nMarch.\n\nAs for why PostgreSQL is less popular than MySQL, I think it is all\nmomentum from 1996 when MySQL worked and we sometimes crashed. Looking\nforward, I don't know many people who choose MySQL _if_ they consider\nboth PostgreSQL and MySQL, so the discussions people have over MySQL vs.\nPostgreSQL are valuable because they get people to consider MySQL\nalternatives, and once they do, they usually choose PostgreSQL.\n\nAs for momentum, we still have a smaller userbase than MySQL, but we are\nincreasing our userbase at a fast rate, perhaps faster than MySQL at\nthis point.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Jul 2002 12:30:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Curt Sampson wrote:\n> > I'm still waiting to find out just what advantage table inheritance\n> > offers. I've asked a couple of times here, and nobody has even started\n> > to come up with anything.\n> \n> We inherited inheritance from Berkeley. I doubt we would have added it\n> ourselves. It causes too much complexity in other parts of the system.\n\n...\n\n> As for why PostgreSQL is less popular than MySQL, I think it is all\n> momentum from 1996 when MySQL worked and we sometimes crashed. Looking\n> forward, I don't know many people who choose MySQL _if_ they consider\n> both PostgreSQL and MySQL, so the discussions people have over MySQL vs.\n> PostgreSQL are valuable because they get people to consider MySQL\n> alternatives, and once they do, they usually choose PostgreSQL.\n> \n> As for momentum, we still have a smaller userbase than MySQL, but we are\n> increasing our userbase at a fast rate, perhaps faster than MySQL at\n> this point.\n\nIts all due to sort-order. If Oracle was open source MySQL would still\nbe more popular. ;-)\n\nMike Mascari\[email protected]\n",
"msg_date": "Mon, 29 Jul 2002 12:49:29 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Mon, 29 Jul 2002, Bruce Momjian wrote:\n\n> Curt Sampson wrote:\n> > I'm still waiting to find out just what advantage table inheritance\n> > offers. I've asked a couple of times here, and nobody has even started\n> > to come up with anything.\n>\n> We inherited inheritance from Berkeley. I doubt we would have added it\n> ourselves. It causes too much complexity in other parts of the system.\n\nAh, all the more reason to remove it, then! :-)\n\nBut really, please don't take that as a criticism of the current development\ndirection; I know it was inherited, and it's not new code. In fact, I think\nit probably wasn't until _The Third Manifsto_ came out in 1998 that it\nreally became clear that table inheritance was not terribly useful--if it's\neven generally known now. And even so, I'm open to other opinions on that,\nsince it's not been an intensive area of study by any means.\n\n> > All that said, though, don't take this as any kind of a dismissal of\n> > postgres. It's in most ways better than MySQL and also some commericial\n> > systems, and many of its failures are being addressed. Postgres for some\n> > reason seems to attract some really, really smart people to work on it.\n> > If I could see something better, I'd be there. But I don't.\n>\n> Interbase/Firebird maybe? They just came out with a 1.0 release in March.\n\nOnce in a while I go back to it, but I still can't build the darn thing\nfrom scratch. Which makes it a bit difficult to evaluate....\n\n> As for why PostgreSQL is less popular than MySQL, I think it is all\n> momentum from 1996 when MySQL worked and we sometimes crashed.\n\nRight. I have a lot of hope. After all, MySQL was for a couple of\nyears a second-runner to mSQL, remember?\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 30 Jul 2002 02:01:19 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Mon, 2002-07-29 at 19:01, Curt Sampson wrote:\n> On Mon, 29 Jul 2002, Bruce Momjian wrote:\n> \n> > Curt Sampson wrote:\n> > > I'm still waiting to find out just what advantage table inheritance\n> > > offers. I've asked a couple of times here, and nobody has even started\n> > > to come up with anything.\n\nIt is mostly a syntactic thing that makes it easier to humans to write\ncleaner code.\n\nOtherwise, it is proved that anything can be written for a Turing\nMachine ;)\n\n> > We inherited inheritance from Berkeley. I doubt we would have added it\n> > ourselves. It causes too much complexity in other parts of the system.\n> \n> Ah, all the more reason to remove it, then! :-)\n>\n\nIt would make more sense to make it compatible with SQL99 and drop the\ncurrent behaviour only after that if possible.\n\nAs it stands now it is a strange mix of SQL99's\n\n CREATE TABLE thistable(...,LIKE anothertable,...);\nand\n CREATE table mytable(...) UNDER anothertable;\n\nwith only a few additional goodies, like SELECT* (i.e not ONLY) which\nselects from all tables that inherit from this.\n\nother things that should be done are not (like inheriting constraints,\nforeign and primary keys, triggers, ...)\n\nAlso we currently can't return more than one recordset from a query,\nwhich also makes selecting from an inheritance hierarchy less versatile.\n\n-----------\nHannu\n\n\n",
"msg_date": "29 Jul 2002 20:57:37 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Tue, 2002-07-30 at 14:51, Adrian 'Dagurashibanipal' von Bidder wrote:\n\n> Bruce Momjian:\n> > It causes too much complexity in other parts of the system.\n> \n> That's one reason.\n\nSeems like somewhat valid reason. But still not enough to do a lot of\nwork _and_ annoy a lot of existing users :)\n\n> Curt Sampson wrote:\n> > I'm still waiting to find out just what advantage table inheritance\n> > offers. I've asked a couple of times here, and nobody has even started\n> > to come up with anything. and\n> > there is nothing whatsoever that table inheritance does that the\n> > relational model does not handle\n> \n> That's the other one.\n\nThat's quite bogus imho. You could just as well argue that there is\nnothing that relational model handles that can't be done in pure C.\n\n----------------\nHannu\n\n",
"msg_date": "30 Jul 2002 13:13:24 +0500",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Mon, 2002-07-29 at 18:30, Bruce Momjian wrote:\n> Curt Sampson wrote:\n> > I'm still waiting to find out just what advantage table inheritance\n> > offers. I've asked a couple of times here, and nobody has even started\n> > to come up with anything.\n> \n> We inherited inheritance from Berkeley. I doubt we would have added it\n> ourselves. It causes too much complexity in other parts of the system.\n\nHow about dropping it, then?\n\nJust start to emit \n\nWARNING: inheritance will be dropped with postgres 8.0 \nWARNING: please refer to http://.../ for an explanation why.\n\nright now on every CREATE TABLE that uses it.\n\ncheers\n-- vbi\n\n-- \nsecure email with gpg http://fortytwo.ch/gpg",
"msg_date": "30 Jul 2002 10:19:39 +0200",
"msg_from": "Adrian 'Dagurashibanipal' von Bidder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> > We inherited inheritance from Berkeley. I doubt we would have added it\n> > ourselves. It causes too much complexity in other parts of the system.\n>\n> How about dropping it, then?\n>\n> Just start to emit\n>\n> WARNING: inheritance will be dropped with postgres 8.0\n> WARNING: please refer to http://.../ for an explanation why.\n>\n> right now on every CREATE TABLE that uses it.\n\nWhy? It doesn't hurt you personally! Plus, it would annoy a _boatload_ of\nexisting inheritance users.\n\nA more interesting question I think is how to allow our indexes to span\nmultiple relations, _without_ causing any performance degradation for non\ninheritance users...\n\nChris\n\n",
"msg_date": "Tue, 30 Jul 2002 16:45:43 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "[don't cc: me, please.]\n[please leave proper attribution in]\n\nOn Tue, 2002-07-30 at 10:45, Christopher Kings-Lynne wrote:\n> > > We inherited inheritance from Berkeley. I doubt we would have added it\n> > > ourselves. It causes too much complexity in other parts of the system.\n\n[Inheritance]\n\n> > How about dropping it, then?\n[...]\n\n> Why? It doesn't hurt you personally! \n\nThat's correct.\n\n> Plus, it would annoy a _boatload_ of\n> existing inheritance users.\n\nBruce Momjian:\n> It causes too much complexity in other parts of the system.\n\nThat's one reason.\n\nCurt Sampson wrote:\n> I'm still waiting to find out just what advantage table inheritance\n> offers. I've asked a couple of times here, and nobody has even started\n> to come up with anything.\nand\n> there is nothing whatsoever that table inheritance does that the\n> relational model does not handle\n\nThat's the other one.\n\ncheers\n-- vbi\n\n-- \nsecure email with gpg http://fortytwo.ch/gpg",
"msg_date": "30 Jul 2002 11:51:45 +0200",
"msg_from": "Adrian 'Dagurashibanipal' von Bidder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 30 Jul 2002, Hannu Krosing wrote:\n\n> On Tue, 2002-07-30 at 14:51, Adrian 'Dagurashibanipal' von Bidder wrote:\n>\n> > Bruce Momjian:\n> > > It causes too much complexity in other parts of the system.\n> >\n> > That's one reason.\n>\n> Seems like somewhat valid reason. But still not enough to do a lot of\n> work _and_ annoy a lot of existing users :)\n\nIt's almost unquestionably more work to maintain than to drop. Dropping\nsupport for it is a one-time operation. Maintaining it is an ongoing\nexpense.\n\n> That's quite bogus imho. You could just as well argue that there is\n> nothing that relational model handles that can't be done in pure C.\n\nThat's a straw man argument. What we (or I, anyway) are arguing is that\nthe relational model does everything that table inheritance does, and at\nleast as easily. Extending the model adds complexity without adding the\nability to do things you couldn't easily do before. (This, IMHO, makes\ntable inheritance quite inelegant.)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 30 Jul 2002 20:00:13 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 29 Jul 2002, Hannu Krosing wrote:\n\n> > > Curt Sampson wrote:\n> > > > I'm still waiting to find out just what advantage table inheritance\n> > > > offers. I've asked a couple of times here, and nobody has even started\n> > > > to come up with anything.\n>\n> It is mostly a syntactic thing that makes it easier to humans to write\n> cleaner code.\n\nAnd how is using table inheritance \"cleaner\" than doing it the\nrelational way? It adds extra complexity to the system, which is\nautomatically a reduction in cleanliness, so it would have to have\nsome correspondingly cleanliness-increasing advantages in order\nto be cleaner, overall.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 30 Jul 2002 20:02:44 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "* Adrian 'Dagurashibanipal' von Bidder <[email protected]> [020730 04:20]:\n> On Mon, 2002-07-29 at 18:30, Bruce Momjian wrote:\n> > Curt Sampson wrote:\n> > > I'm still waiting to find out just what advantage table inheritance\n> > > offers. I've asked a couple of times here, and nobody has even started\n> > > to come up with anything.\n\nI think one of the values of it is that it is something that no one else\nhas. It distinguishes us.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 30 Jul 2002 07:46:58 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "[No cc: please. Especially if you're not commenting on anything I said]\n\nOn Tue, 2002-07-30 at 13:46, D'Arcy J.M. Cain wrote:\n> > > Curt Sampson wrote:\n> > > > I'm still waiting to find out just what advantage table inheritance\n> > > > offers. I've asked a couple of times here, and nobody has even started\n> > > > to come up with anything.\n> \n> I think one of the values of it is that it is something that no one else\n> has. It distinguishes us.\n\nCoooool. Let's have the 'automatically phone KFC if developer works more\nthan 8 hours non-stop' feature, *that* is something nobody else has.\nYes. Cool.\n\nIn other words: this is an absolutely bogus argument.\n\nAs an implementor I'm always wary of using features nobody else has,\nespecially in databases. So, if I'd want postgres to have one thing\nnobody else has, it would be the most complete standard SQL\nimplementation - so it would at least be the other products' fault if\nI'd have to do any special porting work to/from postgres.\n\ncheers\n-- vbi\n\n-- \nsecure email with gpg http://fortytwo.ch/gpg",
"msg_date": "30 Jul 2002 14:01:35 +0200",
"msg_from": "Adrian 'Dagurashibanipal' von Bidder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> As an implementor I'm always wary of using features nobody else has,\n> especially in databases. So, if I'd want postgres to have one thing\n> nobody else has, it would be the most complete standard SQL\n> implementation - so it would at least be the other products' fault if\n> I'd have to do any special porting work to/from postgres.\n\nWhy can't both be done? If nobody extended the spec or came up with new\nfeatures there wouldn't exactly be any progress.\n\nYes, meeting the spec is a good goal, and one that is getting quite\nclose as far as the SQL part goes -- but it shouldn't be the only goal.\n\n\nInheritance currently saves me from issuing ~4 inserts, updates, deletes\nas it handles it itself. If indexes and a couple other things worked\nacross the entire tree it could be more useful.\n\nI think what we need to do is expand on it, not blow it away.\n\n\nThere is a list of spec features we support. Stick to those (or the\nsubset) that is appropriate for portability. If you plan on making an\nembedded DB Based application the extra features may be useful.\n\n\n",
"msg_date": "30 Jul 2002 08:13:42 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "* Adrian 'Dagurashibanipal' von Bidder <[email protected]> [020730 08:01]:\n> On Tue, 2002-07-30 at 13:46, D'Arcy J.M. Cain wrote:\n> > I think one of the values of it is that it is something that no one else\n> > has. It distinguishes us.\n> \n> Coooool. Let's have the 'automatically phone KFC if developer works more\n> than 8 hours non-stop' feature, *that* is something nobody else has.\n> Yes. Cool.\n\nExcuse me all to hell but are you in the junior debating class or what?\nNo one said we need to include every possible feature just because it\nis not in other products. Your KFC suggestion has nothing whatsoever\nto do with database management. Inheritance does. It is useful to\nsome and, as I said *one of the values* is the way it distinguishes us.\n\nFor the record, I do use the feature and I would miss it if it disappeared.\nI think it can be improved upon, especially in the area of indexes and\nprmary keys but overall it is a nice feature that has the added benefit\nof differentiating us from other RDBMS systems.\n\n> As an implementor I'm always wary of using features nobody else has,\n\nHow very conservative of you. Personally I have spent my life trying\nto do new things. If I wanted Oracle or DB2 I know where to find it.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Tue, 30 Jul 2002 08:28:33 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "[Still no cc:s please]\n\nOn Tue, 2002-07-30 at 14:28, D'Arcy J.M. Cain wrote:\n> * Adrian 'Dagurashibanipal' von Bidder <[email protected]> [020730 08:01]:\n> > On Tue, 2002-07-30 at 13:46, D'Arcy J.M. Cain wrote:\n> > > I think one of the values of it is that it is something that no one else\n> > > has. It distinguishes us.\n> > \n> > Coooool. Let's have the 'automatically phone KFC if developer works more\n> > than 8 hours non-stop' feature, *that* is something nobody else has.\n> > Yes. Cool.\n> \n> Excuse me all to hell but are you in the junior debating class or what?\n\nSure, I was taking it to the extreme here (And I really am sorry if you\nfelt offended by my remark). But I strongly feel that having a feature\nbecause 'it is something that no one else has. It distinguishes us.' is\nno justification at all.\n\nOf course, if a feature provides some real use, then it is worth having\n(yes, even if it's not in the standard). But exactly this seems not so\nclear in the case of inheritance in postgres.\n\n(And that's where I'm starting to say things I've said before. So I'll\njust shut up now.)\n\ncheers\n-- vbi\n\n-- \nsecure email with gpg http://fortytwo.ch/gpg",
"msg_date": "30 Jul 2002 14:44:05 +0200",
"msg_from": "Adrian 'Dagurashibanipal' von Bidder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "\n\nAdrian 'Dagurashibanipal' von Bidder wrote:\n\n> (And that's where I'm starting to say things I've said before. So I'll\n> just shut up now.)\n\nMay be you can contribute some code :)\n\n",
"msg_date": "Tue, 30 Jul 2002 14:45:23 +0200",
"msg_from": "\"Iavor Raytchev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> > I'm still waiting to find out just what advantage table inheritance\n> > offers. I've asked a couple of times here, and nobody has even started\n> > to come up with anything.\n> and\n> > there is nothing whatsoever that table inheritance does that the\n> > relational model does not handle\n> \n> That's the other one.\n\nIrrelevant - thousands of people are using the feature!\n\nChris\n\n\n",
"msg_date": "Tue, 30 Jul 2002 21:48:37 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Inheritance (was: Re: Why is MySQL more chosen over PostgreSQL?)"
},
{
"msg_contents": "On Tue, Jul 30, 2002 at 02:01:35PM +0200, Adrian 'Dagurashibanipal' von Bidder wrote:\n> As an implementor I'm always wary of using features nobody else has,\n> especially in databases. So, if I'd want postgres to have one thing\n> nobody else has, it would be the most complete standard SQL\n> implementation - so it would at least be the other products' fault if\n> I'd have to do any special porting work to/from postgres.\n\nSQL99 includes inheritance (albeit a somewhat different implementation\nthan the design in Postgres right now) -- so the \"most complete standard\nSQL implementation\" would need to include inheritance.\n\nI'd say removing inheritence would be a waste of time -- it would\nprobably be easier to just fix its deficiencies.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Tue, 30 Jul 2002 10:10:20 -0400",
"msg_from": "[email protected] (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> ... But I strongly feel that having a feature\n> because 'it is something that no one else has. It distinguishes us.' is\n> no justification at all.\n\nOne reason why we have a database which *does* come very close to the\nstandards is precisely because it had (and has) things which no one else\nhad (or has). It demonstrated how to do things which are now part of\nSQL99, but which were not implemented *anywhere else* back in the early\n'90s.\n\nInheritance is not as well supported by us, but that is our fault for\nfocusing on other things recently. I think that some of the recent work\nwill end up benefiting inheritance features, so these might make some\nprogress soon too.\n\nSearch and destroy missions to eliminate all that is not \"standard\" will\ndiminish the product, because we will be constrained to work entirely\nwithin the boundaries of a standard which is poorly thought out around\nthe edges. If our boundaries are always just a bit wider than that we'll\nbe OK ;)\n\nAll imho of course...\n\n - Thomas\n",
"msg_date": "Tue, 30 Jul 2002 07:24:45 -0700",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Adrian 'Dagurashibanipal' von Bidder wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> [Still no cc:s please]\n> \n> On Tue, 2002-07-30 at 14:28, D'Arcy J.M. Cain wrote:\n> > * Adrian 'Dagurashibanipal' von Bidder <[email protected]> [020730 08:01]:\n> > > On Tue, 2002-07-30 at 13:46, D'Arcy J.M. Cain wrote:\n> > > > I think one of the values of it is that it is something that no one else\n> > > > has. It distinguishes us.\n> > > \n> > > Coooool. Let's have the 'automatically phone KFC if developer works more\n> > > than 8 hours non-stop' feature, *that* is something nobody else has.\n> > > Yes. Cool.\n> > \n> > Excuse me all to hell but are you in the junior debating class or what?\n> \n> Sure, I was taking it to the extreme here (And I really am sorry if you\n> felt offended by my remark). But I strongly feel that having a feature\n> because 'it is something that no one else has. It distinguishes us.' is\n> no justification at all.\n\nI thought the KFC thing was very funny.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Jul 2002 11:30:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Tue, 2002-07-30 at 16:00, Curt Sampson wrote:\n> On 30 Jul 2002, Hannu Krosing wrote:\n> \n> > On Tue, 2002-07-30 at 14:51, Adrian 'Dagurashibanipal' von Bidder wrote:\n> >\n> > > Bruce Momjian:\n> > > > It causes too much complexity in other parts of the system.\n> > >\n> > > That's one reason.\n> >\n> > Seems like somewhat valid reason. But still not enough to do a lot of\n> > work _and_ annoy a lot of existing users :)\n> \n> It's almost unquestionably more work to maintain than to drop. Dropping\n> support for it is a one-time operation. Maintaining it is an ongoing\n> expense.\n\nI would not rush to drop advanced features, as they may be hard to put\nback later. If they stay in, even in broken form, then there wont be\nnearly as much patches which make fixing them harder.\n\nI'm afraid that we have already dropped too much. \n\nFor example we dropped time travel, but recent versions of Oracle now\nhave some form of it, usable mostly for recovering accidentally deleted\n(and committed rows), although it is much harder to implement it using\nlogs than using MVCC.\n\nAlso, I suspect that dropping support for multiple return sets for one\nquery was done too fast.\n\n> > That's quite bogus imho. You could just as well argue that there is\n> > nothing that relational model handles that can't be done in pure C.\n> \n> That's a straw man argument.\n\nActually it was meant to be 'one straw man against another straw man \nargument' ;)\n\n> What we (or I, anyway) are arguing is that\n> the relational model does everything that table inheritance does, and at\n> least as easily.\n\nThe problem is that 'the relational model' does nothing by itself. It is\nalways the developers/DBAs who have to do things. \n\nAnd at least for some brain shapes it is much more convenient to inherit\ntables than to (re)factor stuff into several tables to simulate\ninheritance using the relational model. \n\nI still think that inheritance should be enchanced and made compatible\nwith standards not removed.\n\n> Extending the model adds complexity without adding the\n> ability to do things you couldn't easily do before. (This, IMHO, makes\n> table inheritance quite inelegant.)\n\nThen explain why SQL99 has included inheritance ?\n\n---------------\nHannu\n\n",
"msg_date": "31 Jul 2002 00:54:14 +0500",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> 2. I expect that even most PostgreSQL--or even database--experts don't\n> have a real understanding of relational theory, anyway. That we still\n> have table inheritance shows that. As far as I can tell, there is\n> nothing whatsoever that table inheritance does that the relational model\n> does not handle; the whole \"OO\" thing is just another, redundant way of\n> doing what we already ought to be able to do within the relational model.\n>\n> I'm still waiting to find out just what advantage table inheritance\n> offers. I've asked a couple of times here, and nobody has even started\n> to come up with anything.\n\nCan you point me (someone without a real understanding of relational theory) \nto some good resources that explain the concepts well?\n\nRegards,\n\tJeff\n",
"msg_date": "Tue, 30 Jul 2002 14:41:02 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 31 Jul 2002, Hannu Krosing wrote:\n\n> I would not rush to drop advanced features, as they may be hard to put\n> back later.\n\nIf they are hard to put back, it's generally because the other code\nin the system that relates to it has changed, so you can't just bring\nback what is in the old versions in the CVS repository.\n\nBut if the code was left in, that meant that someone had to make all of\nthose integration changes you'd have to make to bring the code back;\nit's just they had to make it as they were adding new features and\nwhatnot. If in the end you decide that the feature you didn't drop isn't\nimportant, you just did a lot of work for nothing. You may also slow\ndown or stop the implementation of other, more useful features, because\npeople find that the work to add them isn't worthwhile, due to having to\nchange too much code.\n\n> If they stay in, even in broken form, then there wont be\n> nearly as much patches which make fixing them harder.\n\nSummary: someone always has to do the patches. It's just a question of\nwhether you *might* do them *if* you decide to bring the feature back,\nor whether you *will* do them because the feature is there.\n\n> > What we (or I, anyway) are arguing is that\n> > the relational model does everything that table inheritance does, and at\n> > least as easily.\n>\n> The problem is that 'the relational model' does nothing by itself. It is\n> always the developers/DBAs who have to do things.\n\nOk. So \"the developer can do what table inheritance does just as easily\nin the relational model.\"\n\n> And at least for some brain shapes it is much more convenient to inherit\n> tables than to (re)factor stuff into several tables to simulate\n> inheritance using the relational model.\n\nI highly doubt that. Relating two tables to each other via a key, and\njoining them together, allows you to do everything that inheritance\nallows you to do, but also more. If you have difficulty with keys and\njoins, well, you really probably want to stop and fix that problem\nbefore you do more work on a relational database....\n\n> > Extending the model adds complexity without adding the\n> > ability to do things you couldn't easily do before. (This, IMHO, makes\n> > table inheritance quite inelegant.)\n>\n> Then explain why SQL99 has included inheritance ?\n\nBecuase SQL has a long, long history of doing things badly. The language\nhas been non-relational in many ways from the very beginning. But Codd\nand Date argue that much better than I do, so I'd prefer you read their\nbooks and respond to those arguments. I can provide references if you\nneed them.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 08:35:17 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Mon, 2002-07-29 at 08:53, [email protected] wrote:\n> > Just a long standing curiosity?\n> e) Inertia. MySQL got more popular way back when; the reasons may no longer \n\nf) Win32 Support. I can download a setup.exe for mysql and have it up\nand running quickly on Windows. I think that native Win32 support will\ngo a long way toward making Postgres more \"popular\"\n\n",
"msg_date": "30 Jul 2002 21:09:01 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Tue, 30 Jul 2002, Jeff Davis wrote:\n\n> Can you point me (someone without a real understanding of relational theory)\n> to some good resources that explain the concepts well?\n\nC. J. Date's _An Introduction to Database Systems, Seventh Edition_ is\na fat tome that will give you an extremely good grasp of relational\ntheory if you take the time to study it. Even just browsing it is well\nworthwhile. It has some discussion of \"object-oriented\" database systems\nas well.\n\nIn particular (you'll see the relevance of this below) it has an\nexcellent analysis of the updatability of views.\n\nDate and Darwen's _Foundation for Future Database Systems: the\nThird Manifesto_ goes into much more detail about how they feel\nobject-oriented stuff should happen in relational databases. Appendix E\n(\"Subtables and Supertables\") discusses table inheritance. It ends with\nthis statement:\n\n To sum up: It looks as if the whole business of a subtable\n inheriting columns from a supertable is nothing but a syntatic\n shorthand--not that there is anything wrong with syntatic\n shorthands in general, of course, but this particular shorthand\n does not seem particularly useful, and in any case it is always\n more than adequately supported by the conventional view mechanism.\n\n(This, BTW, addresses the note someone else made here about the\nsubtable/supertable thing letting him do one insert instead of two\nor three; he just needs to create a view and appropriate rules,\nand he'll get exactly the same effect. And maybe that will help\nfix his index problems, too....)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 11:31:11 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> I highly doubt that. Relating two tables to each other via a key, and\n> joining them together, allows you to do everything that inheritance\n> allows you to do, but also more. If you have difficulty with keys and\n> joins, well, you really probably want to stop and fix that problem\n> before you do more work on a relational database....\n\nI'm still not convinced of this. For example, my friend has a hardware\ne-store and every different class of hardware has different properties. ie\nmodems have baud and network cards have speed and video cards have ram. He\nsimply just has a 'products' table from which he extends\n'networkcard_products', etc. with the additional fields. Easy.\n\nChris\n\n",
"msg_date": "Wed, 31 Jul 2002 10:43:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> On Mon, 2002-07-29 at 08:53, [email protected] wrote:\n> > > Just a long standing curiosity?\n> > e) Inertia. MySQL got more popular way back when; the reasons\n> may no longer\n>\n> f) Win32 Support. I can download a setup.exe for mysql and have it up\n> and running quickly on Windows. I think that native Win32 support will\n> go a long way toward making Postgres more \"popular\"\n\nSpeaking of that - wasn't someone going to branch the CVS with a whole lot\nof Win32 support stuff? Jan?\n\nChris\n\n",
"msg_date": "Wed, 31 Jul 2002 10:44:26 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Wed, 31 Jul 2002, Christopher Kings-Lynne wrote:\n\n> > I highly doubt that. Relating two tables to each other via a key, and\n> > joining them together, allows you to do everything that inheritance\n> > allows you to do, but also more. If you have difficulty with keys and\n> > joins, well, you really probably want to stop and fix that problem\n> > before you do more work on a relational database....\n>\n> I'm still not convinced of this. For example, my friend has a hardware\n> e-store and every different class of hardware has different properties. ie\n> modems have baud and network cards have speed and video cards have ram. He\n> simply just has a 'products' table from which he extends\n> 'networkcard_products', etc. with the additional fields. Easy.\n\nAnd what's the problem with networkcard_products being a separate table\nthat shares a key with the products table?\n\n CREATE TABLE products (product_id int, ...)\n CREATE TABLE networkcard_products_data (product_id int, ...)\n CREATE VIEW networkcard_products AS\n\tSELECT products.product_id, ...\n\tFROM products\n\tJOINT networkcard_products_data USING (product_id)\n\nWhat functionality does table inheritance offer that this traditional\nrelational method of doing things doesn't?\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 12:03:56 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Curt Sampson wrote:\n> On Wed, 31 Jul 2002, Christopher Kings-Lynne wrote:\n> \n> > > I highly doubt that. Relating two tables to each other via a key, and\n> > > joining them together, allows you to do everything that inheritance\n> > > allows you to do, but also more. If you have difficulty with keys and\n> > > joins, well, you really probably want to stop and fix that problem\n> > > before you do more work on a relational database....\n> >\n> > I'm still not convinced of this. For example, my friend has a hardware\n> > e-store and every different class of hardware has different properties. ie\n> > modems have baud and network cards have speed and video cards have ram. He\n> > simply just has a 'products' table from which he extends\n> > 'networkcard_products', etc. with the additional fields. Easy.\n> \n> And what's the problem with networkcard_products being a separate table\n> that shares a key with the products table?\n> \n> CREATE TABLE products (product_id int, ...)\n> CREATE TABLE networkcard_products_data (product_id int, ...)\n> CREATE VIEW networkcard_products AS\n> \tSELECT products.product_id, ...\n> \tFROM products\n> \tJOINT networkcard_products_data USING (product_id)\n> \n> What functionality does table inheritance offer that this traditional\n> relational method of doing things doesn't?\n\nYou can add children without modifying your code. It is classic C++\ninheritance; parent table accesses work with the new child tables\nautomatically.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Jul 2002 23:11:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Tue, 30 Jul 2002, Bruce Momjian wrote:\n\n> You can add children without modifying your code. It is classic C++\n> inheritance; parent table accesses work with the new child tables\n> automatically.\n\nI don't see how my method doesn't do this as well. What code do you have\nto modify in the relational way of doing things that you don't in this\ninheritance way?\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 12:14:36 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Curt Sampson wrote:\n> On Tue, 30 Jul 2002, Bruce Momjian wrote:\n> \n> > You can add children without modifying your code. It is classic C++\n> > inheritance; parent table accesses work with the new child tables\n> > automatically.\n> \n> I don't see how my method doesn't do this as well. What code do you have\n> to modify in the relational way of doing things that you don't in this\n> inheritance way?\n\nSeems like you have to modify your views to handle this, at least in the\nexample you just posted, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Jul 2002 23:15:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Wed, 2002-07-31 at 04:35, Curt Sampson wrote:\n> On 31 Jul 2002, Hannu Krosing wrote:\n> \n> > I would not rush to drop advanced features, as they may be hard to put\n> > back later.\n> \n> If they are hard to put back, it's generally because the other code\n> in the system that relates to it has changed, so you can't just bring\n> back what is in the old versions in the CVS repository.\n> \n> But if the code was left in, that meant that someone had to make all of\n> those integration changes you'd have to make to bring the code back;\n> it's just they had to make it as they were adding new features and\n> whatnot. If in the end you decide that the feature you didn't drop isn't\n> important, you just did a lot of work for nothing. You may also slow\n> down or stop the implementation of other, more useful features, because\n> people find that the work to add them isn't worthwhile, due to having to\n> change too much code.\n> \n> > If they stay in, even in broken form, then there wont be\n> > nearly as much patches which make fixing them harder.\n> \n> Summary: someone always has to do the patches. It's just a question of\n> whether you *might* do them *if* you decide to bring the feature back,\n> or whether you *will* do them because the feature is there.\n\nOften there are more than one way to do things. And the feature being\nthere may prompt the implementor to choose in favor of a way which does\nnot rule out the feature. It does not neccessarily make that harder for\nnew features, though it may.\n\n> > > What we (or I, anyway) are arguing is that\n> > > the relational model does everything that table inheritance does, and at\n> > > least as easily.\n> >\n> > The problem is that 'the relational model' does nothing by itself. It is\n> > always the developers/DBAs who have to do things.\n> \n> Ok. 
So \"the developer can do what table inheritance does just as easily\n> in the relational model.\"\n> \n> > And at least for some brain shapes it is much more convenient to inherit\n> > tables than to (re)factor stuff into several tables to simulate\n> > inheritance using the relational model.\n> \n> I highly doubt that.\n\nI said it is personal ;) Some other brain shapes are more fit to working\nin relational model, even when writing front-ends in C++ or java.\n\n> Relating two tables to each other via a key, and joining them together,\n\nIt gets more complicated fast when inheritance hierarchies get deeper,\nand some info is often lost (or at least not explicitly visible from\nschema). That's why advanced modeling tools allow you to model things as\ninheritance hierarchies even when they have to map it to relational\nmodel for databases which do not support inheritance.\n\nAn it is often easier to map OO languages to OOR database when you dont\nhave to change your mindset when going through the interface.\n\n> allows you to do everything that inheritance allows you to do,\n> but also more.\n\n* you can do anything (and more ;) that DOMAINs do without domains.\n* And you can do anything and more that can be done in C++ in C.\n* And you can do anything sequences do and more without explicit syntax\n for sequences (except making them live outside of transactions,\n but this is mainly a performance hack and sequences are outside\n of relational theory anyway ;)\n* And as I already mentioned, you can compute anything on\n a Turing Machine (I doubt you can compute more, but it is not\n entirely impossible as it has to work 'more' ;)\n\n> If you have difficulty with keys and\n> joins, well, you really probably want to stop and fix that problem\n> before you do more work on a relational database....\n\nIt is of course beneficial to make joins faster, but it is often easier\nto do for more specific cases, when the user has implicitly stated what\nkind of a join he 
means.\n\nOne example of that is the existence of contrib/intagg which is meant to\nmake the relational method usable (performance-wise) for a class of\nproblems where _pure_ relational way falls down. \n\n> > > Extending the model adds complexity without adding the\n> > > ability to do things you couldn't easily do before. (This, IMHO, makes\n> > > table inheritance quite inelegant.)\n> >\n> > Then explain why SQL99 has included inheritance ?\n> \n> Becuase SQL has a long, long history of doing things badly.\n\nOr to rephrase it: SQL has a long, long history of doing things (though\nbadly)\n\n> The language has been non-relational in many ways from the very beginning.\n\nSQL has had pressure to be usable for a broad range of real-world\nproblems from the beginning, which theory has not.\n\n> But Codd and Date argue that much better than I do, so I'd prefer you\n> read their books and respond to those arguments. I can provide\n> references if you need them.\n\nIn theory theory and practice are the same, in practice they are often\nnot nearly so.\n\nFrom your reference:\n\n|Date and Darwen's _Foundation for Future Database Systems: the\n|Third Manifesto_ goes into much more detail about how they feel\n|object-oriented stuff should happen in relational databases. Appendix E\n|(\"Subtables and Supertables\") discusses table inheritance.
It ends with\n|this statement:\n|\n| To sum up: It looks as if the whole business of a subtable\n| inheriting columns from a supertable is nothing but a syntactic\n| shorthand--not that there is anything wrong with syntactic\n| shorthands in general, of course, but this particular shorthand\n| does not seem particularly useful, and in any case it is always\n| more than adequately supported by the conventional view mechanism.\n\nWhich is clearly not true in PostgreSQL's case, as adequate support\nwould IMHO mean that the rules for insert/update/delete were generated\nautomatically for views as they are for select.\n\nOf course we could go the other way and remove support for VIEW's as\nthey can be done using a table and a ON SELECT DO INSTEAD rule. \nActually this is how they are done.\n\n----------------------\nHannu\n\n",
"msg_date": "31 Jul 2002 09:06:23 +0500",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Tue, 30 Jul 2002, Bruce Momjian wrote:\n\n> Curt Sampson wrote:\n> > On Tue, 30 Jul 2002, Bruce Momjian wrote:\n> >\n> > > You can add children without modifying your code. It is classic C++\n> > > inheritance; parent table accesses work with the new child tables\n> > > automatically.\n> >\n> > I don't see how my method doesn't do this as well. What code do you have\n> > to modify in the relational way of doing things that you don't in this\n> > inheritance way?\n>\n> Seems like you have to modify your views to handle this, at least in the\n> example you just posted, right?\n\nYou need to create a new view for the \"child\" table, yeah. But you had to\ncreate a child table anyway. But all the previously existing code you had\ncontinues to work unchanged.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 13:31:15 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n> Of course we could go the other way and remove support for VIEW's as\n> they can be done using a table and a ON SELECT DO INSTEAD rule. \n\nTwo points for Hannu ;-)\n\nSeriously, this entire thread seems a waste of bandwidth to me.\nInheritance as a feature isn't costing us anything very noticeable\nto maintain, and so I see no credible argument for expending the\neffort to rip it out --- even if I placed zero value on the annoyance\nfactor for users who are depending on it. (Which I surely don't.)\n\nIt's true that upgrading inheritance to handle features like cross-table\nuniqueness constraints or cross-table foreign keys is not trivial. But\nI don't know of any way to handle those problems in bog-standard SQL92\neither. The fact that we don't have a solution to those issues at\npresent doesn't strike me as a reason to rip out the functionality we\ndo have.\n\nIn short: give it a rest. There's lots of things we could be more\nproductively arguing about. Think about which type conversions should\nbe implicit, if you need a topic ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 31 Jul 2002 02:17:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "On 31 Jul 2002, Hannu Krosing wrote:\n\n> An it is often easier to map OO languages to OOR database when you dont\n> have to change your mindset when going through the interface.\n\nBut you have to anyway! Adding this inheritance does not remove the\nrelational model; it's still there right in front of you, and you still\nhave to use it. You have simply added another model to keep track of as\nwell.\n\nAnd I've done a fair amount of OO language <-> relational database\ninterfacing, and the problems I've encountered are not helped by\ntable inheritance. In fact, table inheritance has been irrelevant.\nBut maybe I missed some problems.\n\n> > allows you to do everything that inheritance allows you to do,\n> > but also more.\n>\n> * And you can do anything and more that can be done in C++ in C.\n\nOk, this is really starting to annoy me. Can we stop with this argument,\nsince you *know* it is attacking a straw man?\n\n> > If you have difficulty with keys and\n> > joins, well, you really probably want to stop and fix that problem\n> > before you do more work on a relational database....\n>\n> It is of course beneficial to make joins faster, but it is often easier\n> to do for more specific cases, when the user has implicitly stated what\n> kind of a join he means.\n\nNo, my point is, you simply cannot do good work at all on a relational\nDB without understanding keys and joins. It does not matter whether\ntable inheritance is present or not. Therefore everybody effectively\nusing a database is going to have enough knowledge to do this stuff\nwithout table inheritance.\n\n> One example of that is the existance of contrib/intagg which is meant to\n> make the relational method usable (performance-wise) for a class of\n> problems where _pure_ relational way falls down.\n\nYou seem to be confusing the relational model with a particular\nimplementation of a relational database.
The relational model handles\nthis just fine, because the relational model doesn't have performance.\n\nThis particular contrib module does not change anything at all\nabout the relational model as implemented in postgres. It just\nprovides a particular performance work-around. Note also that the\nperformance problem can also be fixed in other ways; under MS-SQL server\nI'd simply use a clustered index on the one-to-many table.\n\nIn fact, given that contrib/intagg works only with relatively static\ndata, I'm not sure why you'd use it instead of just using the\nCLUSTER command once in a while.\n\n> SQL has had pressure to be usable for a broad range of real-world\n> problems from the beginning, which theory has not.\n\nSQL is actually much less usable for many real-world problems than\na proper relational language is. But as I said, read Date, and then\nargue; I'm not going to spend days rewriting his books here.\n\n> |Date and Darwen's _Foundation for Future Database Systems: the\n> |Third Manifesto_ goes into much more detail about how they feel\n> |object-oriented stuff should happen in relational databases. Appendix E\n> |(\"Subtables and Supertables\") discusses table inheritance.
It ends with\n> |this statement:\n> |\n> | To sum up: It looks as if the whole business of a subtable\n> | inheriting columns from a supertable is nothing but a syntatic\n> | shorthand--not that there is anything wrong with syntatic\n> | shorthands in general, of course, but this particular shorthand\n> | does not seem particularly useful, and in any case it is always\n> | more than adequately supported by the conventional view mechanism.\n>\n> Which is clearly not true in PostgreSQL's case, as adequate support\n> would IMHO mean that the rules for insert/update/delete were generated\n> automatically for views as they are for select.\n\nIt certainly would be nice if we did that.\n\n> Of course we could go the other way and remove support for VIEW's as\n> they can be done using a table and a ON SELECT DO INSTEAD rule.\n> Actually this is how they are done.\n\n*Sigh*. You seem to be unable to distinguish between changes to\nthe conceptual model of a system and changes to implementation\ndetails.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 31 Jul 2002 15:23:43 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Hi\n\n> And what's the problem with networkcard_products being a separate table\n> that shares a key with the products table?\n>\n> CREATE TABLE products (product_id int, ...)\n> CREATE TABLE networkcard_products_data (product_id int, ...)\n> CREATE VIEW networkcard_products AS\n> SELECT products.product_id, ...\n> FROM products\n> JOIN networkcard_products_data USING (product_id)\n>\n> What functionality does table inheritance offer that this traditional\n> relational method of doing things doesn't?\n\nWell, if you also have soundcard_products, in your example you could have a\nproduct which is both a networkcard AND a soundcard. No way to restrict that\na product can be only one 'subclass' at a time... If you can make that\nrestriction using the relational model, you can do the same as with\nsubclasses. But afaict that is very hard to do...\n\nSander.\n\n\n\n",
"msg_date": "Wed, 31 Jul 2002 10:35:12 +0200",
"msg_from": "\"Sander Steffann\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Tue, 2002-07-30 at 14:54, Hannu Krosing wrote:\n> On Tue, 2002-07-30 at 16:00, Curt Sampson wrote:\n> > On 30 Jul 2002, Hannu Krosing wrote:\n> > \n> > > On Tue, 2002-07-30 at 14:51, Adrian 'Dagurashibanipal' von Bidder wrote:\n> > >\n> > > > Bruce Momjian:\n> > > > > It causes too much complexity in other parts of the system.\n> > > >\n> > > > That's one reason.\n> > >\n> > > Seems like somewhat valid reason. But still not enough to do a lot of\n> > > work _and_ annoy a lot of existing users :)\n> > \n> > It's almost unquestionably more work to maintain than to drop. Dropping\n> > support for it is a one-time operation. Maintaining it is an ongoing\n> > expense.\n> \n> I would not rush to drop advanced features, as they may be hard to put\n> back later. If they stay in, even in broken form, then there wont be\n> nearly as much patches which make fixing them harder.\n\nI seem to find this argument a lot on the list here. For some reason,\nmany of the developers are under the impression that even if code is\nnever touched, it has a very high level of effort to keep it in the code\nbase. That is, of course, completely untrue. Now then, I'm not saying\nthat something as central as the topic at hand has a zero maintenance\ncost associated with it, especially if it's constantly being run into by\nthe developers, but I do see it used WAY too often here for it to be\napplicable in every case.\n\nFrom what I can tell, in many cases, when one developer on the list\ndoesn't want to maintain or sees little value in a feature, it suddenly\nseems to have a high price associated with it. We need to be sure we're\nmaking the distinction between, \"I don't care to maintain this\", and,\n\"maintaining this code is prohibitively high given its feature\nreturn...because...\". In other words, I find this argument used often\nhere with little to nothing used in context which would quantify it.
\nWorse yet, it generally goes unchallenged and unquestioned.\n\n> \n> I'm afraid that we have already dropped too much. \n> \n> For example we dropped time travel, but recent versions of Oracle now\n> have some form of it, usable mostly for recovering accidentally deleted\n> (and committed rows), although it is much harder to implement it using\n> logs than using MVCC.\n\nI must admit, I never understood this myself but I'm sure I'm ignorant\nof the details.\n\n> > That's a straw man argument.\n> \n> Actually it was meant to be 'one straw man against another straw man \n> argument' ;)\n\nWas clear to me! I thought you made the point rather well.\n\n> \n> > What we (or I, anyway) are arguing is that\n> > the relational model does everything that table inheritance does, and at\n> > least as easily.\n> \n> The problem is that 'the relational model' does nothing by itself. It is\n> always the developers/DBAs who have to do things. \n> \n> And at least for some brain shapes it is much more convenient to inherit\n> tables than to (re)factor stuff into several tables to simulate\n> inheritance using the relational model. \n\nAgreed. It's important to remember, there are some cases where the\nconceptual implications can allow for more freedom in implementation. \nThis is the point that was being made with the \"pure C\" comment. Sure,\nI can do pretty much anything in asm, but that approach doesn't suddenly\ninvalidate every other way/language/concept/idiom for trying to\naccomplish a given task.\n\nSimply put, much of the power you get from any tool is often the\nflexibility of a given tool to address a problem domain in many\ndifferent ways rather than just one.
Just because it doesn't fit your\nparadigm doesn't mean it doesn't fit nicely into someone else's.\n\n> \n> I still think that inheritance should be enhanced and made compatible\n> with standards not removed.\n\nI completely agree with that!\n\n> \n> > Extending the model adds complexity without adding the\n> > ability to do things you couldn't easily do before. (This, IMHO, makes\n> > table inheritance quite inelegant.)\n> \n> Then explain why SQL99 has included inheritance ?\n> \n\nYes please. I'm very interested in hearing a rebuttal to this one.\n\nGreg",
"msg_date": "01 Aug 2002 13:27:56 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 1 Aug 2002, Greg Copeland wrote:\n\n> For some reason,\n> many of the developers are under the impression that even if code is\n> never touched, it has a very high level of effort to keep it in the code\n> base. That is, of course, completely untrue.\n\nWhere does this \"of course\" come from? I've been programming for quite a\nwhile now, and in my experience every line of code costs you something\nto maintain. As long as there's any interaction with other parts of\nthe system, you have to test it regularly, even if you don't need to\ndirectly change it.\n\nThat said, if you've been doing regular work on postgres code base and you\nsay that it's cheap to maintain, I'll accept that.\n\n> > Then explain why SQL99 has included inheritance ?\n>\n> Yes please. I'm very interested in hearing a rebuttal to this one.\n\nBecause SQL99 is non-relational in many ways, so I guess they\nfigured making it non-relational in one more way can't hurt.\n\nI mean come on, this is a language which started out not even\nrelationally complete!\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 2 Aug 2002 12:39:18 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Greg Copeland <[email protected]> writes:\n> I seem to find this argument a lot on the list here. For some reason,\n> many of the developers are under the impression that even if code is\n> never touched, it has a very high level of effort to keep it in the code\n> base. That is, of course, completely untrue.\n\nFWIW, I did not notice any of the core developers making that case.\n\nAs far as I'm concerned, any patch to remove inheritance will be\nrejected out of hand. It's not costing us anything significant to\nmaintain as-is, and there are a goodly number of people using it.\nExtending it (eg, making cross-table indexes to support inherited\nuniqueness constraints) is a different kettle of fish --- but until\nsomeone steps up to the plate with an implementation proposal, it's\nrather futile to speculate what that might cost. In the meantime,\nthe lack of any such plan is no argument for removing the functionality\nwe do have.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 00:30:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "On Fri, 2 Aug 2002, Tom Lane wrote:\n\n> Greg Copeland <[email protected]> writes:\n> > I seem to find this argument a lot on the list here. For some reason,\n> > many of the developers are under the impression that even if code is\n> > never touched, it has a very high level of effort to keep it in the code\n> > base. That is, of course, completely untrue.\n>\n> FWIW, I did not notice any of the core developers making that case.\n>\n> As far as I'm concerned, any patch to remove inheritance will be\n> rejected out of hand. It's not costing us anything significant to\n> maintain as-is, and there are a goodly number of people using it.\n> Extending it (eg, making cross-table indexes to support inherited\n> uniqueness constraints) is a different kettle of fish --- but until\n> someone steps up to the plate with an implementation proposal, it's\n> rather futile to speculate what that might cost. In the meantime,\n> the lack of any such plan is no argument for removing the functionality\n> we do have.\n\nDefinitely concur ... in fact, didn't someone recently do some work to\nimprove our inheritance code, as it wasn't 'object enough' for them?\nIsn't inheritance kinda one of those things that is required in order to\nconsider ourselves an ORDBMS, which we do classify ourselves as being?\n\n",
"msg_date": "Fri, 2 Aug 2002 03:52:21 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "On Fri, 2 Aug 2002, Marc G. Fournier wrote:\n\n> Isn't inheritance kinda one of those things that is required in order to\n> be consider ourselves ORBDMS, which we do classify our selves as being?\n\nWell, it depends on what you call an ORDBMS. By the standards of\nDate and Darwen in _The Third Manifesto_, table inheritance is not\nrequired and is in fact discouraged as a feature trivially implemented\nwith views, foreign keys and constraints. (Though that does not\nmean that postgresql currently has an implementation of these that\nwill make it trivial.)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 2 Aug 2002 15:55:57 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "On 2 Aug 2002, Hannu Krosing wrote:\n\n> Is _The Third Manifesto_ available online ?\n\nNo. It's a book, and not a terribly small one, either.\n\n http://www.amazon.com/exec/obidos/ASIN/0201709287/\n\n> Could you brief me why do they discourage a syntactical frontent to a\n> feature that is trivially implemented ?\n\nWhat's the point of adding it? It's just one more thing to learn.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 2 Aug 2002 19:15:40 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 2 Aug 2002, Hannu Krosing wrote:\n\n> Could you point me to some pure relational languages ?\n> Preferrably not pure academic at the same time ;)\n\nThe QUEL and PostQUEL languages used in Ingres and the old Postgres were\nrather more \"relational\" than SQL.\n\n> BTW, what other parts of SQL do you consider non-relational (and thus\n> candidates for dropping) ?\n\nI have nothing particular in mind right now. Also, note that merely\nbeing non-relational does not make a language element a candidate\nfor dropping. If lots of other databases implement a feature, it\nwould be silly to destroy compatibility for the sake of theory.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 2 Aug 2002 19:23:04 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "* Hannu Krosing <[email protected]> [020802 06:32]:\n> Your argument can as well be used against VIEWs - whats the point of\n> having them, when they can trivially be implemented using ON XXX DO\n> INSTEAD rules.\n\nWell, at least on PostgreSQL it makes a difference. We allow views to\nhave permissions granted to them independent of the underlying tables.\nIt's a nice, distinguishing feature. What other database allows you\nto grant one person access to a subset of the columns of a table as\nwell as a subset of the rows?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Fri, 2 Aug 2002 06:46:32 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Fri, 2002-08-02 at 08:55, Curt Sampson wrote:\n> On Fri, 2 Aug 2002, Marc G. Fournier wrote:\n> \n> > Isn't inheritance kinda one of those things that is required in order to\n> > be consider ourselves ORBDMS, which we do classify our selves as being?\n> \n> Well, it depends on what you call an ORDBMS. By the standards of\n> Date and Darwen in _The Third Manifesto_,\n\nIs _The Third Manifesto_ available online ?\n\n> table inheritance is not\n> required and is in fact discouraged as a feature trivially implemented\n> with views, foreign keys and constraints. (Though that does not\n> mean that posgresql currently has an implementation of these that\n> will make it trivial.)\n\nCould you brief me why do they discourage a syntactical frontend to a\nfeature that is trivially implemented ? \n\nIf it is just views, foreign keys and constraints anyway, it should not\nadd complexity to implementation.\n\nOTOH, stating explicitly what you mean, can give the system extra hints\nfor making good optimisation decisions.\n\n-------------\nHannu\n\n",
"msg_date": "02 Aug 2002 13:07:56 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Fri, 2002-08-02 at 05:39, Curt Sampson wrote:\n> Because SQL99 is non-relational in many ways, so I guess they\n> figured making it non-relational in one more way can't hurt.\n> \n> I mean come on, this is a language which started out not even\n> relationally complete!\n\nCould you point me to some pure relational languages ?\n\nPreferrably not pure academic at the same time ;)\n\nBTW, what other parts of SQL do you consider non-relational (and thus\ncandidates for dropping) ?\n\n-------------\nHannu\n\n",
"msg_date": "02 Aug 2002 13:10:47 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Fri, 2002-08-02 at 12:15, Curt Sampson wrote:\n> On 2 Aug 2002, Hannu Krosing wrote:\n> \n> > Is _The Third Manifesto_ available online ?\n> \n> No. It's a book, and not a terribly small one, either.\n> \n> http://www.amazon.com/exec/obidos/ASIN/0201709287/\n> \n> > Could you brief me why do they discourage a syntactical frontent to a\n> > feature that is trivially implemented ?\n> \n> What's the point of adding it? It's just one more thing to learn.\n\nYou don't have to learn it if you don't want to. But once you do, you\nhave a higher level way of expressing a whole class of models.\n\nYour argument can as well be used against VIEWs - whats the point of\nhaving them, when they can trivially be implemented using ON XXX DO\nINSTEAD rules.\n\n--------------\nHannu\n\n",
"msg_date": "02 Aug 2002 13:34:21 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> On Fri, 2002-08-02 at 08:55, Curt Sampson wrote:\n> > On Fri, 2 Aug 2002, Marc G. Fournier wrote:\n> > \n> > > Isn't inheritance kinda one of those things that is required in order to\n> > > be consider ourselves ORBDMS, which we do classify our selves as being?\n> > \n> > Well, it depends on what you call an ORDBMS. By the standards of\n> > Date and Darwen in _The Third Manifesto_,\n> \n> Is _The Third Manifesto_ available online ?\n\nThe full book is not.\n\nAn earlier version of the work is available as: http://www.acm.org/sigmod/record/issues/9503/manifesto.ps\n\nIt's actually an easier read than the full book.\n--\n(concatenate 'string \"cbbrowne\" \"@cbbrowne.com\")\nhttp://www3.sympatico.ca/cbbrowne/finances.html\n\"very few people approach me in real life and insist on proving they\nare drooling idiots.\" -- Erik Naggum, comp.lang.lisp\n\n\n",
"msg_date": "Fri, 02 Aug 2002 09:55:07 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Third Manifesto"
},
{
"msg_contents": "On Thu, 2002-08-01 at 23:30, Tom Lane wrote:\n> Greg Copeland <[email protected]> writes:\n> > I seem to find this argument a lot on the list here. For some reason,\n> > many of the developers are under the impression that even if code is\n> > never touched, it has a very high level of effort to keep it in the code\n> > base. That is, of course, completely untrue.\n> \n> FWIW, I did not notice any of the core developers making that case.\n> \n\nI've seen it used a lot. In many cases, it's asserted with nothing to\nsupport it other than the fact that they are a core developer, however,\nthese assertions are often given against unspecified and undeveloped\ncode, so, it makes such an assertion invalid. \n\nGreg",
"msg_date": "02 Aug 2002 09:14:46 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Thu, 2002-08-01 at 22:39, Curt Sampson wrote:\n> On 1 Aug 2002, Greg Copeland wrote:\n> \n> > For some reason,\n> > many of the developers are under the impression that even if code is\n> > never touched, it has a very high level of effort to keep it in the code\n> > base. That is, of course, completely untrue.\n> \n> Where does this \"of course\" come from? I've been programming for quite a\n> while now, and in my experience every line of code costs you something\n> to maintain.\n\nPlease re-read my statement. Your assertion and my statement are by no\nmeans exclusionary. \"Of course\" was correctly used and does correctly\napply, however, it doesn't appear it was correctly comprehended by you\nas it applied in context. I agree with your statement of, \"...every\nline of code costs you something to maintain...\" which in no way, shape,\nor form contradicts my statement of, \"...it has a very high level of\neffort...of course not...\". Fact is, if code which is never touched\nrequires a very high level of effort to maintain, chances are you screwed up\nsomewhere.\n\nHopefully we can agree that \"...costs you something...\" does not have to\nmean, \"...very high level of effort...\"\n\n> As long as there's any interaction with other parts of\n> the system, you have to test it regularly, even if you don't need to\n> directly change it.\n\nPlease re-read my statement. In my mind, this was implicitly\nunderstood from the statement I made.\n\nSheesh...sure hope I remembered to dot all my \"i's\"...\n\nGreg",
"msg_date": "02 Aug 2002 09:26:10 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Greg Copeland <[email protected]> writes:\n> On Thu, 2002-08-01 at 23:30, Tom Lane wrote:\n>> FWIW, I did not notice any of the core developers making that case.\n\n> I've seen it used a lot.\n\nPerhaps my meaning wasn't clear: I meant that no one who's familiar\nwith the code base has made that argument against inheritance. It\ndoesn't impact enough of the code to be a maintenance problem. There\nis quite a bit of inheritance code in tablecmds.c, and one or two\nother files, but overall it's a very small issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Aug 2002 10:35:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL? "
},
{
"msg_contents": "> Well, if you also have soundcard_products, in your example you could have a\n> product which is both a networkcard AND a soundcard. No way to restrict\n> that a product can be only one 'subclass' at a time... If you can make that\n> restriction using the relational model, you can do the same as with\n> subclasses. But afaict that is very hard to do...\n>\n\nPerhaps I'm mistaken, but it looks to me as if the relational model still \nholds quite cleanly. \n\nCREATE TABLE products (\nid int4 primary key,\nname text );\n\nCREATE TABLE soundcard (\nprod_id int4 REFERENCES products(id),\nsome_feature BOOLEAN);\n\nCREATE VIEW soundcard_v AS SELECT * FROM products, soundcard WHERE products.id \n= soundcard.prod_id;\n\nCREATE TABLE networkcard (\nprod_id int4 REFERENCES products(id),\nhundred_base_t BOOLEAN);\n\nCREATE VIEW networkcard_v AS SELECT * FROM products, networkcard WHERE \nproducts.id = networkcard.prod_id;\n\nNow, to get the networkcard/soundcard combos, you just need to do:\nSELECT * FROM soundcard_v, networkcard_v WHERE soundcard_v.id = \nnetworkcard_v.id;\n\nFor what it's worth, I didn't make any mistakes writing it up the first time. \nIt most certainly \"fits my brain\" well and seems simple and clean.\n\nI am not advocating that we remove inheritance, but I (so far) agree with Curt \nthat it's pretty useless.\n\nRegards,\n\tJeff\n\n",
"msg_date": "Fri, 2 Aug 2002 10:53:30 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Fri, 2002-08-02 at 13:53, Jeff Davis wrote:\n> > Well, if you also have soundcard_products, in your example you could have a\n> > product which is both a networkcard AND a soundcard. No way to restrict\n> > that a product can be only one 'subclass' at a time... If you can make that\n> > restriction using the relational model, you can do the same as with\n> > subclasses. But afaict that is very hard to do...\n> >\n> \n> Perhaps I'm mistaken, but it looks to me as if the relational model still \n> holds quite cleanly. \n> \n> CREATE TABLE products (\n> id int4 primary key,\n> name text );\n> \n> CREATE TABLE soundcard (\n> prod_id int4 REFERENCES products(id),\n> some_feature BOOLEAN);\n> \n> CREATE VIEW soundcard_v AS SELECT * FROM products, soundcard WHERE products.id \n> = soundcard.prod_id;\n> \n> CREATE TABLE networkcard (\n> prod_id int4 REFERENCES products(id),\n> hundred_base_t BOOLEAN);\n> \n> CREATE VIEW networkcard_v AS SELECT * FROM products, networkcard WHERE \n> products.id = networkcard.prod_id;\n> \n> Now, to get the networkcard/soundcard combos, you just need to do:\n> SELECT * FROM soundcard_v, networkcard_v WHERE soundcard_v.id = \n> networkcard_v.id;\n> \n> For what it's worth, I didn't make any mistakes writing it up the first time.
\n> It most certainly \"fits my brain\" well and seems simple and clean.\n\nYup, you've basically done it -- but you still need the permissions\nlines (soundcard people shouldn't be able to modify networkcard products\n-- but rules on the views could accomplish that).\n\ncreate table product(prod_id int4 primary key);\ncreate table networkcard(hundred_base_t boolean) inherits(product);\ncreate table soundcard(some_feature boolean) inherits(product);\ncreate table something(some_feature integer) inherits(product);\n\nMy favorite (and regularly abused):\n\ncreate table package_deal(package_price) inherits (product, networkcard,\nsoundcard, something);\n\n\nPoor examples, as no one would make a sellable package that way, but it\nshows how it is simply shorter to do. New 'product' consists of a\nnetworkcard, soundcard, and something -- always.\n\n\nNobody is saying that:\n\nESC:%s/aba/wo/g\n\nis a real easy way to know to replace all occurrences of 'aba' with\n'wo', and there are lots of other ways of doing it -- but if you happen\nto know it, then it certainly makes life easier but is not a very\nportable command set :)\n\n\nViews don't do much else but make life easier. Putting the SQL into the\noriginal queries is just as effective and slightly lower overhead.\n\nInheritance for me makes life a little bit easier in certain places. \nIt's also easier for the programmers to follow than a wackload of views\nand double inserts.\n\n",
"msg_date": "02 Aug 2002 14:50:08 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 2 Aug 2002, Hannu Krosing wrote:\n\n> On Fri, 2002-08-02 at 12:15, Curt Sampson wrote:\n> > On 2 Aug 2002, Hannu Krosing wrote:\n> >\n> > > Could you brief me why do they discourage a syntactical frontent to a\n> > > feature that is trivially implemented ?\n> >\n> > What's the point of adding it? It's just one more thing to learn.\n>\n> You don't have to learn it if you don't want to. But once you do, you\n> have a higher level way of expressing a whole class of models.\n\nPerhaps this is the problem. I disagree that it's a \"higher\" level.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Sat, 3 Aug 2002 20:32:10 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Sat, 2002-08-03 at 16:32, Curt Sampson wrote:\n> On 2 Aug 2002, Hannu Krosing wrote:\n> \n> > On Fri, 2002-08-02 at 12:15, Curt Sampson wrote:\n> > > On 2 Aug 2002, Hannu Krosing wrote:\n> > >\n> > > > Could you brief me why do they discourage a syntactical frontent to a\n> > > > feature that is trivially implemented ?\n> > >\n> > > What's the point of adding it? It's just one more thing to learn.\n> >\n> > You don't have to learn it if you don't want to. But once you do, you\n> > have a higher level way of expressing a whole class of models.\n> \n> Perhaps this is the problem. I disagree that it's a \"higher\" level.\n\nI don't mean \"morally higher\" ;)\n\nJust more concise and easier to grasp, same as VIEW vs. TABLE + ON xxx\nDO INSTEAD rules.\n\nWith INSTEAD rules you can do more than a VIEW does, but when all you\nwant is a VIEW, then it is easier to define a VIEW, thus VIEW is a\nhigher level construct than TABLE + ON xxx DO INSTEAD\n\nThat is the same way that C is \"higher\" than ASM and ASM is higher than\nwriting code directly using hex editor.\n\n--------------\nHannu\n\n",
"msg_date": "03 Aug 2002 18:19:30 +0500",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Hi,\n\n> > Well, if you also have soundcard_products, in your example you could\nhave a\n> > product which is both a networkcard AND a soundcard. No way to restrict\n> > that a product can be only one 'subclass' at a time... If you can make\nthat\n> > restriction using the relational model, you can do the same as with\n> > subclasses. But afaict that is very hard to do...\n>\n> CREATE VIEW networkcard_v AS SELECT * FROM products, networkcard WHERE\n> products.id = networkcard.prod_id;\n\nI think I was not clear enough... You just demonstrated that it is possible\nto have a card that is a soundcard and a networkcard at the same time. The\npoint I tried to make was that it is difficult to _prevent_ this. Of course I\nagree with you that your example fits the relational model perfectly!\n\nI have this problem in a few real-life cases, so if you have a solution to\nthis, I would really appreciate it!\nSander.\n\n\n\n",
"msg_date": "Sat, 3 Aug 2002 19:33:04 +0200",
"msg_from": "\"Sander Steffann\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 3 Aug 2002, Hannu Krosing wrote:\n\n> On Sat, 2002-08-03 at 16:32, Curt Sampson wrote:\n> > On 2 Aug 2002, Hannu Krosing wrote:\n> >\n> > Perhaps this is the problem. I disagree that it's a \"higher\" level.\n>\n> I don't mean \"morally higher\" ;)\n> Just more concise and easier to grasp, same as VIEW vs. TABLE + ON xxx\n> DO INSTEAD rules.\n\nThat's because we don't do a good job of implementing updatable views.\nViews ought to be as fully updatable as possible given the definition,\nwithout having to define rules for doing this. Simple views such as\n\n CREATE TABLE tab1 (\n\tid\tint,\n\tfoo\ttext\n\t)\n CREATE TABLE tab2 (\n\tid\tint,\n\tbar\ttext\n\t)\n CREATE VIEW something AS\n\tSELECT tab1.id, tab1.foo, tab2.bar\n\tFROM tab1, tab2\n\tWHERE tab1.id = tab2.id\n\nought to be completely updatable without any special rules.\n\nFor further info see the detailed discussion of this in Date's\ndatabase textbook.\n\n> That is the same way that C is \"higher\" than ASM and ASM is higher than\n> writing code directly using hex editor.\n\nNo, this is the same way that Smalltalk is \"higher\" than Lisp.\n(I.e., it isn't.)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Sun, 4 Aug 2002 12:49:41 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Sat, 3 Aug 2002, Sander Steffann wrote:\n\n> I have this problem in a few real-life cases, so if you have a solution to\n> this, I would really appreciate it!\n\nAdd a card_type column to your main table, and insert something\nindicating the value of the card type there.\n\nThat won't stop you from having entries for the card in both\nnetwork_card and sound_card, but one of those entries will be\nmeaningless extra data.\n\nOf course, this also means you have to go back to the relational\nmodel to select all your network cards. Doing\n\n SELECT * FROM network_card\n\nmay also return (incorrectly inserted) non-network cards, if your\ndata are not clean, but\n\n SELECT card.card_id, card.whatever, network_card.*\n FROM card, network_card\n WHERE card.card_id = network_card.card_id\n\tAND card.card_type = 'N'\n\nis guaranteed to return correct results. And of course you can just\nmake that a view called network_card, and the same statement as\nyou used with the inherited table will work.\n\nOops, did I just replace your \"object-oriented\" system with a\nrelational one that does everything just as easily, and even does\nsomething the object-oriented one can't do? Sorry about that. :-)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Wed, 7 Aug 2002 11:41:54 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Curt Sampson wrote:\n> On Sat, 3 Aug 2002, Sander Steffann wrote:\n> \n> \n>>I have this problem in a few real-life cases, so if you have a sollution to\n>>this, I would realy appreciate it!\n> \n> \n> Add a card_type column to your main table, and insert something\n> indicating the value of the card type there.\n> \n> That won't stop you from having entries for the card in both\n> network_card and sound_card, but one of those entries will be\n> meaningless extra data.\n\nSo again relational theory can solve the problem but at a cost in \nefficiency.\n\nSo could a Turing machine.\n\n> Of course, this also means you have to go back to the relational\n> model to select all your network cards. Doing\n> \n> SELECT * FROM network_card\n> \n> may also return (incorrectly inserted) non-network cards, if your\n> data are not clean, but\n> \n> SELECT card.card_id, card.whatever, network_card.*\n> FROM card, network_card\n> WHERE card.card_id = network_card.card_id\n> \tAND card.type = 'N'\n> \n> is guaranteed to return correct results. And of course you can just\n> make that a view called network_card, and the same statement as\n> you used with the inerhited table will work.\n\nThe view would work, but of course you have to define the view. Any \ntime you have to do something manually, even something as simple as to \ndefine a view, the chance for casual error is introduced.\n\n> Oops, did I just replace your \"object-oriented\" system with a\n> relational one that does everything just as easily, and even does\n> something the object-oriented one can't do?\n\nYou mean \"waste space with meaningless extra data\"?\n\nOf *course* you can do that in an object-oriented one. Your skills \naren't unique, nor is your skill level though you act as though you \nthink you're in a class of your own.\n\n> Sorry about that. :-)\n\nMe, too. 
The relational model is extremely powerful but it's not the \nbe-all and end-all of all things.\n\nYou still haven't answered my earlier observation that the PG model, \nwith all its flaws, can reduce the number of joins required.\n\nFor instance in your example card and network card need to be joined if \nyou want to return network card. That's what I see in the view.\n\n\"FROM card, network_card\"\n\nUsing PG's inheritance no join is necessary.\n\nI assume you know that because you've demonstrated your brilliance to \nsuch an extent that I can only assume you've familiarized yourself with \nthe actual details of PG's implementation?\n\nI can't imagine you're the kind of mouth-flapper that would do so \nwithout such basic research, after all.\n\nSo ... assuming my assumption is true and that you've bothered to study \nthe implementation, why should I prefer the join over the \nfaster-executing single-table extraction if I use PG's type extension \nfacility?\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Tue, 06 Aug 2002 20:09:48 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Tue, 6 Aug 2002, Don Baccus wrote:\n\n> So again relational theory can solve the problem but at a cost in\n> efficiency.\n\nIf you're talking about theory, efficiency doesn't come into it.\nThe question is how and whether you can express the constraints\nyou need to express.\n\nNote that I am not advocating removing anything that does not fit into\nrelational theory but does let us do things more efficiently. We live\nin an imperfect world, after all.\n\nIn fact, why don't we split the discussion into two separate parts:\nrelational theory vs. object-oriented theory, and practical use\nwith postgres, and never mix the two. Ok?\n\n> So could a Turing machine.\n\nTheory: Sure. But this is much harder to express in a Turing machine,\nisn't it?\n\n> The view would work, but of course you have to define the view. Any\n> time you have to do something manually, even something as simple as to\n> define a view, the chance for casual error is introduced.\n\nTheory: views should automatically make themselves as updatable as\npossible, unless expressed otherwise. In fact, relationally, there\nis no difference between a view and a base table; that's only part\nof a storage model, which doesn't come into it in our perfect\ntheoretical world.\n\nPractice: defining a non-updatable view is pretty trivial in\npostgres. 
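A rough sketch of what the writable version takes, reusing Jeff's products/networkcard tables from earlier in the thread (this covers INSERT only; UPDATE and DELETE would each need a rule of their own, and the rule name is invented for illustration):

```sql
-- Sketch only: one rewrite rule per operation is needed to make the
-- view writable. Table and column names follow Jeff's example upthread.
CREATE RULE networkcard_v_ins AS
ON INSERT TO networkcard_v DO INSTEAD (
    INSERT INTO products (id, name)
        VALUES (NEW.id, NEW.name);
    INSERT INTO networkcard (prod_id, hundred_base_t)
        VALUES (NEW.id, NEW.hundred_base_t);
);
```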
However, in this particular case it's a necessary\nevil, since you can't use table inheritance to do what you want.\n\n> > Oops, did I just replace your \"object-oriented\" system with a\n> > relational one that does everything just as easily, and even does\n> > something the object-oriented one can't do?\n>\n> You mean \"waste space with meaningless extra data\"?\n\nNo, I mean set up your database so that a card can be a network_card\nor a sound_card, but not both.\n\nYou may also waste some space with meaningless data, if you have bugs\nin your application, but a) that meaningless data is pretty easy to\nclean up, and b) wasting a bit of space is a lot better than having\nincorrect data.\n\n> Me, too. The relational model is extremely powerful but it's not the\n> be-all and end-all of all things.\n\nTheory: Never said it was. I said that table inheritance is an\nunnecessary addition to a relational database; it offers no capabilities\nyou can't offer within the relational model, nor does it make things\neasier to do than within the relational model. (Since we are talking\nabout theory, I hasten to add that it is possible to implement something\nwhere the OO way is easier to use than the relational way, but you're\nnot forced to implement things this way.)\n\n> You still haven't answered my earlier observation that the PG model,\n> with all its flaws, can reduce the number of joins required.\n\nSorry. Let me deal with that now: that's an incorrect observation.\n\n> For instance in your example card and network card need to be joined if\n> you want to return network card. 
That's what I see in the view.\n>\n> \"FROM card, network_card\"\n>\n> Using PG's inheritance no join is necessary.\n\nBut going the other way around:\n\n FROM card\n\nResult (cost=0.00..27.32 rows=6 width=36)\n -> Append (cost=0.00..27.32 rows=6 width=36)\n -> Index Scan using ih_parent_pkey on ih_parent (cost=0.00..4.82 rows=1 width=36)\n -> Seq Scan on ih_child ih_parent (cost=0.00..22.50 rows=5 width=36)\n\nSure looks like a join to me.\n\n> So ... assuming my assumption is true and that you've bothered to study\n> the implementation, why should I prefer the join over the\n> faster-executing single-table extraction if I use PG's type extension\n> facility?\n\nWell, it depends on what your more frequent queries are.\n\nBut anyway, I realized that some of the joins I've shown are\nunnecessary; I've incorrectly implemented, relationally, the inheritance\nmodel you've shown. Here's the explanation:\n\nGiven a parent with an ID field as the primary key, and two children\nthat inherit that field, you can have the same ID in child1 and child2,\nresulting in the ID appearing twice in the parent table. In other\nwords, the PRIMARY KEY constraint on the parent is a lie. If I were\nto implement that relationally (though I'm not sure why I'd want to),\nI'd just implement the parent as a view of the children, and add\nanother table to hold the parent-only data. Now the joins under all\ncircumstances would be exactly the same as in the version implemented\nwith inheritance, and you'd have the added advantage that there would be\nno lies in the database schema. (And I'm sure I've even seen complaints\nabout this before, and requests for hacks such as cross-table indexes to\nget around this.)\n\nIf you feel that I'm missing something here, please send me a schema and\nqueries that you believe that inheritance does more efficiently than any\nrelational method can in postgres, and I'll implement it relationally\nand test it. 
If it is indeed impossible to implement as efficiently\nrelationally as it is with inheritance, I will agree with you that, for\nthe moment, inheritance has some practical uses in postgres. (I'll also\nsubmit a change request to fix the relational stuff so that it can be\nimplemented as efficiently.)\n\nIt could even happen that you will show me something that the relational\nmodel just doesn't handle, in which case you'll have won the argument.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n",
"msg_date": "Wed, 7 Aug 2002 13:48:51 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Curt Sampson wrote:\n> On Tue, 6 Aug 2002, Don Baccus wrote:\n> \n> \n>>So again relational theory can solve the problem but at a cost in\n>>efficiency.\n> \n> \n> If you're talking about theory, efficiency doesn't come into it.\n\nThat's rather the point, isn't it?\n\nIn the real world, it does.\n\n> The question is how and whether you can express the constraints\n> you need to express.\n\nHave I said anything other than this?\n\n> Note that I am not advocating removing anything that does not fit into\n> relational theory but does let us do things more efficiently. We live\n> in an imperfect world, after all.\n> \n> In fact, why don't we split the discussion into two separate parts:\n> relational theory vs. object-oriented theory, and practical use\n> with postgres, and never mix the two. Ok?\n\nBecause in fact you have advocated removing the OO stuff.\n\nYou won't find me suggesting that this feature can't be modelled in \nrelational theory. After all, I've got something like a quarter million \nlines of code over at OpenACS that proves you can.\n\nHowever, my co-developers and users would've gladly accepted the decreased \neffort in implementation and cleaner source code that the PG OO \nextensions offer if the implementation had been more complete.\n\n>>The view would work, but of course you have to define the view. Any\n>>time you have to do something manually, even something as simple as to\n>>define a view, the chance for casual error is introduced.\n> \n> \n> Theory: views should automatically make themselves as updatable as\n> possible, unless expressed otherwise. 
In fact, relationally, there\n> is no difference between a view and a base table; that's only part\n> of a storage model, which doesn't come into it in our perfect\n> theoretical world.\n\nWhether or not the view is written in such a way that it doesn't need to \nbe rewritten, dropped and recreated when you change the tables that it's \ncomposed of, you *still* need to write that view when you first extend \nyour type using the table+view model.\n\nThat's what I was referring to above. You have to write the view and \nget it right (i.e. write the join using the proper key for it and the \nbase view you're extending).\n\nWriting extra code, no matter how trivial, increases the odds that a \nmistake will be made.\n\nYou also need to write the proper foreign key and primary key \nconstraints in the table being used to do the type extension. Of course \nthis is true of PG's current OO implementation but if it were fixed it \nwould be one less chore that the programmer needs to remember.\n\n> But anyway, I realized that some of the joins I've shown are\n> unnecessary; I've incorrectly implemented, relationally, the inheritance\n> model you've shown.\n\nYou mean you accidentally supported the argument that this approach is, \nperhaps, more error prone?\n\n> It could even happen that you will show me something that the relational\n> model just doesn't handle, in which case you'll have won the argument.\n\nI haven't *made* that argument. Please stop raising strawmen.\n\nThe argument I've made is that even though you can model PG's OO \nfeatures not just relationally but in real-live warts-and-all SQL92, \nthat doesn't mean they're not useful.\n\nWe don't need the binary \"integer\" type, either. We could just use \n\"number\". Yes, operations on \"number\" are a bit slower and they often \ntake more space, but ...\n\nShall we take a vote :)\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Wed, 07 Aug 2002 06:06:18 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Wed, 2002-08-07 at 06:48, Curt Sampson wrote:\n> On Tue, 6 Aug 2002, Don Baccus wrote:\n> \n> > So again relational theory can solve the problem but at a cost in\n> > efficiency.\n> \n> If you're talking about theory, efficiency doesn't come into it.\n> The question is how and whether you can express the constratints\n> you need to express.\n> \n> Note that I am not advocating removing anything that does not fit into\n> relational theory but does let us do things more efficiently. We live\n> in an imperfect world, after all.\n> \n> In fact, why don't we split the dicussion into two separate parts:\n> relational theory vs. object-oriented theory, and practical use\n> with postgres, and never mix the two. Ok?\n> \n> > So could a Turing machine.\n> \n> Theory: Sure. But this is much harder to express in a turing machine\n> isn't it?\n\nYou got it ;) The claim was that it is easiest to express it using\ninheritance, a little harder using pure relational model and much harder\nusing a Turing machine.\n\n> > The view would work, but of course you have to define the view. Any\n> > time you have to do something manually, even something as simple as to\n> > define a view, the chance for casual error is introduced.\n> \n> Theory: views should automatically make themselves as updatable as\n> possible, unless expressed otherwise. In fact, relationally, there\n> is no difference between a view and a base table; that's only part\n> of a storage model, which doesn't come into it in our perfect\n> theoretical world.\n> \n> Practice: defining a non-updatable view is pretty trivial in\n> postgres. 
Defining an updatable view is rather harder, and more\n> subject to error.\n\nBut defining an updatable inherited table is easy.\n\n> However, in this particular case it's a necessary\n> evil, since you can't use table inheritance to do what you want.\n> > > Oops, did I just replace your \"object-oriented\" system with a\n> > > relational one that does everything just as easily, and even does\n> > > something the object-oriented one can't do?\n> >\n> > You mean \"waste space with meaningless extra data\"?\n> \n> No, I mean set up your database so that a card can be a network_card\n> or a sound_card, but not both.\n\nWhy can't you do this using inheritance?\n\ncreate table card(...);\ncreate table network_card(...) inherits(card);\ncreate table sound_card(...) inherits(card);\n\nshould do exactly that.\n\n> You may also waste some space with meaningless data, if you have bugs\n> in your application, but a) that meaningless data is pretty easy to\n> clean up, and b) wasting a bit of space is a lot better than having\n> incorrect data.\n\nin this case wasting a bit of space == having incorrect data.\n\nThe possibility of getting out wrong data always exists if there is\nincorrect data in the system. You can't reasonably expect that nobody\nwill query just the network_card table without doing the fancy join with\nadditional card.type='N'. The join version is also bound to be always\nslower than the non-join version.\n\n> > Me, too. The relational model is extremely powerful but it's not the\n> > be-all and end-all of all things.\n> \n> Theory: Never said it was. I said that table inheritance is an\n> unnecessary addition to a relational database; it offers no capabilities\n> you can't offer within the relational model, nor does it make things\n> easier to do than within the relational model. 
(Since we are talking\n> about theory, I hasten to add that it is possible to implement something\n> where the OO way is easier to use than the relational way, but you're\n> not forced to implement things this way.)\n> \n> > You still haven't answered my earlier observation that the PG model,\n> > with all its flaws, can reduce the number of joins required.\n> \n> Sorry. Let me deal with that now: that's an incorrect observation.\n> \n> > For instance in your example card and network card need to be joined if\n> > you want to return network card. That's what I see in the view.\n> >\n> > \"FROM card, network_card\"\n> >\n> > Using PG's inheritance no join is necessary.\n> \n> But going the other way around:\n> \n> FROM card\n> \n> Result (cost=0.00..27.32 rows=6 width=36)\n> -> Append (cost=0.00..27.32 rows=6 width=36)\n> -> Index Scan using ih_parent_pkey on ih_parent \n(cost=0.00..4.82 rows=1 width=36)\n> -> Seq Scan on ih_child ih_parent (cost=0.00..22.50 rows=5\nwidth=36)\n> \n> Sure looks like a join to me.\n> \n\nBut you did not have to write it - it was written, debugged and\noptimised by postgres.\n\n> > So ... assuming my assumption is true and that you've bothered to study\n> > the implementation, why should I prefer the join over the\n> > faster-executing single-table extraction if I use PG's type extension\n> > facility?\n> \n> Well, it depends on what your more frequent queries are.\n> \n> But anyway, I realized that some of the joins I've shown are\n> unnecessary; I've incorrectly implemented, relationally, the inheritance\n> model you've shown. Here's the explanation:\n\nWhich proves that using lower level idioms for describing inheritance is\nmore error prone. 
\n\nBtw, this is a general principle - the more lines of code you write to\nsolve the same problem, the more possibilities you have to make errors.\nGiven enough possibilities, everyone makes errors.\n\nOTOH, sometimes you need to do low-level work to get the last bit of\nperformance out of the systems (sometimes down to assembly level).\n\n...\n\n> It could even happen that you will show me something that the relational\n> model just doesn't handle, in which case you'll have won the argument.\n\nAs the inheritance model is built on top of relational one, it is\nimpossible to come up with something that relational model does not\nhandle. Just as it is impossible to show you a VIEW that can't be done\nwith ON SELECT DO INSTEAD rules.\n\nWhat our current implementation does show, is that there is a subset of\ngenerated views that are updatable. They are not explicitly statically\ndefined as views (because they change dynamically as new child tables\nare inherited) but they are constructed each time you do a\nSELECT/UPDATE/DELETE on parent table. \n\nI suspect that the fact that this is implemented and general updatable\nviews are not is due to bigger complexity of doing this for a general\ncase than for specific \"inheritance\" case.\n\n---------------------\nHannu\n\n",
"msg_date": "07 Aug 2002 16:38:47 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 7 Aug 2002, Hannu Krosing wrote:\n\n> > Theory: Sure. But this is much harder to express in a Turing machine,\n> > isn't it?\n>\n> You got it ;) The claim was that it is easiest to express it using\n> inheritance, a little harder using pure relational model and much harder\n> using a Turing machine.\n\nOk. I agree that it's much harder with a Turing machine. I do *not*\nagree that it's harder with the relational model. In fact, since you\n*must* use the relational model for some things, I argue that it's\nharder to switch back and forth between the relational and OO models,\nand understand the effects of each on the other, than it is just to do\nit in OO form in the first place.\n\nIn fact, I'd argue at this point, as far as table inheritance goes,\nwe don't even have a real model here. Let's look at a few of the problems.\n\n1. I create a base table with a column with a UNIQUE constraint on\nit, and two child tables. I can insert the same value into that column\ninto the two child tables, thus violating the unique constraint in the\nbase table. Now how can it be acceptable, in postgres or any other\nrelational database, to have a column declared to contain unique values\nhave non-unique values in it? (Note that this was the source of my\nerror in re-implementing some table-inheritance-modeled stuff here in\nrelational form; I preserved that unique constraint when I should not\nhave.)\n\n2. When you have child1 and child2 tables both inheriting directly\nfrom a base table, you can have entries in both child1 and child2\nwhose component from the base table is the same. What does this\nmean? Are we supposed to be able to have objects that can simultaneously\nbe both subtypes?\n\nWell, I could go on, but just from this you can see that:\n\n 1. We appear to have no proper theory even defined for how\n table inheritance should work.\n\n 2. 
If we did, either postgres is not consistent with it, or\n the theory itself is in conflict with the relational portion\n of the database.\n\nWhatever way you look at it, it's apparent to me that using table\ninheritance is dangerous, confusing, and should be avoided if you\nwant to maintain data integrity and a self-consistent view of your\ndata.\n\n> > No, I mean set up your database so that a card can be a network_card\n> > or a sound_card, but not both.\n>\n> Why can't you do this using inheritance ?\n>\n> create table card(...);\n> create table network_card(...) inherits(card);\n> create table sound_card(...) inherits(card);\n>\n> should do exactly that.\n\nBut it doesn't. You can have an entry in network_card and another one in\nsound_card which share the same primary key in the sound_card table.\n\n> in this case wasting a bit of space == having incorrect data.\n\nNo, it doesn't. Your queries will never return incorrect data; the\n\"unused\" records will be ignored.\n\n> The possiblity of getting out wrong data always exists if there is\n> incorrect data in the system.\n\nNo, you can't put incorrect data into the system. The data about what\ntype of card it is is not in the sound_card or network_card table, but\nin the card table itself, and thus it can only ever have one value for\nany card entry. It's impossible for that column to have more than one\nvalue, thus impossible for that column to have incorrect data.\n\nNow you may argue that, because there's an entry for that card in\nboth network_card and sound_card, that means that the card has two\ntypes. But that's just deliberate misinterpretation, because you're\ngetting the type information from the wrong place. 
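The duplicate-key behaviour just described can be sketched concretely (the child column names here are invented for illustration; the point is that the parent's primary-key index only covers rows stored directly in the parent):

```sql
-- Sketch: the PRIMARY KEY on "card" does not span the child tables.
CREATE TABLE card (card_id int4 PRIMARY KEY);
CREATE TABLE network_card (mbps int4) INHERITS (card);
CREATE TABLE sound_card (voices int4) INHERITS (card);

INSERT INTO network_card VALUES (1, 100);
INSERT INTO sound_card VALUES (1, 32);  -- not rejected: each child table
                                        -- has its own storage, outside the
                                        -- parent's unique index
SELECT card_id FROM card;               -- card_id 1 appears twice
```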
You might as\nwell argue that a table holding temperatures is \"incorrect data\"\nbecause someone put them in in degrees centigrade, and you're\ninterpreting them as degrees Fahrenheit when you pull them out.\n\n> > Sure looks like a join to me.\n>\n> But you did not have to write it - it was written, debugged and\n> optimised by postgres.\n\nSo? The argument I was replying to stated that his method was more\nefficient because it didn't use joins. Who wrote the join does not\nmatter; it turns out that inside it all joins happen, and so it's\nnot more efficient.\n\n> > But anyway, I realized that some of the joins I've shown are\n> > unnecessary; I've incorrectly implemented, relationally, the inheritance\n> > model you've shown. Here's the explanation:\n>\n> Which proves that using lower level idioms for describing inheritance is\n> more error prone.\n\nNo, it proves that the semantics of table inheritance are confusing, or\npostgres incorrectly implements them, or both. This kind of mistake is\n*exactly* the reason I avoid table inheritance; I couldn't tell just\nwhat you were doing! And I still am not convinced that what you were\ndoing was what you wanted to do, especially given that I've seen other\ncomplaints in this forum that table inheritance specifically was *not*\ndoing what people wanted it to do (thus the plea for cross-table unique\nindexes).\n\n> I suspect that the fact that this is implemented and general updatable\n> views are not is due to bigger complexity of doing this for a general\n> case than for specific \"inheritance\" case.\n\nI'll agree with that.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n",
"msg_date": "Thu, 8 Aug 2002 10:47:50 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Wed, 7 Aug 2002, Don Baccus wrote:\n\n> >>So again relational theory can solve the problem but at a cost in\n> >>efficiency.\n> >\n> > If you're talking about theory, efficiency doesn't come into it.\n>\n> That's rather the point, isn't it?\n>\n> In the real world, it does.\n\nWell, I think I dealt with this elsewhere in my post by showing\nthat I can always implement what you did with inheritance just as\nefficiently using relational methods, and sometimes more efficiently.\n\n> Because in fact you have advocated removing the OO stuff.\n\nActually, I'd suggested thinking about removing the OO stuff. Starting\na discussion about the concept is far from \"advocating\" it. And in fact\nI'd backed off the idea of removing it. However, now that it appears to\nme that table inheritance actually breaks the relational portion of the\ndatabase, I'm considering advocating its removal. (This requires more\ndiscussion, of course.)\n\n> Writing extra code, no matter how trivial, increases the odds that a\n> mistake will be made.\n\nYeah. But using a broken table inheritance model is far more likely to\ncause bugs and errors. It certainly did when I tried to figure out what\nyou were doing using inheritance. Not only did I get it wrong, but I'm\nnot at all convinced that what you were doing was what you really wanted\nto do.\n\n> You mean you accidentally supported the argument that this approach is,\n> perhaps, more error prone?\n\nNo, supported the argument that table inheritance is either\nill-defined, broken, or both.\n\n> The argument I've made is that even though that you can model PG's OO\n> features not just relationally but in real-live warts-and-all SQL92,\n> that doesn't mean they're not useful.\n\nAll right. I disagree with that, too. I think that they are not\nonly not useful, but harmful.\n\n> We don't need the binary \"integer\" type, either. We could just use\n> \"number\". 
Yes, operations on \"number\" are a bit slower and they often\n> take more space, but ...\n>\n> Shall we take a vote :)\n\nIf you like. I vote we keep the integer type. Any other questions?\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 8 Aug 2002 10:54:45 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Thu, 2002-08-08 at 06:47, Curt Sampson wrote:\n> On 7 Aug 2002, Hannu Krosing wrote:\n> \n> > > Theory: Sure. But this is much harder to express in a Turing machine\n> > > isn't it?\n> >\n> > You got it ;) The claim was that it is easiest to express it using\n> > inheritance, a little harder using pure relational model and much harder\n> > using a Turing machine.\n> \n> Ok. I agree that it's much harder with a Turing machine. I do *not*\n> agree that it's harder with the relational model. In fact, since you\n> *must* use the relational model for some things, I argue that it's\n> harder to switch back and forth between the relational and OO models,\n\nFor me they are _not_ two different models but rather one\nobject-relational model. Same as C++ is _not_ a completely new language\nbut rather an extension of plain C.\n\nAs you seem to like fat books, check out :\n\n\"Object Relational Dbms: Tracking the Next Great Wave\" by Michael\nStonebraker, Dorothy Moore (Contributor), Paul Brown\nISBN: 1558604529\n\nI'm sure you find the requested arguments against Date there ;)\n\n> and understand the effects of each on the other, than it is just to do\n> it in OO form in the first place.\n> \n> In fact, I'd argue at this point, as far as table inheritance goes,\n> we don't even have a real model here.\n\nThe table inheritance _implementation_ in PG is in fact broken in\nseveral ways, most notably in not enforcing uniqueness over all\ninherited tables and not inheriting other constraints.\n\nBut as you often like to emphasize, model and implementation _are_\ndifferent things.\n\n--------------\nHannu\n\n",
"msg_date": "08 Aug 2002 09:01:15 +0500",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Curt Sampson wrote:\n\n>>Because in fact you have advocated removing the OO stuff.\n\n> Actually, I'd suggested thinking about removing the OO stuff.\n\nMan, aren't we into splitting hairs?\n\nYou actually stated your case quite strongly and indeed if you hadn't, \nthe thread would've died long ago.\n\nWhatever. You're just dick-waving.\n\nEnjoy your life :)\n\n> Starting\n> a discussion about the concept is far from \"advocating\" it. And in fact\n> I'd backed off the idea of removing it. However, now that it appears to\n> me that table inheritance actually breaks the relational portion of the\n> database, I'm considering advocating its removal. (This requires more\n> discussion, of course.)\n\nExcept apparently you have no life, oh well, not my problem.\n\n>>Writing extra code, no matter how trivial, increases the odds that a\n>>mistake will be made.\n> \n> \n> Yeah. But using a broken table inheritance model is far more likely to\n> cause bugs and errors. It certainly did when I tried to figure out what\n> you were doing using inheritance. Not only did I get it wrong, but I'm\n> not at all convinced that what you were doing was what you really wanted\n> to do.\n\nI wasn't using inheritance. I didn't post an example. And all agree \nthat PG's model is broken and eventually needs to be fixed.\n\nThree strawmen in one paragraph.\n\nAgain, you're dick-waving and further discussion is not useful.\n\n>>You mean you accidentally supported the argument that this approach is,\n>>perhaps, more error prone?\n\n> No, supported the argument that table inheritance is either\n> ill-defined, broken, or both.\n\nThen what you're saying is you've been arguing all this time against it \nwithout understanding how it works?\n\nBecause either\n\n1. If you understood how it worked then you screwed up your more complex \nview-based analogue, therefore supporting the argument that you've shown \nthat the mapping is more error prone.\n\n2. 
Or you screwed up your code because you've been dick-waving without \nbothering to learn the semantics of the PG OO extensions, which doesn't \nreally enhance your credibility.\n\nWhich is it? The idiot behind door number one or the pedantic boor \nbehind door number two?\n\n>>We don't need the binary \"integer\" type, either. We could just use\n>>\"number\". Yes, operations on \"number\" are a bit slower and they often\n>>take more space, but ...\n>>\n>>Shall we take a vote :)\n> \n> \n> If you like. I vote we keep the integer type. Any other questions?\n\nSure ... why the inconsistency without explanation?\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Wed, 07 Aug 2002 23:19:13 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Wed, 7 Aug 2002, Don Baccus wrote:\n\n> Whatever. You're just dick-waving....\n> Except apparently you have no life, oh well, not my problem....\n> Again, you're dick-waving and further discussion is not useful....\n> Which is it? The idiot behind door number one or the pedantic boor\n> behind door number two?\n\nUh, yeah. If ad hominem attacks win arguments, I guess you win.\nI'll let others decide whether the above arguments are a good reason\nto keep table inheritance in postgres.\n\n>\n> >>We don't need the binary \"integer\" type, either. We could just use\n> >>\"number\". Yes, operations on \"number\" are a bit slower and they often\n> >>take more space, but ...\n> >>\n> >>Shall we take a vote :)\n> >\n> > If you like. I vote we keep the integer type. Any other questions?\n>\n> Sure ... why the inconsistency without explanation?\n\nPersonally I don't find it inconsistent that I want to remove something\nthat's broken and of dubious utility but keep something that works and\nis demonstrably useful. It must be something to do with my dick, I\nsuppose. But I'll admit, your arguments are beyond me. I surrender.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 8 Aug 2002 16:08:16 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Thu, 2002-08-08 at 17:57, Curt Sampson wrote:\n> On 8 Aug 2002, Hannu Krosing wrote:\n> \n> > For me they are _not_ two different models but rather one\n> > object-relational model.\n> \n> Well, given that we've already demonstrated two rather different ways\n> of saying \"the same thing,\" I think we have two models happening here.\n> However, feel free to explain your \"object-relational model\" in more\n> detail, including its advantages over the ordinary relational model.\n\nThe main difference (in the inheritance part) is that a relation does\nnot have one fixed set of fields, but can have any additional fields\nadded in inherited tables and still be part of the base table as\nwell.\n\n...\n\n> > The table inheritance _implementation_ in PG is in fact broken in\n> > several ways, most notably in not enforcing uniqueness over all\n> > inherited tables and not inheriting other constraints.\n> \n> Right. I'm glad we agree on that.\n> \n> > But as you often like to emphasize, model and implementation _are_\n> > different things.\n> \n> Ok. I won't object too much to the model, but let's get rid of this\n> severely broken implementation, unless there are some prospects\n> for fixing it. How's that?\n\nActually I am not against ripping out the current broken implementation,\nbut not before there has been a new, correct model available for at\nleast two releases, so that people have had time to switch over.\n\nThe inheritance model that SQL99 prescribes is more like Java's - single\ninheritance (so that you have no way of inheriting two primary keys ;) +\nLIKE in table definition (in some ways similar to Java interfaces)\n\nI see that this could be implemented quite nicely by storing all the\ninherited tables in the same page file, in which case primary key would\nalmost automatically span child relations and indexes on child relations\nbecome partial indexes on the whole thing. 
There already is some support\nfor this present (namely tableoid system field stored in every tuple)\n\n> BTW, can someone explain the model for inherited tables here? Is\n> it really just as described in _The Third Manifesto_, trivial syntactic\n> sugar over the relational model? \n\nIt is \"just\" syntactic sugar, just as VIEW is \"just\" syntactic sugar for\nON SELECT DO INSTEAD rules.\n\nVIEWs are broken too, in the sense that you can't insert into them\nwithout doing some hard work. \n\nBut I guess you would rather see VIEWs \"fixed\" to be insertable and\nupdatable, rather than ripped out \"because the same thing and more\" can\nbe done using RULEs ;)\n\n> Or is it supposed to offer something\n> that the relational model doesn't do very simply?\n\nIt is supposed to help programmers express structures that they would\ndescribe as inheritance in an ERD diagram in SQL without having to do\nmental gymnastics each time they go from model to schema.\n\nHaving a shorter description is on one hand syntactic sugar, on the\nother hand shorter.\n\n\n\n\n\n",
"msg_date": "08 Aug 2002 16:32:17 +0500",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 8 Aug 2002, Hannu Krosing wrote:\n\n> For me they are _not_ two different models but rather one\n> object-relational model.\n\nWell, given that we've already demonstrated two rather different ways\nof saying \"the same thing,\" I think we have two models happening here.\nHowever, feel free to explain your \"object-relational model\" in more\ndetail, including its advantages over the ordinary relational model.\n\n> \"Object Relational Dbms: Tracking the Next Great Wave\" by Michael\n> Stonebraker, Dorothy Moore (Contributor), Paul Brown\n> ISBN: 1558604529\n>\n> I'm sure you find the requested arguments against Date there ;)\n\nUnfortunately, this is a bit hard to order in Japan. So before I go\nspend 8000 yen and wait a couple of weeks to get hold of a copy, I'd\nbe interested in just what is there that would dispute Date's points.\nLooking through the index on Amazon.com, it appears that the book\ndevotes, at the very most, eight pages to table inheritance. What does\nit say about it?\n\n> The table inheritance _implementation_ in PG is in fact broken in\n> several ways, most notably in not enforcing uniqueness over all\n> inherited tables and not inheriting other constraints.\n\nRight. I'm glad we agree on that.\n\n> But as you often like to emphasize, model and implementation _are_\n> different things.\n\nOk. I won't object too much to the model, but let's get rid of this\nseverely broken implementation, unless there are some prospects\nfor fixing it. How's that?\n\nBTW, can someone explain the model for inherited tables here? Is\nit really just as described in _The Third Manifesto_, trivial syntactic\nsugar over the relational model? Or is it supposed to offer something\nthat the relational model doesn't do very simply? (Not to mention\ncorrectly, in the case of postgres.)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 8 Aug 2002 21:57:41 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> > The table inheritance _implementation_ in PG is in fact broken in\n> > several ways, most notably in not enforcing uniqueness over all\n> > inherited tables and not inheriting other constraints.\n> \n> Right. I'm glad we agree on that.\n> \n> > But as you often like to emphasize, model and implementation _are_\n> > different things.\n> \n> Ok. I won't object too much to the model, but let's get rid of this\n> severely broken implementation, unless there are some prospects\n> for fixing it. How's that?\n> \n\nWasn't that what was seemingly agreed on by pretty much everyone else on\nthis thread long ago? The current implementation is problematic and\nit needs to be fixed.\n\nAs far as I can tell, the only difference of opinion here is, you seem\nto hold zero value in table inheritance while others do see value. At\nthis point in time, can't you guys agree to disagree and leave the\nmajority of this thread behind us?\n\n> BTW, can someone explain the model for inherited tables here? Is\n> it really just as described in _The Third Manifesto_, trivial syntactic\n> sugar over the relational model? Or is it supposed to offer something\n> that the relational model doesn't do very simply? (Not to mention\n> correctly, in the case of postgres.)\n\nI would, however, enjoy seeing the theory portion continued as long as\nit were kept at the theoretical level. After all, I think everyone\nagreed that Postgres' implementation is broken. It doesn't seem like we\nneed to keep beating that horse.\n\nAny takers? ;)\n\nGreg",
"msg_date": "08 Aug 2002 09:31:33 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "> BTW, can someone explain the model for inherited tables here? Is\n> it really just as described in _The Third Manifesto_, trivial syntactic\n> sugar over the relational model? Or is it supposed to offer something\n> that the relational model doesn't do very simply? (Not to mention\n> correctly, in the case of postgres.)\n\nNo matter how much you grandstand, we're not getting rid of the\ninheritance support. It's not going to happen. People are using it.\n\nChris\n\n\n",
"msg_date": "Thu, 8 Aug 2002 22:40:20 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Greg,\n\nWell put, I can't agree more. I think even the horse has gotten up and\nleft.\n\nI think what would be useful is to discuss the theory part. When we go down\nthat path, we should all be referring to a consistent set of references.\nThereby we can have a common ground from which to talk. In that spirit, I\nwould offer up the following references:\n\n- Date has 3, however his most current work is dated 2000, The Third\nManifesto SECOND EDITION.\n- There is the work done by Dr Kim, perhaps 'Modern Database Systems, The\nObject Model, Interoperability, and Beyond'.\n- Silberschatz, Korth, Sudarshan, a book I am sure we have all read,\nDatabase System Concepts - Third Edition.\n\nIn any case, we should use the current editions of these books, not\nsomething the author has reconsidered, re-written, and published again.\n\nJordan Henderson\n\n----- Original Message -----\nFrom: \"Greg Copeland\" <[email protected]>\nTo: \"Curt Sampson\" <[email protected]>\nCc: \"Hannu Krosing\" <[email protected]>; \"Don Baccus\" <[email protected]>;\n\"PostgresSQL Hackers Mailing List\" <[email protected]>\nSent: Thursday, August 08, 2002 10:31 AM\nSubject: Re: [HACKERS] Why is MySQL more chosen over PostgreSQL?\n\n\n",
"msg_date": "Thu, 8 Aug 2002 10:42:37 -0400",
"msg_from": "\"Jordan Henderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On 8 Aug 2002, Hannu Krosing wrote:\n\n> The main difference (in the inheritance part) is that a relation does\n> not have one fixed set of fields, but can have any additional fields\n> added in inherited tables and still be part of to the base table as\n> well.\n\nThis is trivial to do with a view.\n\n> Actually I am not against ripping out the current broken implementation,\n> but not before there has been a new, correct model available for at\n> least two releses, so that people have had time to switch over.\n\nSo in other words, you want to let people use broken stuff, rather\nthan switch to another method, currently available, that has all\nof the functionality but is not broken. I guess that's an opinion, all right.\n\n> VIEWs are broken too, in the sense that you can't insert into them\n> without doing some hard work.\n\nViews are missing functionality. That is rather different from\nmaking other tables lie about what they contain, essentially\ndestroying the requested data integrity.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 9 Aug 2002 10:38:38 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Curt Sampson wrote:\n> On 8 Aug 2002, Hannu Krosing wrote:\n> \n> \n>>The main difference (in the inheritance part) is that a relation does\n>>not have one fixed set of fields, but can have any additional fields\n>>added in inherited tables and still be part of to the base table as\n>>well.\n> \n> \n> This is trivial to do with a view.\n\nAnd views of this sort are trivial to do using PG's OO extensions.\n\nI think I see a trend in this thread. Why not give it up, dude?\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Thu, 08 Aug 2002 18:46:05 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Thu, 8 Aug 2002, Don Baccus wrote:\n\n> And views of this sort are trivial to do using PG's OO extensions.\n\nSo long as you don't mind them being broken, yeah. But hell, when someone\nasks for a unique constraint, they probably don't really mean it, do they?\nAnd what's wrong with multiple records with the same primary key? It's clear\nto me now I've been working from the wrong direction; we should leave the OO\nstuff and delete the relational stuff from the database instead.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 9 Aug 2002 10:56:02 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "Curt Sampson wrote:\n> On Thu, 8 Aug 2002, Don Baccus wrote:\n> \n> \n>>And views of this sort are trivial to do using PG's OO extensions.\n> \n> \n> So long as you don't mind them being broken, yeah. But hell, when someone\n> asks for a unique constraint, they probably don't really mean it, do they?\n\nGood grief, we all agree that they're currently broken and need to be \nfixed someday.\n\nGive it up. You're being a boor.\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Thu, 08 Aug 2002 19:02:48 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is MySQL more chosen over PostgreSQL?"
},
{
"msg_contents": "On Thu, 8 Aug 2002, Jordan Henderson wrote:\n\n> I think what would be useful is to discuss the theory part.\n\nAs do I.\n\n> - Date has 3, however his most current work is dated 2000, The Third\n> Manifesto SECOND EDITION.\n\nThis is actually Date and Darwen.\n\nI think we should also add Date's _An Introduction to Database Systems,\n7th Edition_, as it covers some relational stuff in more detail than\n_The Third Manifesto_. For example, it investigates the details of\nautomatic view updatability, which came up during this discussion, and\nwhich most books just completely cop out on. (For example, _Database\nSystem Concepts_ just points out a couple of problems with view\nupdatability and says, \"Because of problems such as these, modifications\nare generally not permitted on view relations, except in limited\ncases.\")\n\n> - Silberschatz, Korth, Sudarshan, A book I am sure we have all read,\n> Database System Concepts - Third Edition.\n> ...\n> In any case, we should use the current editions of these books, not\n> something the author has reconsidered, re-written, and published again.\n\nIn that case we ought to use the fourth edition of this book.\n\nHere are some questions I'd like to see people answer or propose\nanswers to:\n\n 1. What models of table inheritance have been proposed, and how\n do they differ?\n\n 2. What models of table inheritance are actually implemented in\n currently available database systems?\n\n 3. What are the advantages of describing something using table\n inheritance rather than an equivalent relational description?\n\n 4. If you think table inheritance is \"object oriented,\" why do\n you think so?\n\n 5. How ought we to fix the table inheritance in postgres?\n\nThe last question comes up because, during the conversation up to this\npoint, we seem to have implicitly accepted that table inheritance is\nan \"object-oriented\" way of doing things. 
Thinking further on this,\nhowever, I've decided that it's not in fact object-oriented at all.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Mon, 12 Aug 2002 08:41:38 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Table Inheritance Discussion"
},
{
"msg_contents": "Curt Sampson wrote:\n\n> The last question comes up because, during the conversation up to this\n> point, we seem to have implicitly accepted that table inheritance is\n> an \"object-oriented\" way of doing things. Thinking further on this,\n> however, I've decided that it's not in fact object-oriented at all.\n\nIt's just type extensibility, really.\n\nAs to why, again there's an efficiency argument, as I said earlier some \njoins can be avoided given PG's implementation of this feature:\n\ndotlrn=# create table foo(i integer primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n'foo_pkey' for table 'foo'\nCREATE\ndotlrn=# create table bar(j integer) inherits (foo);\nCREATE\ndotlrn=# explain select * from bar;\nNOTICE: QUERY PLAN:\n\nSeq Scan on bar (cost=0.00..20.00 rows=1000 width=8)\n\nEXPLAIN\n...\n\ndotlrn=# create table foo(i integer primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n'foo_pkey' for table 'foo'\nCREATE\ndotlrn=# create table bar(i integer references foo primary key, j integer);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n'bar_pkey' for table 'bar'\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY \ncheck(s)\nCREATE\ndotlrn=# create view foobar as select foo.*, bar.j from foo, bar;\nCREATE\n\ndotlrn=# explain select * from foobar;\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..30020.00 rows=1000000 width=8)\n -> Seq Scan on foo (cost=0.00..20.00 rows=1000 width=4)\n -> Seq Scan on bar (cost=0.00..20.00 rows=1000 width=4)\n\nEXPLAIN\n\nThere's also some error checking (using my inherited example):\n\ndotlrn=# drop table foo;\nERROR: Relation \"bar\" inherits from \"foo\"\ndotlrn=#\n\nWhich doesn't exist in the view approach in PG at least (I'm unclear on \nstandard SQL92 and of course this says nothing about the relational \nmodel in theory, just PG and perhaps SQL92 in practice).\n\n-- \nDon Baccus\nPortland, 
OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n",
"msg_date": "Sun, 11 Aug 2002 16:59:48 -0700",
"msg_from": "Don Baccus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Inheritance Discussion"
},
{
"msg_contents": "On Sun, 11 Aug 2002, Don Baccus wrote:\n\n> It's just type extensibility, really.\n\nYeah.\n\n> As to why, again there's an efficiency argument, as I said earlier some\n> joins can be avoided given PG's implementation of this feature:\n> [TI and relational examples deleted]\n\nWhat you gave is not the relational equivalent of the TI case as\nimplemented in postgres. Modeled correctly, you should be creating\na table for the child, and a view for the parent. Then you will\nfind that the relational definition uses or avoids joins exactly\nwhere the TI definition does.\n\n> There's also some error checking (using my inherited example):\n\nThe relational definition doesn't force the dependency, but as you\ncan delete and recreate the view at will without data loss, the\namount of safety is the same.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Mon, 12 Aug 2002 09:18:16 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Inheritance Discussion"
},
{
"msg_contents": "\n> > > Yes, that's the whole point. If I have a constraint on a table, I think\n> > > it should *never* be possible for that constraint to be violated. If a\n> > > subtable should not have constraint the supertable has, it shouldn't\n> > > inherit from the supertable.\n> >\n> > If you want that, you simply need to only create constraints that apply to\n> > all tables in the hierarchy. Note that you *can* do this. It should imho be\n> > the default behavior.\n> \n> So what you're saying is that constraints shouldn't be inherited?\n\nNo. I even said that inheriting should be the default.\n \n> > > To do otherwise breaks the relational model.\n> >\n> > That is probably a point of argument. Imho the inheritance feature\n> > is something orthogonal to the relational model. It is something else, and\n> > thus cannot break the relational model.\n> \n> So then constraints must be inherited. The relational model, if I\n> am not incorrect here, says that, given a table definition such as\n> this:\n> \n> CREATE TABLE my_table (\n> \tmy_key int PRIMARY KEY,\n> \tmy_value text UNIQUE,\n> \tmy_other_value int CHECK (my_other_value > 0)\n> )\n\nA local constraint should be made obvious from looking at the schema, \na possible syntax (probably both ugly :-):\nCHECK my_table ONLY (my_other_value > 0)\nor\nCHECK LOCAL (my_other_value > 0)\n\n> \n> You will never, ever, when selecting from this table, have \n> returned to you\n> \n> 1. two rows with the same value of my_key but different values\n> for the other columns,\n> \n> 2. two rows with the same value of my_value but different values\n> for the other columns, or\n> \n> 3. a row in which the value of my_other_value is not \n> greater than zero.\n> \n\nWell, that is where I do not think this is flexible enough, and keep in mind \nthat all triggers and rules would then also need such restrictions. 
\n\n> I would strongly object to that.\n\nRegardless whether your objection is *strong* or not :-)\nIf you don't like the feature (to add a local constraint), don't use it.\n(Remember you are talking about removing an implemented feature)\n\nAndreas\n",
"msg_date": "Mon, 19 Aug 2002 16:30:46 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance "
},
{
"msg_contents": "On Mon, 19 Aug 2002, Zeugswetter Andreas SB SD wrote:\n\n> > So what you're saying is that constraints shouldn't be inherited?\n>\n> No. I even said that inheriting should be the default.\n\nAh. So you think it should be possible not to inherit constraints.\n\n> A local constraint should be made obvious from looking at the schema,\n\nOk, this now I could live with. Though I'm not sure that it's\ntheoretically very defensible, or worth the effort. Other languages\nthat offer constraints, such as Eiffel (and soon Java), do not allow\nconstraints that are not inherited, as far as I know. Do you have some\ncounterexamples?\n\n> Well, that is where I do not think this is flexible enough, and keep in mind\n> that all triggers and rules would then also need such restrictions.\n\nYes, all triggers, rules, and everything else would have to be inherited.\n\n> Regardless whether your objection is *strong* or not :-)\n> If you don't like the feature (to add a local constraint), don't use it.\n> (Remember you are talking about removing an implemented feature)\n\n1. It's not exactly an implemented feature, it's an accident of an\nincomplete implementation of inheritance done in a certain way.\n\n2. Should we change the way we decide to implement inheritance,\nperhaps to make fixing the current problems much easier, it might\nbe a lot of work to add this.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Mon, 19 Aug 2002 23:42:51 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance "
},
{
"msg_contents": "On Mon, 2002-08-19 at 15:42, Curt Sampson wrote:\n> > A local constraint should be made obvious from looking at the schema,\n> \n> Ok, this now I could live with. Though I'm not sure that its\n> theoretically very defensible, or worth the effort. Other languages\n> that offer constraints, such as Eiffel (and soon Java), do not allow\n> constraints that are not inherited, as far as I know. Do you have some\n> counterexamples.\n\nIn Eiffel, at least, I can say \"invariant feature_x\" and redefine\nfeature_x in a descendant class, thus effectively redefining the\nconstraint. If we decide to inherit constraints unconditionally, the\napplication writer can achieve similar flexibility by moving the logic\nof the constraint into a function whose behaviour depends on which table\nit is used on. This would put the burden on the application rather than\nrequiring additional syntax in PostgreSQL.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For every one that asketh receiveth; and he that \n seeketh findeth; and to him that knocketh it shall be \n opened.\" Luke 11:10 \n\n",
"msg_date": "19 Aug 2002 16:06:27 +0100",
"msg_from": "Oliver Elphick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Mon, 2002-08-19 at 09:42, Curt Sampson wrote:\n> On Mon, 19 Aug 2002, Zeugswetter Andreas SB SD wrote:\n> \n> > > So what you're saying is that constraints shouldn't be inherited?\n> >\n> > No. I even said that inheriting should be the default.\n> \n> Ah. So you think it should be possible not to inherit constraints.\n\nI've been silent for a bit because I wanted to kick the concept around\nin my head. After some thought, I say that I support children\ninheriting constraints. In a more abstract sense, we are really setting\nconditions for all entities of a given type (class) which must be met to\nclassify as a defined type. Again, in an abstract sense, if I say all\n\"candies\" (type/class, candy) must have sugar (constraint), and I go on\nto create a subclass of candy which I desire not to have sugar, then\nthere is a fundamental problem. Either I incorrectly identified my\nproblem domain and didn't properly create my entities which address my\ndomain needs or what I'm trying to express really isn't a candy at all. \nIn other words, it sounds like candy should of been a subclass of a more\nabstract base entity. Likewise, the newly desired class which doesn't\nhave sugar should also inherit from the newly created base class and not\nbe derived from candy at all.\n\n\n> \n> > A local constraint should be made obvious from looking at the schema,\n> \n> Ok, this now I could live with. Though I'm not sure that its\n> theoretically very defensible, or worth the effort. Other languages\n> that offer constraints, such as Eiffel (and soon Java), do not allow\n> constraints that are not inherited, as far as I know. Do you have some\n> counterexamples.\n\nI tend to agree. Constraints should be inherited. 
See above.\n\n> \n> > Well, that is where I do not think this is flexible enough, and keep in mind\n> > that all triggers and rules would then also need such restrictions.\n> \n> Yes, all triggers, rules, and everything else would have to be inherited.\n\nAgreed.\n\n> \n> > Regardless whether your objection is *strong* or not :-)\n> > If you don't like the feature (to add a local constraint), don't use it.\n> > (Remember you are talking about removing an implemented feature)\n> \n> 1. It's not exactly an implemented feature, it's an accident of an\n> incomplete implementation of inheritance done in a certain way.\n> \n> 2. Should we change the way we decide to implement inheritance,\n> perhaps to make fixing the current problems much easier, it might\n> be a lot of work to add this.\n> \n\nI'm still trying to figure out if subclasses should be allowed to have\nlocalized constraints. I tend to think yes even though it's certainly\npossible to create seemingly illogical/incompatible/conflicting\nconstraints with parent classes. Then again, my gut feeling is, that's\nmore and an architectural/design issue rather than a fundamental issue\nwith the concept.\n\n\n--Greg Copeland",
"msg_date": "19 Aug 2002 10:10:57 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "The August draft of the SQL:200n standard (9075-2 Foundation) says in\nSection 4.17.2: \"Every table constraint specified for base table T is\nimplicitly a constraint on every subtable of T, by virtue of the fact\nthat every row in a subtable is considered to have a corresponding\nsuperrow in every one of its supertables.\"\n\nPeter Gulutzan\nCo-Author, SQL-99 Complete, Really\nCo-Author, SQL Performance Tuning\n",
"msg_date": "28 Aug 2002 06:23:46 -0700",
"msg_from": "[email protected] (Peter Gulutzan)",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "Peter Gulutzan wrote:\n> The August draft of the SQL:200n standard (9075-2 Foundation) says in\n> Section 4.17.2: \"Every table constraint specified for base table T is\n> implicitly a constraint on every subtable of T, by virtue of the fact\n> that every row in a subtable is considered to have a corresponding\n> superrow in every one of its supertables.\"\n\nYep, this is where we are stuck; having an index span multiple tables\nin some way.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 3 Sep 2002 12:36:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Tue, 3 Sep 2002, Bruce Momjian wrote:\n\n> Yep, this is where we are stuck; having an index span multiple tables\n> in some way.\n\nOr implementing it by keeping all data in the table in which it\nwas declared. (I.e., supertable holds all rows; subtable holds\nonly the primary key and those columns of the row that are not\nin the supertable.)\n\n From looking at the various discussions of this in books, and what\nit appears to me that the SQL standard says, it seems that their\noverall vision of table inheritance is to be consistent with the\nimplementation that I described above.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 5 Sep 2002 10:57:05 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On 5 Sep 2002, Hannu Krosing wrote:\n\n> On Thu, 2002-09-05 at 03:57, Curt Sampson wrote:\n>\n> > Or implementing it by keeping all data in the table in which it\n> > was declared. (I.e., supertable holds all rows; subtable holds\n> > only the primary key and those columns of the row that are not\n> > in the supertable.)\n>\n> How would you do it for _multiple_ inheritance ?\n\nExactly the same way. Each column resides in only one physical table,\nso you need only find the table it resides in, and do the insert there.\nI'll be happy to provide an example if this is not clear.\n\n> 1) the way you describe (parent holding common columns + child tables\n> for added child columns), which makes it easy to define constraints but\n> hard to do inserts/updates/deletes on inherited tables\n\nI wouldn't say it makes it \"hard\" to do inserts, updates and deletes.\nPostgres already has pretty near all of the code it needs to support\nthese updates, because these are the semantic equivalant of the separate\nactions applied to the separate tables within one transaction.\n\n> 2) the postgresql way (a new table for each child), which makes it hard\n> to define constraints but easy to do inserts/updates/deletes.\n\nI agree that making constraints work in this model is very difficult and\na lot of work.\n\n> This way it could probably be done even more effectively than you\n> describe by:\n>\n> 1) keeping _all_ (not only the inherited columns) the data for\n> inheritance hierarchy in the same physical file.\n\nYou appear to have delved into a different database layer than one\nI'm looking at, here. I was examining storage on the table level,\nwhich is unrelated to files. (E.g., postgres sometimes stores a\ntable in one file, sometimes in more than one. MS SQL Server stores\nmany tables in one file. 
It doesn't matter which approach is used when\ndiscussing the two inheritance implementation options above.)\n\n> 4) update/delete of all child tables are trivial as they are actually\n> done in the same table and not using joins\n\nOr are you talking about storing all of the columns in a single\ntable? That's a possibility, but wouldn't it be costly to update\nthe entire table every time you add a new child table? And table\nscans on child tables would certainly be more costly if you had\nmany of them, becuase the effective row width would be much wider.\nBut it might be worth thinking about.\n\n> It seems that single inheritance avoids other conceptual problems, like\n> what to do with primary keys when inheriting from two tables that have\n> them.\n\nI don't see where there's a conceptual problem here, either. With\nmultiple inheritance you can simply demote both keys to candidate\nkeys, and continue on as normal. (The only difference between a\nprimary key and a candidate key is that you can leave out the column\nnames when declaring foreign keys in another table.)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 5 Sep 2002 16:28:16 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, 2002-09-05 at 03:57, Curt Sampson wrote:\n> On Tue, 3 Sep 2002, Bruce Momjian wrote:\n> \n> > Yep, this is where we are stuck; having an index span multiple tables\n> > in some way.\n> \n> Or implementing it by keeping all data in the table in which it\n> was declared. (I.e., supertable holds all rows; subtable holds\n> only the primary key and those columns of the row that are not\n> in the supertable.)\n\nHow would you do it for _multiple_ inheritance ?\n\nWhen implementing it on top of standard relational model you have more\nor less two ways to slice the problem \n\n1) the way you describe (parent holding common columns + child tables\nfor added child columns), which makes it easy to define constraints but\nhard to do inserts/updates/deletes on inherited tables\n\n2) the postgresql way (a new table for each child), which makes it hard\nto define constraints but easy to do inserts/updates/deletes.\n\n> From looking at the various discussions of this in books, and what\n> it appears to me that the SQL standard says, it seems that their\n> overall vision of table inheritance is to be consistent with the\n> implementation that I described above.\n\nYes. The SQL99 standard specifies only _single_ inheritance for tables +\nLIKE in column definition part, making the model somewhat similar to\nJava's (single inheritance + interfaces).\n\nThis way it could probably be done even more effectively than you\ndescribe by:\n\n1) keeping _all_ (not only the inherited columns) the data for\ninheritance hierarchy in the same physical file.\n\n2) having partial indexes (involving tableoid=thiskindoftable) for\npossible speeding up of SELECT .. 
ONLY queries.\n\n3) no changes to (unique) indexes - they still reference simple TID's\nwithout additional table part.\n\n4) update/delete of all child tables are trivial as they are actually\ndone in the same table and not using joins\n\n\nIt seems that single inheritance avoids other conceptual problems, like\nwhat to do with primary keys when inheriting from two tables that have\nthem.\n\n--------------------\nHannu\n\n\n",
"msg_date": "05 Sep 2002 10:05:09 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On 5 Sep 2002, Hannu Krosing wrote:\n\n> What I meant was that it is relatively more costly to update several\n> \"physical\" tables than updating one .\n\nOh, I see. Not that this is that big a deal, I think. Given that\nit doesn't work correctly at the moment, making it work fast is a\ndefinite second priority, I would think.\n\nOnce it's working right, one can always replace the internals with\nsomething else that does the same job but is more efficient.\n\n> > I agree that making constraints work in this model is very difficult and\n> > a lot of work.\n>\n> But again this is not _conceptually_ hard, just hard to implement\n> efficiently.\n\nNo, it's conceptually hard. Not all constraints are implemented with\njust a unique index you know. And changing a constraint means you have\nto check all the child tables, etc. etc. It's difficult just to track\ndown down all the things you have to try to preserve. Not to mention,\nthere's always the question of what happens to triggers and suchlike\nwhen handed a tuple with extra columns from what it expects, and having\nit modify the insert into a different table.\n\nThe beauty of storing all supertable columns in the supertable itself is\nthat the behaviour is automatically correct.\n\n> What I was actually trying to describe was that the tuple format would\n> be what it is currently, just stored in the same table with parent.\n\nSo what you're saying is that each tuple in the table would have a\nformat appropriate for its \"subtype,\" and the table would be full of\ntuples of varying types? At first blush, that seems like a reasonable\napproach, if it can be done.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 5 Sep 2002 17:52:08 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "I have a question about inheritance:\n\nYou have 2 tables: Programmer and employee. Programmer inherits employee. You \nput in a generic employee record for someone, but then she becomes a \nprogrammer. What do you do? (I borrowed this example from a book by C.J. \nDate, who posed this question). Do you DELETE then INSERT? Something seems \nwrong with that somehow. Are the postgres developers agreed upon how that \nsituation should be handled? What about the database users, and their \nexpectations of the behavior? \n\nI am not advocating that we remove inheritence (I say this because this topic \nhas generated some significant discussion about that). However, I will stick \nto the well-defined relational model until I see something useful from the \ninheritance system that is as well-defined. I agree it saves a few keystrokes \n(and can help organize things for you, as do objects in a programming \nlanguage), but mind is more at peace when I am actually sure of what's \nhappening. I can always throw more rules/views/triggers at the situation \nuntil I have a nice set of things to work with in the application.\n\nOr, I suppose, if someone shows me something that I can't do in the relational \nmodel, but can with inheritance, I might be convinced otherwise.\n\nRegards,\n\tJeff Davis\n\n\n\nOn Thursday 05 September 2002 01:05 am, Hannu Krosing wrote:\n> On Thu, 2002-09-05 at 03:57, Curt Sampson wrote:\n> > On Tue, 3 Sep 2002, Bruce Momjian wrote:\n> > > Yep, this is where we are stuck; having an index span multiple tables\n> > > in some way.\n> >\n> > Or implementing it by keeping all data in the table in which it\n> > was declared. 
(I.e., supertable holds all rows; subtable holds\n> > only the primary key and those columns of the row that are not\n> > in the supertable.)\n>\n> How would you do it for _multiple_ inheritance ?\n>\n> When implementing it on top of standard relational model you have more\n> or less two ways to slice the problem\n>\n> 1) the way you describe (parent holding common columns + child tables\n> for added child columns), which makes it easy to define constraints but\n> hard to do inserts/updates/deletes on inherited tables\n>\n> 2) the postgresql way (a new table for each child), which makes it hard\n> to define constraints but easy to do inserts/updates/deletes.\n>\n> > From looking at the various discussions of this in books, and what\n> > it appears to me that the SQL standard says, it seems that their\n> > overall vision of table inheritance is to be consistent with the\n> > implementation that I described above.\n>\n> Yes. The SQL99 standard specifies only _single_ inheritance for tables +\n> LIKE in column definition part, making the model somewhat similar to\n> Java's (single inheritance + interfaces).\n>\n> This way it could probably be done even more effectively than you\n> describe by:\n>\n> 1) keeping _all_ (not only the inherited columns) the data for\n> inheritance hierarchy in the same physical file.\n>\n> 2) having partial indexes (involving tableoid=thiskindoftable) for\n> possible speeding up of SELECT .. 
ONLY queries.\n>\n> 3) no changes to (unique) indexes - they still reference simple TID's\n> without additional table part.\n>\n> 4) update/delete of all child tables are trivial as they are actually\n> done in the same table and not using joins\n>\n>\n> It seems that single inheritance avoids other conceptual problems, like\n> what to do with primary keys when inheriting from two tables that have\n> them.\n>\n> --------------------\n> Hannu\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n",
"msg_date": "Thu, 5 Sep 2002 02:04:23 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, 2002-09-05 at 09:28, Curt Sampson wrote:\n> On 5 Sep 2002, Hannu Krosing wrote:\n> \n> > On Thu, 2002-09-05 at 03:57, Curt Sampson wrote:\n> >\n> > > Or implementing it by keeping all data in the table in which it\n> > > was declared. (I.e., supertable holds all rows; subtable holds\n> > > only the primary key and those columns of the row that are not\n> > > in the supertable.)\n> >\n> > How would you do it for _multiple_ inheritance ?\n> \n> Exactly the same way. Each column resides in only one physical table,\n> so you need only find the table it resides in, and do the insert there.\n> I'll be happy to provide an example if this is not clear.\n> \n> > 1) the way you describe (parent holding common columns + child tables\n> > for added child columns), which makes it easy to define constraints but\n> > hard to do inserts/updates/deletes on inherited tables\n> \n> I wouldn't say it makes it \"hard\" to do inserts, updates and deletes.\n> Postgres already has pretty near all of the code it needs to support\n> these updates, because these are the semantic equivalant of the separate\n> actions applied to the separate tables within one transaction.\n\nWhat I meant was that it is relatively more costly to update several\n\"physical\" tables than updating one .\n\n> > 2) the postgresql way (a new table for each child), which makes it hard\n> > to define constraints but easy to do inserts/updates/deletes.\n> \n> I agree that making constraints work in this model is very difficult and\n> a lot of work.\n\nBut again this is not _conceptually_ hard, just hard to implement\nefficiently.\n\n> > This way it could probably be done even more effectively than you\n> > describe by:\n> >\n> > 1) keeping _all_ (not only the inherited columns) the data for\n> > inheritance hierarchy in the same physical file.\n> \n> You appear to have delved into a different database layer than one\n> I'm looking at, here.\n\nprobably. 
I was describing to a way to efficiently implement single\ninheritance. \n\nThe layer was somewhere between physical files and logical tables, i.e.\nabove splitting stuff into main/toast and also above splitting big files\nto 1Gb chunks, but below logical tables, which are (or are not when\nomitting ONLY ;) still separate logically.\n\nPerhaps it could be named \"logical file\".\n\n> I was examining storage on the table level, which is unrelated to files.\n\n> (E.g., postgres sometimes stores a table in one file, sometimes in more\n> than one. MS SQL Server stores many tables in one file.\n> It doesn't matter which approach is used when\n> discussing the two inheritance implementation options above.)\n\nIt does not matter in case you are assuming that the storage model can't\nbe changed. The trick with inherited tables is that in some sense they\nare the same table and in another sense they are separate tables.\n\n> > 4) update/delete of all child tables are trivial as they are actually\n> > done in the same table and not using joins\n> \n> Or are you talking about storing all of the columns in a single\n> table? 
That's a possibility, but wouldn't it be costly to update\n> the entire table every time you add a new child table?\n\nYou should not need it, as the storage for existing tuples does not\nchange - even now you can do ADD COLUMN without touching existing\ntuples.\n\n> And table\n> scans on child tables would certainly be more costly if you had\n> many of them, becuase the effective row width would be much wider.\n\nIt would not be noticably wider (only 1 bit/column) even if I did\npropose storing all columns.\n\nWhat I was actually trying to describe was that the tuple format would\nbe what it is currently, just stored in the same table with parent.\n\n> But it might be worth thinking about.\n> \n> > It seems that single inheritance avoids other conceptual problems, like\n> > what to do with primary keys when inheriting from two tables that have\n> > them.\n> \n> I don't see where there's a conceptual problem here, either. With\n> multiple inheritance you can simply demote both keys to candidate\n> keys, and continue on as normal. (The only difference between a\n> primary key and a candidate key is that you can leave out the column\n> names when declaring foreign keys in another table.)\n\nThat's one possibility. The other would be to keep the one from the\nfirst table as primary and demote onlly the other primary keys.\n\nWith single inheritance you don't even have to think about it.\n\n-----------------\nHannu\n\n\n\n",
"msg_date": "05 Sep 2002 11:15:27 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Jeff Davis wrote:\n\n> You have 2 tables: Programmer and employee. Programmer inherits employee. You\n> put in a generic employee record for someone, but then she becomes a\n> programmer. What do you do? (I borrowed this example from a book by C.J.\n> Date, who posed this question). Do you DELETE then INSERT? Something seems\n> wrong with that somehow.\n\nThis is not so wrong. If you think about it, you have the same\nproblem in most object-oriented programming languages: a person\nobject can't generally easily become a subclass of itself after\nbeing created.\n\nThis is a case, I would say, where you simply don't want to use\ninheritance. A person has-a job, not is-a job.\n\n> What about the database users, and their expectations of the behavior?\n\nNobody really knows; table inheritance in databases is not well-defined.\n(Though perhaps the latest SQL spec. changes that.)\n\n> However, I will stick to the well-defined relational model until I see\n> something useful from the inheritance system that is as well-defined.\n\nAmen! :-)\n\n> Or, I suppose, if someone shows me something that I can't do in the\n> relational model, but can with inheritance, I might be convinced\n> otherwise.\n\nI think that most people are at this point agreed that table\ninheritance, at least as currently implemented in any known system,\ndoesn't offer anything that can't easily be done relationally.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 5 Sep 2002 18:16:48 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On 5 Sep 2002, Hannu Krosing wrote:\n\n> > Oh, I see. Not that this is that big a deal, I think. Given that\n> > it doesn't work correctly at the moment, making it work fast is a\n> > definite second priority, I would think.\n>\n> But choosing an implementation that _can_be_ made to work fast is not.\n\nI would say it definitely is. A correctly working implementation\ncan be replaced. An incorrectly working implementation destroys\ndata integrety.\n\nWhich is more important for PostgreSQL? Speed or maintaining data\nintegrity?\n\n> > Not to mention,\n> > there's always the question of what happens to triggers and suchlike\n> > when handed a tuple with extra columns from what it expects, and having\n> > it modify the insert into a different table.\n>\n> IMHO that the trigger should not be aware of underlying implementation -\n> so it needs not worry about modifying the insert into a different table.\n\nI agree.\n\n> > The beauty of storing all supertable columns in the supertable itself is\n> > that the behaviour is automatically correct.\n>\n> But \"automatically correct\" may not be what you want ;)\n>\n> What about trigger that generates a cached printname using function\n> printname(row) that is different for each table - here you definitely do\n> not want to run the function defined for base table for anything\n> inherited.\n\nRight. But that will be \"automatically correct\" when you store all\nbase data in the base table. It's when you start storing those data\nin other tables that the trigger can get confused.\n\nOr are you saying that when I insert a row into \"just\" a child\ntable, the trigger shouldn't be invoked on the \"parent table\"\nportion of that insert? If so, I'd strongly disagree. 
If that\ntrigger is acting as an integrety constraint on the base table,\nyou might destroy the table's integrity.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Thu, 5 Sep 2002 18:34:42 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, 2002-09-05 at 10:52, Curt Sampson wrote:\n> On 5 Sep 2002, Hannu Krosing wrote:\n> \n> > What I meant was that it is relatively more costly to update several\n> > \"physical\" tables than updating one .\n> \n> Oh, I see. Not that this is that big a deal, I think. Given that\n> it doesn't work correctly at the moment, making it work fast is a\n> definite second priority, I would think.\n\nBut choosing an implementation that _can_be_ made to work fast is not.\n\n> Once it's working right, one can always replace the internals with\n> something else that does the same job but is more efficient.\n\nI still think that choosing the right implementation can also help in\nmaking it work right.\n\n> > > I agree that making constraints work in this model is very difficult and\n> > > a lot of work.\n> >\n> > But again this is not _conceptually_ hard, just hard to implement\n> > efficiently.\n> \n> No, it's conceptually hard. Not all constraints are implemented with\n> just a unique index you know. And changing a constraint means you have\n> to check all the child tables, etc. etc. It's difficult just to track\n> down down all the things you have to try to preserve.\n\nIt may be a lot of work, but not _conceptually_ hard. 
Conceptually you\nhave to do the same thing as for a single table, but just for all\ninherited tables.\n\n> Not to mention,\n> there's always the question of what happens to triggers and suchlike\n> when handed a tuple with extra columns from what it expects, and having\n> it modify the insert into a different table.\n\nIMHO that the trigger should not be aware of underlying implementation -\nso it needs not worry about modifying the insert into a different table.\n\n> The beauty of storing all supertable columns in the supertable itself is\n> that the behaviour is automatically correct.\n\nBut \"automatically correct\" may not be what you want ;)\n\nWhat about trigger that generates a cached printname using function\nprintname(row) that is different for each table - here you definitely do\nnot want to run the function defined for base table for anything\ninherited.\n\n> > What I was actually trying to describe was that the tuple format would\n> > be what it is currently, just stored in the same table with parent.\n> \n> So what you're saying is that each tuple in the table would have a\n> format appropriate for its \"subtype,\" and the table would be full of\n> tuples of varying types? At first blush, that seems like a reasonable\n> approach, if it can be done.\n\nAt least it makes some parts easier ;)\n\n----------------\nHannu\n\n\n\n\n\n\n\n\n",
"msg_date": "05 Sep 2002 12:23:54 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": ">\n> This is not so wrong. If you think about it, you have the same\n> problem in most object-oriented programming languages: a person\n> object can't generally easily become a subclass of itself after\n> being created.\n>\n> This is a case, I would say, where you simply don't want to use\n> inheritance. A person has-a job, not is-a job.\n>\n\nBut a person is-a employee (allow me to momentarily step aside from the rules \nof english grammer, if you would), and a person is-a programmer. That's why I \ndidn't call my table \"job\" :) [1]\n\nI don't like the way some OO programming languages handle objects, if they \nmean to say you can't change an object's type without performing a logical \ndata copy to a new object. If you don't use some kind of extra layer of \nabstraction in C, you will end up with that problem: you'd need to copy all \nthat RAM over to change from one struct to another. Most people would rather \ntake that RAM copying hit than all the hits for allowing \"room to expand\" (at \nleast in some applications). However, postgres needs to provide that \"room to \nexpand\" for each tuple anyway, so to go through the same copying seems bad \n(especially since we're no longer just talking RAM). \n\nTake as an example python... it's easy to emulate other objects: just assign \nto the attribute, even if it's not there yet, it'll add the attribute. Same \nwith python, it's providing room to expand for it's objects already, so why \ndo all the copying? Now compare with Java, and see why you'd be annoyed. 
It \nhas the facilities to change the objects all around, but you can't do it.\n\nEven if you disregard all implementation details, and assume that the database \nis intelligent enough to not redundantly write data (and if you could name \none such database, I would like to know), you're still doing something that \ndoesn't logically make sense: you're deleting and inserting atomically, when \nthe more obvious logical path is to expand on the data you already carry \nabout an entity.\n\nI like entities to be mutable, at least as far as makes sense to an \napplication. Try telling an employee that as part of a promotion, they're \ngoing to be fired, lose their workstation, then be re-hired, and get a new \nworkstation; I bet he'd have an interesting expression on his face (hey, at \nleast postgres guarantees the \"A\" in ACID, or else bad things could happen to \nthat poor guy :)\n\nThanks for responding, and I agreed with everything else you said. As you \nmight have guessed, I don't much like \"most object-oriented languages\" if \nthat's what they're going to try to tell me I have to do. Python works \nnicely, however :)\n\nRegards,\n\tJeff Davis\n\n[1] Come to think of it, the JOIN operator seems to, at least on a first \nthought, represent the \"has-a\" relationship you describe. You could have the \ntuples \"manager\" and \"programmer\" in the table \"job\" and join with a \"people\" \ntable. Don't ask about inheritance yet for this model, I'm still thinking \nabout that one (does \"has-a\" even have an analogue to inheriteance?). Send me \nyour thoughts about this, if you should have any.\n",
"msg_date": "Thu, 5 Sep 2002 03:29:43 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, 2002-09-05 at 11:34, Curt Sampson wrote:\n> On 5 Sep 2002, Hannu Krosing wrote:\n> \n> > > Oh, I see. Not that this is that big a deal, I think. Given that\n> > > it doesn't work correctly at the moment, making it work fast is a\n> > > definite second priority, I would think.\n> >\n> > But choosing an implementation that _can_be_ made to work fast is not.\n> \n> I would say it definitely is. A correctly working implementation\n> can be replaced. An incorrectly working implementation destroys\n> data integrety.\n> \n> Which is more important for PostgreSQL? Speed or maintaining data\n> integrity?\n\nBoth of course. The lack of one often makes the other unusable.\n\nBut as MySQL experience suggest, often people select speed over data\nintegrity. OTOH sometimes you happily accept a 10sec delay in updates to\nhave data integrity (like when doing a money transfer over internet;)\n\n> > > Not to mention,\n> > > there's always the question of what happens to triggers and suchlike\n> > > when handed a tuple with extra columns from what it expects, and having\n> > > it modify the insert into a different table.\n> >\n> > IMHO that the trigger should not be aware of underlying implementation -\n> > so it needs not worry about modifying the insert into a different table.\n> \n> I agree.\n> \n> > > The beauty of storing all supertable columns in the supertable itself is\n> > > that the behaviour is automatically correct.\n> >\n> > But \"automatically correct\" may not be what you want ;)\n> >\n> > What about trigger that generates a cached printname using function\n> > printname(row) that is different for each table - here you definitely do\n> > not want to run the function defined for base table for anything\n> > inherited.\n> \n> Right. But that will be \"automatically correct\" when you store all\n> base data in the base table. 
It's when you start storing those data\n> in other tables that the trigger can get confused.\n> \n> Or are you saying that when I insert a row into \"just\" a child\n> table, the trigger shouldn't be invoked on the \"parent table\"\n> portion of that insert? If so, I'd strongly disagree.\n\nConceptually there are no \"portions\" of table - the trigger is invoked\non one _tuple_ exactly (pg has only row-level triggers), and each tuple\nbelongs to only one table regardless how it is implemented internally.\n\n> If that\n> trigger is acting as an integrety constraint on the base table,\n> you might destroy the table's integrity.\n\nWhat I try to say is that you should have the same freedom with triggers\nthat you have with select/insert/update/delete - you must be able to\nchoose if the trigger is on the parent table ONLY or on parent and all\nchildren. \n\nAnd you should be able to override a trigger for child table even if it\nis defined on parent as applying to all children - I guess that\noverriding by trigger _name_ would be what most people expect.\n\nSuppose you have a table CITIZEN with table-level constraint IS_GOOD\nwhich is defined as kills_not_others(CITIZEN). and there is table\nCIVIL_SERVANT (..) UNDER CITIZEN. Now you have just one table MILITARY\n(...) UNDER CIVIL_SERVANT, where you have other criteria for IS_GOOD so\nyou must either be able to override the trigger for that table (and its\nchildren) or make sure that the functions used are dynamically mached to\nthe actual tuple type (header in Relational Model parlance) so that\nkills_not_others(MILITARY) will be used, which presents the system\nMILITARYs view of the being good ;)\n\nWhat I'm after here is dynamic (and automatic) row level dispach of the\nright function based on row type - so that for rows in CITIZEN or\nCIVIL_SERVANT the function kills_not_others(CITIZEN) will be used but\nfor rows in MILITAY the kills_not_others(MILITARY) is used.\n\n---------\n Hannu\n\n\n",
"msg_date": "05 Sep 2002 15:15:06 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, 2002-09-05 at 12:29, Jeff Davis wrote:\n> >\n> > This is not so wrong. If you think about it, you have the same\n> > problem in most object-oriented programming languages: a person\n> > object can't generally easily become a subclass of itself after\n> > being created.\n> >\n> > This is a case, I would say, where you simply don't want to use\n> > inheritance. A person has-a job, not is-a job.\n> >\n> \n> But a person is-a employee (allow me to momentarily step aside from the rules \n> of english grammer, if you would), and a person is-a programmer. That's why I \n> didn't call my table \"job\" :) [1]\n> \n> I don't like the way some OO programming languages handle objects, if they \n> mean to say you can't change an object's type without performing a logical \n> data copy to a new object. If you don't use some kind of extra layer of \n> abstraction in C, you will end up with that problem: you'd need to copy all \n> that RAM over to change from one struct to another. Most people would rather \n> take that RAM copying hit than all the hits for allowing \"room to expand\" (at \n> least in some applications). However, postgres needs to provide that \"room to \n> expand\" for each tuple anyway, so to go through the same copying seems bad \n> (especially since we're no longer just talking RAM). 
\n\nI would like to have UPDATEs both up and down the inheritance hierarchy,\nso that when I have hierarchy\n\nOBJECT(id serial primary key)\n + HUMAN(name text,age int)\n + EMPLOYEE(salary numeric)\n + ENGINEER(workstation computer)\n + PHB(laptop computer)\n\nand ENGINEER named Bob\n\nI could do\n\nUPDATE ENGINEER \n TO PHB\n SET salary = salary * 2 + age * 1000,\n laptop.disksize = max(laptop.disksize ,\n workstation.disksize + 1000000)\n WHERE name='Bob'\n;\n\nto promote Bob from an engineer to phb, give him a salary rise and a\nlaptop with default configuration ensuring big enough disk to keep all\nhis old files, but still keep all FK related records.\n\n> Take as an example python... it's easy to emulate other objects: just assign \n> to the attribute, even if it's not there yet, it'll add the attribute. Same \n> with python, it's providing room to expand for it's objects already, so why \n> do all the copying?\n\nthat's unless you use the new-style objects and __slots__\n\n>>> class myobj(object):\n... __slots__ = ['a','b']\n... \n>>> M = myobj()\n>>> M.a =1\n>>> M.c =1\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\nAttributeError: 'myobj' object has no attribute 'c'\n>>> \n\n> Same with python, it's providing room to expand for it's objects already,\n> so why do all the copying?\n\n\n> [1] Come to think of it, the JOIN operator seems to, at least on a first \n> thought, represent the \"has-a\" relationship you describe. You could have the \n> tuples \"manager\" and \"programmer\" in the table \"job\" and join with a \"people\" \n> table. Don't ask about inheritance yet for this model, I'm still thinking \n> about that one (does \"has-a\" even have an analogue to inheriteance?).\n\nNot in inheritance, but in OO world attributes are used to express has-a\nrelations. 
So\n\n bob = people(name='Bob')\n bob.job = job('Manager')\n\nmakes an has-a relation between Bob and his job in python\n\nBTW, good programming guidelines in python tell you not to test if bob\nis-a something but rather test if the interface for something exists -\nto see if you can iterate over bob you do not test if bob is a sequence\nbut just try it:\n\ntry:\n for idea in bob:\n examine(idea)\nexcept TypeError:\n print 'Failed to iterate over %s %s !' % (bob,job.name, bob.name)\n\n---------------\nHannu\n\n\n\n\n\n",
"msg_date": "05 Sep 2002 16:00:23 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "I really like Hannu's idea of storing an entire (single-inheritance)\nhierarchy in a single file.\n\nI guess the question we need to ask ourselves is if we're prepared to\nabandon support of multiple inheritance. Personally I am, but...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Sep 2002 10:23:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance "
},
{
"msg_contents": "On Thu, 2002-09-05 at 19:23, Tom Lane wrote:\n> I really like Hannu's idea of storing an entire (single-inheritance)\n> hierarchy in a single file.\n> \n> I guess the question we need to ask ourselves is if we're prepared to\n> abandon support of multiple inheritance. Personally I am, but...\n\nSo am I, but I think we should move in stages -\n\n1) first implement the SQL99 standard \n CREATE TABLE mytable() UNDER parenttable ;\n using the above idea and make it work right vs constraints,\n triggers, functions, etc.\n\n This should include the ability to include other table structures\n using LIKE :\n\n CREATE TABLE engine(...);\n CREATE TABLE vehicule(...);\n CREATE TABLE car (\n model text,\n wheels wheel[],\n LIKE engine,\n ) UNDER vehicule;\n\n which could then hopefully be used for migrating most code of form\n\n CREATE TABLE car (\n model text primary key,\n wheels wheel[]\n ) INHERITS (vehicule, engine);\n\n it would be nice (maybe even neccessary) to keep the current\n functionality that columns introduced by LIKE are automatically\n added/renamed/deleted when LIKE's base table changes.\n\n2) when it is working announce non-SQL99-standard-and-broken INHERITS\n to be deprecated and removed in future.\n\n3) give people time for some releases to move over to UNDER + LIKE .\n Or if someone comes up with bright ideas/impementations for fixing\n multiple inheritance, then un-deprecate and keep it.\n\n4) else try to remove INHERITS.\n\n5) if too many people object, goto 3) ;)\n\n-------------------\nHannu\n\n\n",
"msg_date": "05 Sep 2002 20:10:59 +0500",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, Sep 05, 2002 at 10:23:02AM -0400, Tom Lane wrote:\n> I really like Hannu's idea of storing an entire (single-inheritance)\n> hierarchy in a single file.\n\nWouldn't this require solving the ALTER TABLE ADD COLUMN (to parent)\ncolumn ordering problem? \n\n> I guess the question we need to ask ourselves is if we're prepared to\n> abandon support of multiple inheritance. Personally I am, but...\n\nNo opinion - I've not used the inheritance much, since I'm not willing to\ngive up referential integrity.\n\nRoss\n",
"msg_date": "Thu, 5 Sep 2002 12:02:42 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Fri, 2002-09-06 at 03:19, Greg Copeland wrote:\n> On Thu, 2002-09-05 at 08:15, Hannu Krosing wrote:\n> > On Thu, 2002-09-05 at 11:34, Curt Sampson wrote:\n> > > On 5 Sep 2002, Hannu Krosing wrote:\n> \n> > > If that\n> > > trigger is acting as an integrety constraint on the base table,\n> > > you might destroy the table's integrity.\n> > \n> > What I try to say is that you should have the same freedom with triggers\n> > that you have with select/insert/update/delete - you must be able to\n> > choose if the trigger is on the parent table ONLY or on parent and all\n> > children. \n> \n> Sounds like a mechanism to make the distinction between virtual (child\n> can override parent) and non-virtual (child is constrained by the\n> parent) constraints are needed.\n> \n> After all, there are two basic needs for constraints. One is for\n> relational integrity and the other is business rule integrity. That is,\n> one seeks to ensure that the database makes sense in respect to the data\n> model (a shoe is a product) while the other is to enforce business rules\n> (products are never free). Seems like the DBA should be able to dictate\n> which domain his constraint falls into in some manner.\n>\n> > And you should be able to override a trigger for child table even if\nit\n> > is defined on parent as applying to all children - I guess that\n> > overriding by trigger _name_ would be what most people expect.\n> > \n> \n> That's the reason I used virtual and non-virtual above. If we think\n> using C++ idioms, the child is stuck with it if it's deemed\n> non-virtual. Generally speaking, if someone designed something with\n> that expectation in mind, there's probably a good reason for it. In\n> this case, we could assume that such non-virtual constraints would be to\n> help ensure proper RI. 
Something that otherwise, IMO, would be tossed\n> out with the bath water.\n\nI agree to this.\n\nWhat I described (making overriding decision solely in child) is\nprobably a bad idea.\n\n> > What I'm after here is dynamic (and automatic) row level dispach of the\n> > right function based on row type - so that for rows in CITIZEN or\n> > CIVIL_SERVANT the function kills_not_others(CITIZEN) will be used but\n> > for rows in MILITAY the kills_not_others(MILITARY) is used.\n> \n> I think we're touching on some form of RTTI information here. That is,\n> triggers and even functions may need to be able to dynamically determine\n> the row type that is actively being worked on.\n\nShould be easy if the row comes directly from a table : just use\ntableoid column.\n\n> If we're on the same page, I think that seemingly makes a lot of sense.\n> \n> What about the concept of columns being public or private? That is,\n> certain columns may not be inherited by a child? Any thought to such a\n> concept? Perhaps different types of table inheritance can be considered\n> in our model...has-a, is-a, etc...\n\nI can't fit this in my mental model of table inheritance for two reasons\n\n1) all parent table columns must be present in child\n\n2) granting some right to parent should automatically allow selecting\nfrom children\n\nboth are required for select/insert/update/delete to work on table and\nits children (i.e. without ONLY)\n\n\nBut maybe i just need to think more about it ;)\n\n------------------\nHannu\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "06 Sep 2002 01:51:51 +0500",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, 2002-09-05 at 08:15, Hannu Krosing wrote:\n> On Thu, 2002-09-05 at 11:34, Curt Sampson wrote:\n> > On 5 Sep 2002, Hannu Krosing wrote:\n\n> > If that\n> > trigger is acting as an integrety constraint on the base table,\n> > you might destroy the table's integrity.\n> \n> What I try to say is that you should have the same freedom with triggers\n> that you have with select/insert/update/delete - you must be able to\n> choose if the trigger is on the parent table ONLY or on parent and all\n> children. \n\nSounds like a mechanism to make the distinction between virtual (child\ncan override parent) and non-virtual (child is constrained by the\nparent) constraints are needed.\n\nAfter all, there are two basic needs for constraints. One is for\nrelational integrity and the other is business rule integrity. That is,\none seeks to ensure that the database makes sense in respect to the data\nmodel (a shoe is a product) while the other is to enforce business rules\n(products are never free). Seems like the DBA should be able to dictate\nwhich domain his constraint falls into in some manner.\n\n> \n> And you should be able to override a trigger for child table even if it\n> is defined on parent as applying to all children - I guess that\n> overriding by trigger _name_ would be what most people expect.\n> \n\nThat's the reason I used virtual and non-virtual above. If we think\nusing C++ idioms, the child is stuck with it if it's deemed\nnon-virtual. Generally speaking, if someone designed something with\nthat expectation in mind, there's probably a good reason for it. In\nthis case, we could assume that such non-virtual constraints would be to\nhelp ensure proper RI. 
Something that otherwise, IMO, would be tossed\nout with the bath water.\n\n> What I'm after here is dynamic (and automatic) row level dispach of the\n> right function based on row type - so that for rows in CITIZEN or\n> CIVIL_SERVANT the function kills_not_others(CITIZEN) will be used but\n> for rows in MILITAY the kills_not_others(MILITARY) is used.\n\nI think we're touching on some form of RTTI information here. That is,\ntriggers and even functions may need to be able to dynamically determine\nthe row type that is actively being worked on.\n\nIf we're on the same page, I think that seemingly makes a lot of sense.\n\nWhat about the concept of columns being public or private? That is,\ncertain columns may not be inherited by a child? Any thought to such a\nconcept? Perhaps different types of table inheritance can be considered\nin our model...has-a, is-a, etc...\n\n\nRegards,\n\n\tGreg Copeland",
"msg_date": "05 Sep 2002 17:19:32 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On 5 Sep 2002, Hannu Krosing wrote:\n\n> Suppose you have a table CITIZEN with table-level constraint IS_GOOD\n> which is defined as kills_not_others(CITIZEN). and there is table\n> CIVIL_SERVANT (..) UNDER CITIZEN. Now you have just one table MILITARY\n> (...) UNDER CIVIL_SERVANT, where you have other criteria for IS_GOOD....\n\nThis I very much disagree with.\n\nIn most object-oriented languages (Eiffel being a notable exception, IIRC),\nyou can't specify constraints on objects. But in a relational database,\nyou can specify constraints on tables, and it should *never* *ever* be\npossible to violate those constraints, or the constraints are pointless.\n\nSo if I have a constraint that says, \"no rows appearing in this\ntable will ever violate constraint X,\" and then you go and create\na way of inserting rows into that table that violate that constraint,\nI think you've just made the database into a non-relational database.\nI really don't want to break postgres' relational side for some\ninheritance features of dubious utility. Constraints should be explicitly\nremoved from tables if they are no longer needed, not implicitly removed\nthrough the creation of another table.\n\nI think we should settle this point before going any further.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 6 Sep 2002 14:37:59 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On 5 Sep 2002, Greg Copeland wrote:\n\n> Sounds like a mechanism to make the distinction between virtual (child\n> can override parent) and non-virtual (child is constrained by the\n> parent) constraints are needed.\n\nOh, I should mention that I have no problem with being able to declare a\nconstraint \"overridable\" by subtables, so long as it's not the default,\nand it's clear from the table definition that it might be overridden.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 6 Sep 2002 14:40:57 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, 5 Sep 2002, Jeff Davis wrote:\n\n> But a person is-a employee (allow me to momentarily step aside from\n> the rules of english grammer, if you would), and a person is-a\n> programmer. That's why I didn't call my table \"job\" :) [1]\n\nCertainly it's not the case that a person is-a job, by virtue of the\nfact that a person can have no job. Nor is it the case that a person\nis-a programmer; not all people are programmers.\n\nPerhaps you're reversing the sense of \"is-a\"? One says \"subtype is-a\nsupertype,\" not \"supertype is-a subtype.\"\n\nBut even reversing these, it's not the case that job is-a person, by\nvirtue of the fact that you cannot use a job anywhere you can use a\nperson. (A person can file his tax return, a job can't.) That might be a\nmatter of bad mappings of object names to real-world concepts, though.\n\nAs for \"programmer is-a person,\" yes, you could model things that way if\nyou really wanted to. But it's a bad way to do it because, as you point\nout, a person can change his job, or not have a job. Now what do you do\nwith that programmer-subtype-of-person object you created? I think in\nthis case English misled you: we do say that \"he is a programmer,\" but\nwhat we really mean is that \"one of the characteristics of that person\nis that he programs.\" So create a separate characteristic type and have\nthe person object \"have-a\" as many or as few of those characteristics as\nyou need.\n\n> I don't like the way some OO programming languages handle objects, if they\n> mean to say you can't change an object's type without performing a logical\n> data copy to a new object.\n\nThat's not a problem with the programming language; that's you\nmodelling things badly.\n\n> Take as an example python... it's easy to emulate other objects: just assign\n> to the attribute, even if it's not there yet, it'll add the attribute. 
Same\n> with python, it's providing room to expand for it's objects already, so why\n> do all the copying? Now compare with Java, and see why you'd be annoyed. It\n> has the facilities to change the objects all around, but you can't do it.\n\nYes, you can't do it in Java because you Can't Do It in a language where\nyou can specify static typing. If I have field that holds a String, I'm\ngiven a guarantee that, if I can put a reference in that field, it is\nand always will be a String.\n\nIn non-statically-typed languages that give you the option of changing\ntypes, you might give a referenc to a string, change the objects type on\nme, and then I might blow up when I try to use it later. These bugs tend\nto be quite difficult to track down because the source and manifestation\nof the problem can be widely separated in code and in time. That's why\nmost languages don't allow this.\n\n> ...when the more obvious logical path is to expand on the data you\n> already carry about an entity.\n\nYes, that's the perfectly obvious path. And that's just what the\nrelational model lets us do, and do very well.\n\nWhy do you want to use an ill-fitting, error-prone model when you've\nalready got something that works better?\n\n> [1] Come to think of it, the JOIN operator seems to, at least on a first\n> thought, represent the \"has-a\" relationship you describe.\n\nYou bet! Hey, this relational stuff doesn't suck so badly after\nall, does it? Especially for a 30-year old theory. :-)\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 6 Sep 2002 14:54:42 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On 6 Sep 2002, Hannu Krosing wrote:\n\n> > In most object-oriented languages (Eiffel being a notable exception, IIRC),\n> > you can't specify constraints on objects. But in a relational database,\n> > you can specify constraints on tables, and it should *never* *ever* be\n> > possible to violate those constraints, or the constraints are pointless.\n>\n> That's not how real world (which data is supposed to model) operates ;)\n\nSure it is. Please don't blame the language for being wrong when you\nincorrectly model things for your purposes. To chose a much simpler\nand more obvious example: if you stored birthdate as a date only, and\nsomeone complained that you're not born all day, but at a particular\ntime on that day, you don't blame the language for having the date type\nnot store the time of day. You fix your problem to use both a date and a\ntime to store that value.\n\nIf the language specifies that contstraints on tables are not to be\nviolated, then don't use those constraints when you don't want them.\n\n> To elaborate on Gregs example if you have table GOODS and under it a\n> table CAMPAIGN_GOODS then you may place a general overridable constraint\n> valid_prices on GOODS which checks that you dont sell cheaper than you\n> bought, but you still want sell CAMPAIGN_GOODS under aquiring price, so\n> you override the constraint for CAMPAIGN_GOODS.\n\nThis looks like a classic case of incorrect modelling to me. Does the\ngood itself change when it becomes a campaign_good? No. 
The price\nchanges, but that's obviously not an integral part of the good itself.\nSo separate your price information from your good information, and then\nyou can do things like have campaign prices, multiple prices per good\n(since you probably want to keep the original price information as\nwell), and so on.\n\nI'm really getting the feeling a lot of these applications that\nwant table inheritance want it just to be different, not because\nit provides anything useful.\n\nI am completely committed to object-oriented programming, and use\ninheritance heavily, so it's not that I don't understand or like the\nconcepts. But just because a concept works well in one type of use does\nnot mean it will do any good, or even not do harm, when brought into a\ncompletely different world.\n\n> SQL standard constraints should be non-overridable. I still think that\n> Constraint triggers should be overridable/dynamic.\n\nI still don't like it. Eiffel had good reasons for making the\nconstraints non-overridable. Other OO languages don't have constraints,\nor they would probably do the same.\n\nThat said, I could live with dynamic dispatch, if the default were\nto make it non-dynamic, and you had to add a special flag to make it\ndynamic. That way it would be obvious to the casual user or a DBA\nfamiliar with other databases but not postgres that something unusual is\ngoing on.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Fri, 6 Sep 2002 16:53:12 +0900 (JST)",
"msg_from": "Curt Sampson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Fri, 2002-09-06 at 07:37, Curt Sampson wrote:\n> On 5 Sep 2002, Hannu Krosing wrote:\n> \n> > Suppose you have a table CITIZEN with table-level constraint IS_GOOD\n> > which is defined as kills_not_others(CITIZEN). and there is table\n> > CIVIL_SERVANT (..) UNDER CITIZEN. Now you have just one table MILITARY\n> > (...) UNDER CIVIL_SERVANT, where you have other criteria for IS_GOOD....\n> \n> This I very much disagree with.\n> \n> In most object-oriented languages (Eiffel being a notable exception, IIRC),\n> you can't specify constraints on objects. But in a relational database,\n> you can specify constraints on tables, and it should *never* *ever* be\n> possible to violate those constraints, or the constraints are pointless.\n\nThat's not how real world (which data is supposed to model) operates ;)\n\nAs Greg already pointed out, there are two kinds of constraints -\ndatabase integrity constraints (foreign key, unique, not null, check),\nwhich should never be overridden and business-rule constraints which\nshould be overridable in child tables.\n\none can argue that the latter are not constraints at all, but they sure\nlook like constraints to me ;)\n\nTo elaborate on Gregs example if you have table GOODS and under it a\ntable CAMPAIGN_GOODS then you may place a general overridable constraint\nvalid_prices on GOODS which checks that you dont sell cheaper than you\nbought, but you still want sell CAMPAIGN_GOODS under aquiring price, so\nyou override the constraint for CAMPAIGN_GOODS.\n\n> So if I have a constraint that says, \"no rows appearing in this\n> table will ever violate constraint X,\" and then you go and create\n> a way of inserting rows into that table that violate that constraint,\n> I think you've just made the database into a non-relational database.\n\nSQL standard constraints should be non-overridable. I still think that\nConstraint triggers should be overridable/dynamic. 
\n\nOr maybe it is better to just make the check function should be\ndynamically dispatched, so the constraint will always hold, it just can\nmean different things for different types.\n\n> I really don't want to break postgres' relational side for some\n> inheritance features of dubious utility. Constraints should be explicitly\n> removed from tables if they are no longer needed, not implicitly removed\n> through the creation of another table.\n> \n> I think we should settle this point before going any further.\n\nIt seems that the dynamic dispatch of trigger function should be enough\nfor business-rule constraints. \n\nAnd it is also simpler and cleaner (both conceptually and to implement)\nif constraints themselves are not overridable.\n\nSo in my CAMPAIGN_GOODS example you just have different\nvalid_prices(GOODS) and valid_prices(CAMPAIGN_GOODS), but one constraint\non GOODS which states that price must be valid . \n\nDoing it this way ensures that you are not able to have a record in\nGOODS for which valid_price(ROW) does not hold.\n\nIf you don't want inherited tables to be able to override valid_price()\nuse it in CHECK constraint in GOODS, which should use the\nvalid_prices(cast(ROW as GOODS)) for any inherited type.\n\n-----------------\nHannu\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "06 Sep 2002 10:21:52 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Fri, 2002-09-06 at 09:53, Curt Sampson wrote:\n> \n> If the language specifies that contstraints on tables are not to be\n> violated, then don't use those constraints when you don't want them.\n\nBut what _should_ i use then if i want the same business rule on most\ntop-level types, but a changed one on some down the hierarchy ?\n \n> > To elaborate on Gregs example if you have table GOODS and under it a\n> > table CAMPAIGN_GOODS then you may place a general overridable constraint\n> > valid_prices on GOODS which checks that you dont sell cheaper than you\n> > bought, but you still want sell CAMPAIGN_GOODS under aquiring price, so\n> > you override the constraint for CAMPAIGN_GOODS.\n> \n> This looks like a classic case of incorrect modelling to me. Does the\n> good itself change when it becomes a campaign_good? No. The price\n> changes, but that's obviously not an integral part of the good itself.\n\nPerhaps we mean different things by good. I meant a GOOD to be a THING \nbought with the purpose of reselling. 
Price (actually prices: \nselling_price and buying_price) is what makes it a GOOD and thus it is\nan integral part of it.\n\n> So separate your price information from your good information, and then\n> you can do things like have campaign prices, multiple prices per good\n> (since you probably want to keep the original price information as\n> well), and so on.\n\nIt does not solve the problem described above - the price at which the\ngood is soled is still constrained differently for orninary and campaign\ngoods.\n\nin standard relational model you would make the distinction inside the\nconstraint (CHECK (selling_price > buying_price) OR is_campaign_good)\nbut this localises the check in wrong place - in OO model I'd expect it\nto be possible to define the constraint near the child type, not change\nthe parent constraint each time I derive new child types.\n\n> I'm really getting the feeling a lot of these applications that\n> want table inheritance want it just to be different, not because\n> it provides anything useful.\n\nAs with any other inheritance, it is just a way to organize stuff.\n\nIn case of being able to override constraints for child tables it can\nalso be a significant performance boost - if you have 10 000 000 goods\nin a table you don't want to change a constraint on GOODS to allow\ncampaign goods to be sold cheaper than bought as it would have to check\nall goods for validity according to new constraint - putting the\nconstraint on just CAMPAIGN_GOODS will enable the DB engine to check\njust tuples in CAMPAIGN_GOODS.\n\n> I am completely committed to object-oriented programming, and use\n> inheritance heavily, so it's not that I don't understand or like the\n> concepts. 
But just because a concept works well in one type of use does\n> not mean it will do any good, or even not do harm, when brought into a\n> completely different world.\n\n Surely great caution is needed when defining the desired behaviour.\n\n> > SQL standard constraints should be non-overridable. I still think that\n> > Constraint triggers should be overridable/dynamic.\n> \n> I still don't like it. Eiffel had good reasons for making the\n> constraints non-overridable. Other OO languages don't have constraints,\n> or they would probably do the same.\n> \n> That said, I could live with dynamic dispatch, if the default were\n> to make it non-dynamic, and you had to add a special flag to make it\n> dynamic. That way it would be obvious to the casual user or a DBA\n> familiar with other databases but not postgres that something unusual is\n> going on.\n\nThat seems about the right compromise between constraining and developer\nfreedom.\n\n-------------\nHannu\n\n\n\n\n\n\n\n",
"msg_date": "06 Sep 2002 14:53:37 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Thu, 2002-09-05 at 15:51, Hannu Krosing wrote:\n> On Fri, 2002-09-06 at 03:19, Greg Copeland wrote:\n> > \n> > What about the concept of columns being public or private? That is,\n> > certain columns may not be inherited by a child? Any thought to such a\n> > concept? Perhaps different types of table inheritance can be considered\n> > in our model...has-a, is-a, etc...\n> \n> I can't fit this in my mental model of table inheritance for two reasons\n> \n> 1) all parent table columns must be present in child\n\nOkay, I must admit, I'm not really sure why. If we look at it in a\nphysical versus logical manner, even if it's physically there, why must\nit be logically exposed? Can you help me understand why it would even\nneed to physically be there. After all, if a child can't update it,\nthey don't need to see it.\n\n> \n> 2) granting some right to parent should automatically allow selecting\n> from children\n\nUnless the parent deemed it inappropriate access (private)?\n\nIf a column were deemed private, that would have a couple of\nstipulations on it. That is, it would have to ensure that \"NOT NULL\"\nwhere not one of the constraints, or, if it did, ensure that a default\nvalue were also provided.\n\n> \n> both are required for select/insert/update/delete to work on table and\n> its children (i.e. without ONLY)\n> \n> \n> But maybe i just need to think more about it ;)\n> \n\nWell, I guess I'm lagging behind you on this manner. Perhaps \"holding\nmy hand\" and explaining it a bit will allow you to work through it some\nmore and help bring me in line with what you're thinking.\n\nGreg",
"msg_date": "06 Sep 2002 08:01:27 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Fri, 2002-09-06 at 07:53, Hannu Krosing wrote:\n> On Fri, 2002-09-06 at 09:53, Curt Sampson wrote:\n> > This looks like a classic case of incorrect modelling to me. Does the\n> > good itself change when it becomes a campaign_good? No. The price\n> > changes, but that's obviously not an integral part of the good itself.\n> \n> Perhaps we mean different things by good. I meant a GOOD to be a THING \n> bought with the purpose of reselling. Price (actually prices: \n> selling_price and buying_price) is what makes it a GOOD and thus it is\n> an integral part of it.\n\nNo matter now you look at the example, invalidating it does not address\nthe issue raised as it still exists. Either way, Hannu and I seem to\nagree that some class of constraints need to be able to be overridden.\n\n> In case of being able to override constraints for child tables it can\n> also be a significant performance boost - if you have 10 000 000 goods\n> in a table you don't want to change a constraint on GOODS to allow\n> campaign goods to be sold cheaper than bought as it would have to check\n> all goods for validity according to new constraint - putting the\n> constraint on just CAMPAIGN_GOODS will enable the DB engine to check\n> just tuples in CAMPAIGN_GOODS.\n\nI had not considered this before. Does that still hold true if we go\nwith a parent contains all columns implementation? Of are you simply\nsaying that it doesn't matter as when the constraint were applied it\nwould only scan the rows the below to the child? Perhaps this doesn't\nmatter for this portion of the conversation. But hey, I was curious. \n:)\n\n> \n> > > SQL standard constraints should be non-overridable. I still think that\n> > > Constraint triggers should be overridable/dynamic.\n> > \n> > I still don't like it. Eiffel had good reasons for making the\n> > constraints non-overridable. 
Other OO languages don't have constraints,\n> > or they would probably do the same.\n\nWell Curt, as you outlined above (clipped out) about it being a\ndifferent world...I think also applies here. IMO, we are treading\nlightly on new and perhaps thin ground so we need to be careful that we\napply common parallels and idioms only we are certain that they need\napply. What I'm trying to say is, just because it's not allowed in\nEiffel does have to mean the same applies here.\n\n> > \n> > That said, I could live with dynamic dispatch, if the default were\n> > to make it non-dynamic, and you had to add a special flag to make it\n> > dynamic. That way it would be obvious to the casual user or a DBA\n> > familiar with other databases but not postgres that something unusual is\n> > going on.\n> \n> That seems about the right compromise between constraining and developer\n> freedom.\n> \n\nI agree. That does appear to be pointing us in a conservatively sane\nand safe direction.\n\n\nGreg",
"msg_date": "06 Sep 2002 08:19:51 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "> On Fri, 2002-09-06 at 07:37, Curt Sampson wrote:\n> > On 5 Sep 2002, Hannu Krosing wrote:\n> > \n> > > Suppose you have a table CITIZEN with table-level constraint IS_GOOD\n> > > which is defined as kills_not_others(CITIZEN). and there is table\n> > > CIVIL_SERVANT (..) UNDER CITIZEN. Now you have just one table MILITARY\n> > > (...) UNDER CIVIL_SERVANT, where you have other criteria for IS_GOOD....\n> > \n> > This I very much disagree with.\n> > \n> > In most object-oriented languages (Eiffel being a notable exception, IIRC),\n> > you can't specify constraints on objects. But in a relational database,\n> > you can specify constraints on tables, and it should *never* *ever* be\n> > possible to violate those constraints, or the constraints are pointless.\n> \n> That's not how real world (which data is supposed to model) operates ;)\n> \n> As Greg already pointed out, there are two kinds of constraints -\n> database integrity constraints (foreign key, unique, not null, check),\n> which should never be overridden and business-rule constraints which\n> should be overridable in child tables.\n> \n> one can argue that the latter are not constraints at all, but they sure\n> look like constraints to me ;)\n> \n> To elaborate on Gregs example if you have table GOODS and under it a\n> table CAMPAIGN_GOODS then you may place a general overridable constraint\n> valid_prices on GOODS which checks that you dont sell cheaper than you\n> bought, but you still want sell CAMPAIGN_GOODS under aquiring price, so\n> you override the constraint for CAMPAIGN_GOODS.\n\nWhat that tells me is that the constraint, valid_prices, shouldn't have been \non GOODS in the first place. 
If it is not a legitimate constraint for the \nchildren, then it is not a legitimate constraint for the parent.\n\nIn human inheritance, if you marry someone with \"funny coloured skin,\" you \ndon't get to choose that your children won't have \"funny coloured skin.\" \nThat's a pretty forcible \"constraint.\" :-).\n\nFor the GOODS situation, the constraint ought not to be on GOODS in the first \nplace. There ought to be a table ORDINARY_GOODS, or some such thing, to which \nthe constraint applies, and from which CAMPAIGN_GOODS will _not_ be inheriting.\n\n> > So if I have a constraint that says, \"no rows appearing in this\n> > table will ever violate constraint X,\" and then you go and create\n> > a way of inserting rows into that table that violate that constraint,\n> > I think you've just made the database into a non-relational database.\n> \n> SQL standard constraints should be non-overridable. I still think that\n> Constraint triggers should be overridable/dynamic. \n> \n> Or maybe it is better to just make the check function should be\n> dynamically dispatched, so the constraint will always hold, it just can\n> mean different things for different types.\n\nOr maybe if someone is doing an Object Oriented design, and making extensive \nuse of inheritance, they'll need to apply constraints in a manner that allow \nthem to be properly inherited.\n--\n(concatenate 'string \"aa454\" \"@freenet.carleton.ca\")\nhttp://cbbrowne.com/info/\nIf a cow laughed, would milk come out its nose? \n\n\n",
"msg_date": "Fri, 06 Sep 2002 09:57:03 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Inheritance "
},
{
"msg_contents": "On Fri, 2002-09-06 at 08:57, [email protected] wrote:\n> > On Fri, 2002-09-06 at 07:37, Curt Sampson wrote:\n> > > On 5 Sep 2002, Hannu Krosing wrote:\n> > \n> > To elaborate on Gregs example if you have table GOODS and under it a\n> > table CAMPAIGN_GOODS then you may place a general overridable constraint\n> > valid_prices on GOODS which checks that you dont sell cheaper than you\n> > bought, but you still want sell CAMPAIGN_GOODS under aquiring price, so\n> > you override the constraint for CAMPAIGN_GOODS.\n> \n> What that tells me is that the constraint, valid_prices, shouldn't have been \n> on GOODS in the first place. If it is not a legitimate constraint for the \n> children, then it is not a legitimate constraint for the parent.\n> \n\nI don't agree with you on that point. This concept is common to many\nOO-implementations. Unless you can come up with a powerful argument as\nto why our \"to-be\" picture should never do this, I'm less than\nconvinced.\n\n> In human inheritance, if you marry someone with \"funny coloured skin,\" you \n> don't get to choose that your children won't have \"funny coloured skin.\" \n> That's a pretty forcible \"constraint.\" :-).\n> \n\nFine, but that only works for YOUR specific example. In that example,\nthe color constraint should be non-virtual, meaning, the child should\nnot be able to change it. On the other hand, if I replace \"human\" with\n\"metal product\", hopefully I won't be stuck with gun metal gray for\nevery derived product. 
Hopefully, somewhere along the lines, I'll be\nable to override the parent's color constraint.\n\n> > Or maybe it is better to just make the check function should be\n> > dynamically dispatched, so the constraint will always hold, it just can\n> > mean different things for different types.\n> \n> Or maybe if someone is doing an Object Oriented design, and making extensive \n> use of inheritance, they'll need to apply constraints in a manner that allow \n> them to be properly inherited.\n\nThe problem with that assumption is that there is normally nothing wrong\nwith having seemingly mutually exclusive sets of *business rules* for a\nparent and child.\n\nGreg",
"msg_date": "06 Sep 2002 10:27:26 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "Oops! [email protected] (Greg Copeland) was seen spray-painting on a wall:\n> --=-eu74lKXry3SVx8eZ/qBD\n> Content-Type: text/plain\n> Content-Transfer-Encoding: quoted-printable\n> On Fri, 2002-09-06 at 08:57, [email protected] wrote:\n>> > On Fri, 2002-09-06 at 07:37, Curt Sampson wrote:\n>> > > On 5 Sep 2002, Hannu Krosing wrote:\n>> > To elaborate on Gregs example if you have table GOODS and under it a\n>> > table CAMPAIGN_GOODS then you may place a general overridable constraint\n>> > valid_prices on GOODS which checks that you dont sell cheaper than you\n>> > bought, but you still want sell CAMPAIGN_GOODS under aquiring price, so\n>> > you override the constraint for CAMPAIGN_GOODS.\n\n>> What that tells me is that the constraint, valid_prices, shouldn't\n>> have been on GOODS in the first place. If it is not a legitimate\n>> constraint for the children, then it is not a legitimate constraint\n>> for the parent.\n\n> I don't agree with you on that point. This concept is common to\n> many OO-implementations. Unless you can come up with a powerful\n> argument as to why our \"to-be\" picture should never do this, I'm\n> less than convinced.\n\nIf the plan is for table CAMPAIGN_GOODS to virtually be a view on GOODS,\nthen I'd say it _is_ necessary.\n\n>> In human inheritance, if you marry someone with \"funny coloured skin,\" yo=\n> u=20\n>> don't get to choose that your children won't have \"funny coloured skin.\"=\n> =20=20\n>> That's a pretty forcible \"constraint.\" :-).\n>>=20\n\nIs there something broken with your mailer? It's reformatting quotes\nrather horribly...\n\n> Fine, but that only works for YOUR specific example. In that\n> example, the color constraint should be non-virtual, meaning, the\n> child should not be able to change it. On the other hand, if I\n> replace \"human\" with \"metal product\", hopefully I won't be stuck\n> with gun metal gray for every derived product. 
Hopefully, somewhere\n> along the lines, I'll be able to override the parent's color\n> constraint.\n\nThat happens by _adding_ an additional characteristic, presumably that\nof \"what kind of paint the metal is covered with.\" That doesn't\noverride the fundamental constraint that if it's a metal product,\nthere _will_ be metallic properties.\n\nIf you decide to add in some \"non-metallic\" products, then it would be\n_silly_ to have them inherit all their characteristics from\n\"METAL_PRODUCTS;\" they should head back up the class hierarchy and\ninherit their basic characteristics from the _appropriate_ parent.\n\nReality, with the \"GOODS/CAMPAIGN_GOODS\" example, is that GOODS isn't\nthe appropriate parent class for CAMPAIGN_GOODS. Both should be\ninheriting the common characteristics from some common ancestor. If\nthat is done, then there's nothing to \"override.\"\n\n>> > Or maybe it is better to just make the check function should be\n>> > dynamically dispatched, so the constraint will always hold, it just can\n>> > mean different things for different types.\n>>=20\n>> Or maybe if someone is doing an Object Oriented design, and making extens=\n> ive=20\n>> use of inheritance, they'll need to apply constraints in a manner that al=\n> low=20\n>> them to be properly inherited.\n\n> The problem with that assumption is that there is normally nothing\n> wrong with having seemingly mutually exclusive sets of *business\n> rules* for a parent and child.\n\nIf the rules are totally different, it begs the question of why they\n_should_ be considered to be related in a \"parent/child\" relationship.\n\nIt may well be that they _aren't_ related as \"parent/child.\" They may\nmerely be \"cousins,\" sharing some common ancestors.\n-- \n(concatenate 'string \"chris\" \"@cbbrowne.com\")\nhttp://cbbrowne.com/info/spreadsheets.html\n\"Note that if I can get you to `su and say' something just by asking,\nyou have a very serious security problem on your system and you 
should\nlook into it.\" -- Paul Vixie, vixie-cron 3.0.1 installation notes\n",
"msg_date": "Fri, 06 Sep 2002 12:05:03 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "\n\nThere was a comment earlier that was not really addressed.\nWhat can you do with table inheritance that you can not do\nwith a relational implementation? Or what would work *better*\nas inheritance? (you define better)\n\nThis is a genuine question, not a snarky comment. I really\nwant to know. This is the reason I can think of to use\ninheritance: Several tables have a common set of attributes and\nthere is some reason for these tables to be separate AND there\nis some reason for the common columns to be queried en masse.\nWhat kinds of \"some reasons\" are there, though? And if my\ncondition for using table inheritance is lacking or misguided, what should\nbe the criteria for using table inheritance?\n\nCreating indexes across tables is a project. Is it the most important\nproject? Will it benefit the most users? Will it benefit any users?\nTheory is great and important, but if no one uses the functionality,\nwho cares? If these changes will enable people to use the functionality\nthat until now had been too much of a PITA then it might be worth\nit. However, I suspect the majority of people who would use these\nchanges are participating in these discussions.\n\nThese features were never widely used in Illustra nor Informix although\ntheir implementations were a little smoother imho.\n\nTo weigh in on the constraints issues, it seems problematic\nthat currently some constraints (check) are inherited and\nothers are not (foreign keys). The choice of which ones are\nor aren't is clear to people familiar with the implementation\nbut what about the rest of the world who just want some\nconsistent rule.\n\nI also agree with the people who say, if we inherit constraints,\nthen we must be able to override them in the subtables.\nI like the suggested \"LOCAL\" keyword, myself.\n\ncheers,\n\nelein\n\n\n:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:\n [email protected] (510)543-6079\n \"Taking a Trip. 
Not taking a Trip.\" --anonymous\n:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:\n\n",
"msg_date": "Fri, 06 Sep 2002 11:00:01 -0700",
"msg_from": "elein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Fri, 2002-09-06 at 11:05, [email protected] wrote:\n> Oops! [email protected] (Greg Copeland) was seen spray-painting on a wall:\n> >> That's a pretty forcible \"constraint.\" :-).\n> >>=20\n> \n> Is there something broken with your mailer? It's reformatting quotes\n> rather horribly...\n\nHmm...not that I know off. Never had complaints before anyways. Looks\nlike an issue with MIME contents...perhaps your mailer doesn't properly\nparse some MIME and/or mine is hosing it some how. Not really sure.\n\n> Reality, with the \"GOODS/CAMPAIGN_GOODS\" example, is that GOODS isn't\n> the appropriate parent class for CAMPAIGN_GOODS. Both should be\n> inheriting the common characteristics from some common ancestor. If\n> that is done, then there's nothing to \"override.\"\n> \n\nYou can complain about and redefine the model to suit your needs all day\nlong and get no where. It doesn't change the need for it. Fact is, it\nwould be nice to allow. Fact is, OO-implementations tend to allow\nthis. I'm quite happy to let you go to every OO computer language camp\nand inform them that they've done it all wrong. ;)\n\nCiting that a specific example is all wrong hardly invalidates the\nconcept. Since we are pretty much at the conceptual stage, I welcome a\nconceptual argument on why this is bad and should never be done. \nPlease, be high level and generic. After all, I too can give you a\nhundred specific reasons why a cat is not dog (i.e. bad model)...but it\ndoes nothing to facilitate the topic at hand.\n\n> > The problem with that assumption is that there is normally nothing\n> > wrong with having seemingly mutually exclusive sets of *business\n> > rules* for a parent and child.\n> \n> If the rules are totally different, it begs the question of why they\n> _should_ be considered to be related in a \"parent/child\" relationship.\n\nBecause this is how the real world works. Often there are exceptions to\nthe rules. 
When these rules differ, I've not seen a valid high level\nconceptual reason that should prevent it.\n\nExample:\n\nanimal\n\tquadruped (has 4-trunk limbs)\n\t\tdog\n\t\tinjuredDog (has 0 or more trunk limbs)\n\nHopefully we can agree that a dog is still a dog even if it only has\nthree legs? Hopefully you'll realize this was given to illustrate an\nexample and to prove a point. Sometimes a model needs to allow for\nexceptions to the rule. You can argue that a three-legged dog is no\nlonger a quadruped but I prefer to believe that it is a quadruped which\njust happens to be an exception to the rule.\n\t\n> \n> It may well be that they _aren't_ related as \"parent/child.\" They may\n> merely be \"cousins,\" sharing some common ancestors.\n\nYes, it's true. Sometimes the wrong model is applied but that hardly\ninvalidates the concept or alleviates the need.\n\nRegards,\n\n\tGreg Copeland",
"msg_date": "06 Sep 2002 15:23:40 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "On Fri, 2002-09-06 at 19:00, elein wrote:\n> \n> \n> There was a comment earlier that was not really addressed.\n> What can you do with table inheritance that you can not do\n> with a relational implementation? Or what would work *better*\n> as inheritance? (you define better)\n\nThere is nothing that you cannot do in some way; that way may not be\nvery convenient compared to the use of inheritance. I consider\nsimplicity to be preferable to conceptual purity.\n\n> This is a genuine question, not a snarky comment. I really\n> want to know. This is the reason I can think of to use\n> inheritance: Several tables have a common set of attributes and\n> there is some reason for these tables to be separate AND there\n> is some reason for the common columns to be queried en masse.\n> What kinds of \"some reasons\" are there, though? And if my\n> condition for using table inheritance is lacking or misguided, what should\n> be the criteria for using table inheritance?\n\nI use it when a group of tables are closely related; they are all\nmembers of some higher class. For example:\n\n person <.......................> address\n |\n +--------------+--------------+\n | |\n organisation individual <......> pay_tax\n | |\n +--------+--------+ +---------+---------+\n | | | | | |\n customer supplier ...etc... staff homeworker ...etc...\n |\n +----+-------------+\n | |\nhome_customer export_customer\n\nIt is convenient to use a higher class when you are interested in all\nits members and only in the attributes of the higher class. So I can\nsay\n\n SELECT * FROM person,address\n WHERE address.person = person.id AND\n address.town = 'London';\n\nto get all rows for people in London. 
I will only get those attributes\nthat are in person itself; if I want to know about credit limits, that\nis only relevant in the customer hierarchy and I have to SELECT from\ncustomer instead..\n\nSimilarly, I can use the whole customer hierarchy when changing or\nreporting on outstanding customer balances.\n\nIf foreign key relations were valid against an inheritance tree, I could\nimplement it for a table of addresses referencing the highest level\n(every person has an address) and of pay and tax records at the\nindividual level. These don't change as you go down the hierarchy, but\na purely relational implementation has to be redone at each level. A\nreciprocal relation requires an extra table to hold all the hierarchy's\nkeys and that in turn needs triggers to keep that table maintained.\n(I.e., person should have a FK reference to address and address to\nperson; instead, address needs a reference to person_keys, which I have\nto create because FK against a hierarchy isn't valid.) The lack of\ninherited RI makes the design more complex and more difficult to\nunderstand.\n\n-- \nOliver Elphick [email protected]\nIsle of Wight, UK \nhttp://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"For whosoever shall call upon the name of the Lord \n shall be saved.\" Romans 10:13 \n\n",
"msg_date": "07 Sep 2002 16:33:18 +0100",
"msg_from": "Oliver Elphick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "\n\nAt 08:33 AM 9/7/02, Oliver Elphick wrote:\n>On Fri, 2002-09-06 at 19:00, elein wrote:\n> >\n> >\n> > There was a comment earlier that was not really addressed.\n> > What can you do with table inheritance that you can not do\n> > with a relational implementation? Or what would work *better*\n> > as inheritance? (you define better)\n>\n>There is nothing that you cannot do in some way; that way may not be\n>very convenient compared to the use of inheritance. I consider\n>simplicity to be preferable to conceptual purity.\n\nyes, simplicity is a very reasonable criteria for better.\n\n> > This is a genuine question, not a snarky comment. I really\n> > want to know. This is the reason I can think of to use\n> > inheritance: Several tables have a common set of attributes and\n> > there is some reason for these tables to be separate AND there\n> > is some reason for the common columns to be queried en masse.\n> > What kinds of \"some reasons\" are there, though? And if my\n> > condition for using table inheritance is lacking or misguided, what should\n> > be the criteria for using table inheritance?\n\nIn non-OO terms, you have both reasons for tables to\nbe separate and reasons to query an entire hierarchy.\nYour exact reasons are clear and reasonable.\nThis is helpful.\n\n>I use it when a group of tables are closely related; they are all\n>members of some higher class. For example:\n>\n> person <.......................> address\n> |\n> +--------------+--------------+\n> | |\n> organisation individual <......> pay_tax\n> | |\n> +--------+--------+ +---------+---------+\n> | | | | | |\n> customer supplier ...etc... staff homeworker ...etc...\n> |\n> +----+-------------+\n> | |\n>home_customer export_customer\n>\n>It is convenient to use a higher class when you are interested in all\n>its members and only in the attributes of the higher class. 
So I can\n>say\n>\n> SELECT * FROM person,address\n> WHERE address.person = person.id AND\n> address.town = 'London';\n>\n>to get all rows for people in London. I will only get those attributes\n>that are in person itself; if I want to know about credit limits, that\n>is only relevant in the customer hierarchy and I have to SELECT from\n>customer instead..\n>\n>Similarly, I can use the whole customer hierarchy when changing or\n>reporting on outstanding customer balances.\n\nI don't think table inheritance will \"go away\" and so being\nconsistent about the indexes and constraints is necessary\nin order to keep its usage simpler. This might lessen the PITA\nfactor for a few more people, but we should prioritize the\nproject. I think few people have put the thought into it that\nyou have.\n\n\n>If foreign key relations were valid against an inheritance tree, I could\n>implement it for a table of addresses referencing the highest level\n>(every person has an address) and of pay and tax records at the\n>individual level. These don't change as you go down the hierarchy, but\n>a purely relational implementation has to be redone at each level. A\n>reciprocal relation requires an extra table to hold all the hierarchy's\n>keys and that in turn needs triggers to keep that table maintained.\n>(I.e., person should have a FK reference to address and address to\n>person; instead, address needs a reference to person_keys, which I have\n>to create because FK against a hierarchy isn't valid.) 
The lack of\n>inherited RI makes the design more complex and more difficult to\n>understand.\n>\n>--\n>Oliver Elphick [email protected]\n>Isle of Wight, UK\n>http://www.lfix.co.uk/oliver\n>GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"For whosoever shall call upon the name of the Lord\n> shall be saved.\" Romans 10:13\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n\n:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:\n [email protected] (510)543-6079\n \"Taking a Trip. Not taking a Trip.\" --anonymous\n:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:\n\n",
"msg_date": "Sat, 07 Sep 2002 09:54:33 -0700",
"msg_from": "elein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inheritance"
},
{
"msg_contents": "I am working on getting a shrink-wrapped version of PostgreSQL for Windows\n\nCurrently it installs a customized version of Cygwin, PostgreSQL 7.2.3, \ncygipc, psqlodbc, and pgadminII\n\nI currently have the setup done.\n\nThe target audience is not the enterprise, it is aimed at people using \nAccess wanting to upgrade.\n\nI've looked long and hard and can't find any license issues. Does anyone \nknow of any that I may have missed? As far as I can see, as long as I \nmaintain GPL restrictions, I should be fine.\n\n\n\n",
"msg_date": "Tue, 03 Dec 2002 01:17:23 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Shrinkwrap Windows Product, any issues? Anyone?"
},
{
"msg_contents": "> I've looked long and hard and can't find any license issues. Does anyone \n> know of any that I may have missed? As far as I can see, as long as I \n> maintain GPL restrictions, I should be fine.\n\nPostgreSQL isn't licensed under the GPL, so it sounds to me as though you're \nconfused about the licensing issues.\n--\n(concatenate 'string \"cbbrowne\" \"@cbbrowne.com\")\nhttp://www3.sympatico.ca/cbbrowne/lsf.html\n\"My mom said she learned how to swim. Someone took her out in the lake\nand threw her off the boat. That's how she learned how to swim. I\nsaid, 'Mom, they weren't trying to teach you how to swim.' \" \n-- Paula Poundstone\n\n\n",
"msg_date": "Tue, 03 Dec 2002 07:16:29 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Shrinkwrap Windows Product, any issues? Anyone? "
},
{
"msg_contents": "\n\[email protected] wrote:\n\n>>I've looked long and hard and can't find any license issues. Does anyone \n>>know of any that I may have missed? As far as I can see, as long as I \n>>maintain GPL restrictions, I should be fine.\n>> \n>>\n>\n>PostgreSQL isn't licensed under the GPL, so it sounds to me as though you're \n>confused about the licensing issues.\n>\nI'm not confused about the licensing issues. PostgreSQL is less \nrestrictive than is GPL. Maybe I should have phrased it as the most \nrestrictive license is GPL, so as long as I maintain GPL restrictions I \nshould be fine.\n\n>--\n>(concatenate 'string \"cbbrowne\" \"@cbbrowne.com\")\n>http://www3.sympatico.ca/cbbrowne/lsf.html\n>\"My mom said she learned how to swim. Someone took her out in the lake\n>and threw her off the boat. That's how she learned how to swim. I\n>said, 'Mom, they weren't trying to teach you how to swim.' \" \n>-- Paula Poundstone\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n> \n>\n\n\n",
"msg_date": "Tue, 03 Dec 2002 08:01:24 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shrinkwrap Windows Product, any issues? Anyone?"
},
{
"msg_contents": "Wow, there's been a lot of discussion on this issue!\n\n\nWhile it won't address some of the issues that have been brought up,\nthere is one very simple thing we can do that will help sysadmins\nquite a lot: eliminate the postmaster's use of $PGDATA, and force the\ndata directory to be specified on the command line. It's fine if the\nshell scripts still use $PGDATA, but the postmaster should not.\n\nThe reason is that at least it'll always be possible for\nadministrators to figure out where the data is by looking at the\noutput of 'ps'.\n\n\nWhile I'd prefer to also see a GUC variable added to the config file\nthat tells the postmaster where to look for the data, the above will\nat least simplify the postmaster's code (since the logic for dealing\nwith $PGDATA can be eliminated) while eliminating some of the trouble\nadministrators currently have with it.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Thu, 13 Feb 2003 17:22:18 -0800",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Location of the configuration files, round 2"
},
{
"msg_contents": "One of the things that I HATE about this discussion is that everyone \nwants to put limits on configurability.\n\nI am an old fashion UNIX guy, capability without enforcing policy! \nAdding an ability is different than enforcing a policy. All I any to do \nis add the capability of configuration in a way that most admins would \nbe used to.\n\nIf people want an FHS compatible install, I don't care. I want to enable \nit, but it should not be enforced.\n\n\nKevin Brown wrote:\n\n>Wow, there's been a lot of discussion on this issue!\n>\n>\n>While it won't address some of the issues that have been brought up,\n>there is one very simple thing we can do that will help sysadmins\n>quite a lot: eliminate the postmaster's use of $PGDATA, and force the\n>data directory to be specified on the command line. It's fine if the\n>shell scripts still use $PGDATA, but the postmaster should not.\n>\n>The reason is that at least it'll always be possible for\n>administrators to figure out where the data is by looking at the\n>output of 'ps'.\n>\n>\n>While I'd prefer to also see a GUC variable added to the config file\n>that tells the postmaster where to look for the data, the above will\n>at least simplify the postmaster's code (since the logic for dealing\n>with $PGDATA can be eliminated) while eliminating some of the trouble\n>administrators currently have with it.\n>\n>\n>\n> \n>\n\n\n",
"msg_date": "Sat, 15 Feb 2003 01:13:04 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Location of the configuration files, round 2"
},
{
"msg_contents": "\nmlw <[email protected]> wrote: \n\n> I am an old fashion UNIX guy, capability without enforcing policy! \n> Adding an ability is different than enforcing a policy. \n\nYou can push this too far, though. Every capability that\nyou add still complicates testing and documentation, even if\nvery difficult to code. And ultimately you're going to want\nit to default or auto-configure in a reasonable way, so most\npeople aren't even going to look at the additional\ncapabilities.\n\n> All I any to do is add the capability of configuration in\n> a way that most admins would be used to.\n\nWhich is an important issue of course. So you need some \ndiscussion to establish what the common practices are. \n\n",
"msg_date": "Sat, 15 Feb 2003 08:48:02 -0800",
"msg_from": "\"J. M. Brenner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Location of the configuration files, round 2 "
},
{
"msg_contents": "I have been working on moving some of my software to a more SOAP-compatible \ninterface. As I was doing it, it occurred to me that a generic \nfunction could be written, using PostgreSQL's new function manager that \nallows multiple columns to be returned, that acts as a generic SOAP interface.\n\nAll one would need to do is define what is expected from the SOAP call in \nthe \"CREATE FUNCTION\" statement. The generic SOAP function could \nthen read what is expected and return the XML/SOAP data as a set of \nresults as if it were a subquery.\n\nWhat is needed is an efficient way to find the data types and names from \nthe function's definition. Does anyone know how to do that?\n\nA small program could also parse a WSDL file and write a \"CREATE \nFUNCTION\" script for the XML as well.\n\nOn the flip side, I am also working on a PostgreSQL SOAP interface, \nwhere one does this:\n\nhttp://somehost/postgresql?query=\"select * from table\"\n\nAnd a SOAP-compatible resultset is returned.\n\nOn a more advanced horizon, one should be able to do this:\n\nselect * from localtable, \nmysoap('http://remotehost/postgresql?query=select * from foo') as soap \nwhere soap.field = localtable.field;\n\nIf we can do that, PostgreSQL could fit into almost ANY service \nenvironment. What do you guys think? Anyone want to help out?\n\n",
"msg_date": "Fri, 28 Mar 2003 09:01:08 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "PostgreSQL and SOAP, version 7.4/8.0"
},
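The consuming side of the proposal above ("mysoap(...) as soap") can be sketched without touching the backend. Below is a minimal, hypothetical illustration in Python, not part of any actual patch: given an XML result set of the shape mlw describes, a client-side helper only has to walk the ROW elements. The envelope is canned sample data, not output from a real server.

```python
import xml.etree.ElementTree as ET

# Canned response of the shape proposed for
# http://remotehost/postgresql?query=... (the values are made up).
RESPONSE = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ROWSET columns="2" rows="2">
      <ROW ROWID="0"><trk>5</trk><song>Write My Name In The Groove</song></ROW>
      <ROW ROWID="1"><trk>1</trk><song>Papa Was A Rolling Stone</song></ROW>
    </ROWSET>
  </soap:Body>
</soap:Envelope>"""

def mysoap_rows(xml_text):
    """Yield each ROW as a column-name -> text dict, the shape a
    set-returning function would hand back to the executor."""
    for row in ET.fromstring(xml_text).iter("ROW"):
        yield {col.tag: col.text for col in row}

rows = list(mysoap_rows(RESPONSE))
print(rows[0]["song"])  # -> Write My Name In The Groove
```

A real table function would of course fetch the document over HTTP and coerce the text values to the column types declared in CREATE FUNCTION; this sketch only shows how little parsing the row shape requires.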
{
"msg_contents": "First, a SOAP query should be posted in SOAP message format, not using the\nquery string as you do. Second, I like the idea of calling external SOAP\nservices, but consider creating a language 'soap' you could do with a CREATE\nFUNCTION type thing, e.g.\n\nCREATE FUNCTION \"foo\" (TEXT) RETURNS INTEGER AS\n\t'http://somewhere.com/path/to/.wsdl', 'foo'\n\tLANGUAGE 'soap';\n\n(hmm, it is unclear if this is what you are suggesting or not...)\n\nAlso, I hate SOAP because it is too bloated (have you read the spec(s)?).\nIf you can support xmlrpc instead, you'll save yourself a lot of headaches.\nIf you got SOAP working, though, I'd use it. It's more an implementation\nthing.\n\n\nOn Fri, Mar 28, 2003 at 09:01:08AM -0500, mlw wrote:\n> I have been working on moving some of my software to a more SOAP-compatible \n> interface. As I was doing it, it occurred to me that a generic \n> function could be written, using PostgreSQL's new function manager that \n> allows multiple columns to be returned, that acts as a generic SOAP interface.\n> \n> All one would need to do is define what is expected from the SOAP call in \n> the \"CREATE FUNCTION\" statement. The generic SOAP function could \n> then read what is expected and return the XML/SOAP data as a set of \n> results as if it were a subquery.\n> \n> What is needed is an efficient way to find the data types and names from \n> the function's definition. Does anyone know how to do that?\n> \n> A small program could also parse a WSDL file and write a \"CREATE \n> FUNCTION\" script for the XML as well.\n> \n> On the flip side, I am also working on a PostgreSQL SOAP interface, \n> where one does this:\n> \n> http://somehost/postgresql?query=\"select * from table\"\n> \n> And a SOAP-compatible resultset is returned.\n> \n> On a more advanced horizon, one should be able to do this:\n> \n> select * from localtable, \n> mysoap('http://remotehost/postgresql?query=select * from foo') as soap \n> where soap.field = localtable.field;\n> \n> If we can do that, PostgreSQL could fit into almost ANY service \n> environment. What do you guys think? Anyone want to help out?\n> \n\nI have no time to volunteer for projects, but what the hell...! It's too\ncool. I can't spend much time on it but bounce things off me and I'll\ndo whatever hacking I can squeeze in. What soap implementation would you\nuse for the PostgreSQL plugin? libsoap, last I checked, is a wee bit \nout of date. And not documented.\n\n-Jason\n\n",
"msg_date": "Fri, 28 Mar 2003 12:17:57 -0500",
"msg_from": "\"Jason M. Felice\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, version 7.4/8.0"
},
{
"msg_contents": "Jason wrote:\n> If you can support xmlrpc instead, you'll save yourself a lot of headaches.\n\nXML-RPC has three merits over SOAP:\n\n 1. It's a simple specification, and thus readily implemented.\n\n 2. Microsoft and IBM aren't fighting over control over it, so it's\n not suffering from the \"we keep adding pseudo-standards to it\"\n problem. (Which further complicates the specifications.)\n You can have a /complete/ implementation of XML-RPC, whereas,\n for SOAP, you can hold ghastly long arguments as to what SOAP\n means, anyways.\n\n 3. There's a (perhaps not \"standard\", but definitely widely\n implemented) scheme for bundling multiple XML-RPC requests into\n one message, which improves latency a LOT for small messages.\n\nOf course, CORBA has actually been quite formally standardized, suffers\nfrom many fairly interoperable implementations, and is rather a lot less\nbloated than any of the XML-based schemes. It might be worth trying,\ntoo...\n--\nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://cbbrowne.com/info/soap.html\nI just got skylights put in my place. The people who live above me are\nfurious.\n\n",
"msg_date": "Fri, 28 Mar 2003 13:36:43 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, version 7.4/8.0 "
},
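For a sense of how little machinery point 1 implies, Python's standard library alone can round-trip an XML-RPC call. A small sketch (the method name `add_customer` is a made-up example, not an existing PostgreSQL interface):

```python
import xmlrpc.client

# Encode a call to a hypothetical stored procedure as an XML-RPC methodCall...
payload = xmlrpc.client.dumps(("Ann", 42), methodname="add_customer")

# ...and decode it again, as a server-side dispatcher would.
params, method = xmlrpc.client.loads(payload)
print(method, params)  # -> add_customer ('Ann', 42)
```

The whole wire format fits in a one-page spec, which is the "readily implemented" point above in practice.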
{
"msg_contents": "On Fri, Mar 28, 2003 at 01:36:43PM -0500, [email protected] wrote:\n> Of course, CORBA has actually been quite formally standardized, suffers\n> from many fairly interoperable implementations, and is rather a lot less\n> bloated than any of the XML-based schemes. It might be worth trying,\n> too...\n\nThe ability to use the HTTP transport has its advantages with web services--\nyou can throw something together with a few lines of PHP; you don't have to\nworry about how to activate objects; and I've never been able to figure out how\nto handle transport-layer security and authentication with CORBA (of course,\nthis was all fairly new stuff when I was working with it). All this stuff\ncomes for free with the HTTP transport.\n\nI like CORBA, though, and I'd probably find a CORBA module useful, but it\ndoesn't solve all the same problems.\n\nHrm, I wonder if the overhead of XML-RPC wouldn't be too bad for the new\nPostgreSQL protocol... it probably would be, but it would be entirely useful.\nYou can make XML-RPC calls from mozilla javascript, so you could do some\npretty sweet tweaking to keep your addresses in a pgsql database.\n\nAs an \"additional\" protocol which the postmaster can listen on, it would rule.\nI'm making a habit of putting all the business logic into stored procedures,\nand this would basically publish the business logic in a very useful way.\n\n",
"msg_date": "Fri, 28 Mar 2003 13:52:20 -0500",
"msg_from": "\"Jason M. Felice\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, version 7.4/8.0"
},
{
"msg_contents": "Jason M. Felice wrote:\n\n>First, a SOAP query should be posted in SOAP message format, not using the\n>query string as you do. Second, I like the idea of calling external SOAP\n>services, but consider creating a language 'soap' you could do with a CREATE\n>FUNCTION type thing, e.g.\n>\n>CREATE FUNCTION \"foo\" (TEXT) RETURNS INTEGER AS\n>\t'http://somewhere.com/path/to/.wsdl', 'foo'\n>\tLANGUAGE 'soap';\n>\n>(hmm, it is unclear if this is what you are suggesting or not...)\n>\n>Also, I hate SOAP because it is too bloated (have you read the spec(s)?).\n>If you can support xmlrpc instead, you'll save yourself a lot of headaches.\n>If you got SOAP working, though, I'd use it. It's more an implementation\n>thing.\n>\n\nHere's the thing: yes, I know there are a \"lot\" of alternatives to SOAP, \nall with varying levels of \"being better than SOAP.\" It still stands \nthat a SOAP interface would be useful for people.\n\n>\n>\n>On Fri, Mar 28, 2003 at 09:01:08AM -0500, mlw wrote:\n> \n>\n>>I have been working on moving some of my software to a more SOAP-compatible \n>>interface. As I was doing it, it occurred to me that a generic \n>>function could be written, using PostgreSQL's new function manager that \n>>allows multiple columns to be returned, that acts as a generic SOAP interface.\n>>\n>>All one would need to do is define what is expected from the SOAP call in \n>>the \"CREATE FUNCTION\" statement. The generic SOAP function could \n>>then read what is expected and return the XML/SOAP data as a set of \n>>results as if it were a subquery.\n>>\n>>What is needed is an efficient way to find the data types and names from \n>>the function's definition. Does anyone know how to do that?\n>>\n>>A small program could also parse a WSDL file and write a \"CREATE \n>>FUNCTION\" script for the XML as well.\n>>\n>>On the flip side, I am also working on a PostgreSQL SOAP interface, \n>>where one does this:\n>>\n>>http://somehost/postgresql?query=\"select * from table\"\n>>\n>>And a SOAP-compatible resultset is returned.\n>>\n>>On a more advanced horizon, one should be able to do this:\n>>\n>>select * from localtable, \n>>mysoap('http://remotehost/postgresql?query=select * from foo') as soap \n>>where soap.field = localtable.field;\n>>\n>>If we can do that, PostgreSQL could fit into almost ANY service \n>>environment. What do you guys think? Anyone want to help out?\n>>\n>> \n>>\n>\n>I have no time to volunteer for projects, but what the hell...! It's too\n>cool. I can't spend much time on it but bounce things off me and I'll\n>do whatever hacking I can squeeze in. What soap implementation would you\n>use for the PostgreSQL plugin? libsoap, last I checked, is a wee bit \n>out of date. And not documented.\n>\n\nI was thinking of using SOAP over HTTP as the protocol, and a minimalist \nversion at best. If the people want \"more\" let them add it.\n\nI have an HTTP service class in my open source library. It would be \ntrivial to accept a SQL query formatted as a GET request, and then \nexecute the query and, using libpq, format the result as XML. It should \nbe simple enough to do.\n\n>\n>-Jason\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n> \n>\n",
"msg_date": "Fri, 28 Mar 2003 16:39:34 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, version 7.4/8.0"
},
{
"msg_contents": "On Fri, 2003-03-28 at 14:39, mlw wrote:\n\n> I was thinking of using SOAP over HTTP as the protocol, and a\n> minimalist version at best. If the people want \"more\" let them add it.\n> \n> I have an HTTP service class in my open source library. It would br\n> trivial to accept a SQL query formatted as a GET request, and then\n> execute the query and, using libpq, format the result as XML. It\n> should be simple enough to do. \n\nIt would be easy. I've done something similar (using ODBC to\nget to PostgreSQL) - but using a language none of the rest of\nyou are likely to be interested in (Unicon). Works just fine,\nthough the implementation (deliberately, by personal preference)\navoids accepting arbitrary SQL statements from SOAP clients,\ninstead forcing the clients to use an RPC interface so I can\ndo sanity checking in the Unicon [which I know better than I know\nPostgreSQL...] SOAP servers.\n\nI, too, opted for a 'minimal-SOAP' implementation. A 'real'\nimplementation boggles the mind.\n\n-- \nSteve Wampler <[email protected]>\nNational Solar Observatory\n\n",
"msg_date": "28 Mar 2003 14:47:09 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, version 7.4/8.0"
},
{
"msg_contents": "mlw writes:\n\n> On the flip side, I am also working on a PostgreSQL SOAP interface,\n> where one does this:\n>\n> http://somehost/postgresql?query=\"select * from table\"\n>\n> And a SOAP compatible resultset is returned.\n\nThat looks quite similar to the planned XML functionality. While that\nplan doesn't contain the word \"SOAP\", one could of course create a small\nlayer to convert the output format to any other XML format. If you're\ninterested, please look up the XML discussion of the last few weeks.\n\n-- \nPeter Eisentraut [email protected]\n\n",
"msg_date": "Sat, 29 Mar 2003 12:26:00 +0100 (CET)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and SOAP, version 7.4/8.0"
},
{
"msg_contents": "Given an HTTP-formatted query:\nGET \"http://localhost:8181/pgmuze?query=select+*+from+zsong+limit+2\"\n\nThe output is shown below.\n\nQuestions:\nIs there a way, without specifying a binary cursor, to get the data types \nassociated with columns? Right now I am just using undefined, as the \nODBC version works.\n\nAnyone see any basic improvements needed?\n\n<?xml version = \"1.0\"?>\n<soap:Envelope xmlns:MWSSQL=\"http://www.mohawksoft.com/MWSSQL/envelope\">\n <soap:Header>\n <!-- Fields in set -->\n <Columns count=\"9\">\n <muzenbr>undefined</muzenbr>\n <disc>undefined</disc>\n <trk>undefined</trk>\n <song>undefined</song>\n <artistid>undefined</artistid>\n <acd>undefined</acd>\n <trackid>undefined</trackid>\n <datasrc>undefined</datasrc>\n <extid>undefined</extid>\n </Columns>\n </soap:Header>\n <soap:Body>\n <ROWSET columns=\"9\" rows=\"2\">\n <ROW ROWID=\"0\">\n <muzenbr>424965</muzenbr>\n <disc>1</disc>\n <trk>5</trk>\n <song>Write My Name In The Groove</song>\n <artistid>100021391</artistid>\n <acd>A</acd>\n <trackid>203429573</trackid>\n <datasrc>1</datasrc>\n <extid>203429573</extid>\n </ROW>\n <ROW ROWID=\"1\">\n <muzenbr>177516</muzenbr>\n <disc>1</disc>\n <trk>1</trk>\n <song>Papa Was A Rolling Stone</song>\n <artistid>100000411</artistid>\n <acd>P</acd>\n <trackid>200000000</trackid>\n <datasrc>1</datasrc>\n <extid>200000000</extid>\n </ROW>\n </ROWSET>\n </soap:Body>\n</soap:Envelope>\n\n\nSteve Wampler wrote:\n\n>On Fri, 2003-03-28 at 14:39, mlw wrote:\n>\n> \n>\n>>I was thinking of using SOAP over HTTP as the protocol, and a\n>>minimalist version at best. If the people want \"more\" let them add it.\n>>\n>>I have an HTTP service class in my open source library. It would be\n>>trivial to accept a SQL query formatted as a GET request, and then\n>>execute the query and, using libpq, format the result as XML. It\n>>should be simple enough to do. \n>> \n>>\n>\n>It would be easy. I've done something similar (using ODBC to\n>get to PostgreSQL) - but using a language none of the rest of\n>you are likely to be interested in (Unicon). Works just fine,\n>though the implementation (deliberately, by personal preference)\n>avoids accepting arbitrary SQL statements from SOAP clients,\n>instead forcing the clients to use an RPC interface so I can\n>do sanity checking in the Unicon [which I know better than I know\n>PostgreSQL...] SOAP servers.\n>\n>I, too, opted for a 'minimal-SOAP' implementation. A 'real'\n>implementation boggles the mind.\n>\n> \n>\n",
"msg_date": "Sun, 30 Mar 2003 19:43:12 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
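The serialization step behind the sample envelope above (libpq rows in, XML out) is mechanical. Here is a rough Python sketch of the generation side, shaped after that sample; the helper name and the use of ElementTree are illustrative, not the actual C implementation being discussed:

```python
import xml.etree.ElementTree as ET

def rows_to_envelope(colnames, rows):
    """Render a result set as a soap:Envelope/ROWSET document like the sample."""
    env = ET.Element("soap:Envelope",
                     {"xmlns:soap": "http://schemas.xmlsoap.org/soap/envelope/"})
    body = ET.SubElement(env, "soap:Body")
    rowset = ET.SubElement(body, "ROWSET",
                           {"columns": str(len(colnames)), "rows": str(len(rows))})
    for i, row in enumerate(rows):
        r = ET.SubElement(rowset, "ROW", {"ROWID": str(i)})
        for name, value in zip(colnames, row):
            # NULLs become empty elements; everything else is stringified.
            ET.SubElement(r, name).text = "" if value is None else str(value)
    return ET.tostring(env, encoding="unicode")

xml_text = rows_to_envelope(["trk", "song"],
                            [(5, "Write My Name In The Groove")])
```

Note that without a binary cursor, libpq hands everything back as text anyway, which is why the column-type question in the message above matters only for the header metadata, not for the body.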
{
"msg_contents": "mlw wrote on Mon, 31.03.2003 at 03:43:\n> Given an HTTP-formatted query:\n> GET \"http://localhost:8181/pgmuze?query=select+*+from+zsong+limit+2\"\n> \n> The output is shown below.\n> \n> Questions:\n> Is there a way, without specifying a binary cursor, to get the data types \n> associated with columns? Right now I am just using undefined, as the \n> ODBC version works.\n> \n> Anyone see any basic improvements needed?\n> \n> <?xml version = \"1.0\"?>\n> <soap:Envelope xmlns:MWSSQL=\"http://www.mohawksoft.com/MWSSQL/envelope\">\n> <soap:Header>\n> <!-- Fields in set -->\n> <Columns count=\"9\">\n\nThe SOAP 1.1 spec specifies (p4.2) the following about the SOAP Header:\n\nThe encoding rules for header entries are as follows: \n\n 1. A header entry is identified by its fully qualified element\n name, which consists of the namespace URI and the local name.\n All immediate child elements of the SOAP Header element MUST be\n namespace-qualified. \n\nI'm not sure that the SOAP Header is the right place for query header info,\nas the header is meant for:\n\n\n SOAP provides a flexible mechanism for extending a message in a \n decentralized and modular way without prior knowledge between the \n communicating parties. Typical examples of extensions that can be \n implemented as header entries are authentication, transaction \n management, payment etc.\n\nSo the definition of the structure should probably be inside SOAP:Body.\n\n---------------\nHannu\n\n",
"msg_date": "31 Mar 2003 15:43:18 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "Actually, as far as I am aware, the header is for metadata, i.e. it is the\nplace to describe the data being returned. The description of the fields\nisn't the actual data retrieved, so it doesn't belong in the body; it\nshould go into the header.\n\n\n\n> mlw wrote on Mon, 31.03.2003 at 03:43:\n>> Given an HTTP-formatted query:\n>> GET \"http://localhost:8181/pgmuze?query=select+*+from+zsong+limit+2\"\n>> \n>> The output is shown below.\n>> \n>> Questions:\n>> Is there a way, without specifying a binary cursor, to get the data\n>> types associated with columns? Right now I am just using undefined,\n>> as the ODBC version works.\n>> \n>> Anyone see any basic improvements needed?\n>> \n>> <?xml version = \"1.0\"?>\n>> <soap:Envelope\n>> xmlns:MWSSQL=\"http://www.mohawksoft.com/MWSSQL/envelope\">\n>> <soap:Header>\n>> <!-- Fields in set -->\n>> <Columns count=\"9\">\n> \n> The SOAP 1.1 spec specifies (p4.2) the following about the SOAP Header:\n> \n> The encoding rules for header entries are as follows: \n> \n> 1. A header entry is identified by its fully qualified element\n> name, which consists of the namespace URI and the local name.\n> All immediate child elements of the SOAP Header element MUST be\n> namespace-qualified. \n> \n> I'm not sure that the SOAP Header is the right place for query header info,\n> as the header is meant for:\n> \n> \n> SOAP provides a flexible mechanism for extending a message in a \n> decentralized and modular way without prior knowledge between the \n> communicating parties. Typical examples of extensions that can be \n> implemented as header entries are authentication, transaction \n> management, payment etc.\n> \n> So the definition of the structure should probably be inside SOAP:Body.\n> \n> ---------------\n> Hannu\n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 5: Have you checked our\n> extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n",
"msg_date": "Mon, 31 Mar 2003 11:52:03 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "[email protected] wrote on Mon, 31.03.2003 at 19:52:\n> Actually, as far as I am aware, the header is for metadata, i.e. it is the\n> place to describe the data being returned.\n\nDid you read the SOAP spec?\n\n> The description of the fields\n> isn't the actual data retrieved, so it doesn't belong in the body; it\n> should go into the header.\n\nThat is logical, but it is not what the spec says.\n\nAlso, the spec requires immediate child elements of SOAP:Header to have\nfull namespace URIs.\n\nAnd another question - why do you have the namespace MWSSQL defined but\nnever used?\n\n-------------\nHannu\n\n",
"msg_date": "31 Mar 2003 19:56:54 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "\n\nHannu Krosing wrote:\n\n>[email protected] wrote on Mon, 31.03.2003 at 19:52:\n> \n>\n>>Actually, as far as I am aware, the header is for metadata, i.e. it is the\n>>place to describe the data being returned.\n>> \n>>\n>\n>Did you read the SOAP spec?\n>\nYes.\n\n>\n> \n>\n>>The description of the fields\n>>isn't the actual data retrieved, so it doesn't belong in the body; it\n>>should go into the header.\n>> \n>>\n>\n>That is logical, but it is not what the spec says.\n>\nThis is exactly what the spec calls for. The spec, at least 1.1, says \nvery little about what should not be in the header. For an XML request, \nit should carry. It is very particular about SOAP header attributes, but \nheader content is very flexible.\n\n>\n>Also, the spec requires immediate child elements of SOAP:Header to have\n>full namespace URIs.\n>\nYup, that was a bug.\n\n>\n>And another question - why do you have the namespace MWSSQL defined but\n>never used?\n>\nThat was part of the same bug as above; it now outputs this:\n\n<?xml version = \"1.0\"?>\n<mwssql:Envelope xmlns:mwssql=\"http://www.mohawksoft.com/mwssql/envelope\">\n <mwssql:Header>\n <exec:sql>update cgrpairs set ratio=0 where srcitem=100098670</exec:sql>\n <exec:affected>2657</exec:affected>\n <qry:sql>select * from ztitles limit 2</qry:sql>\n <qry:ROWSET>\n <qry:ROW columns=\"28\">\n <t:acd>undefined</t:acd>\n <t:muzenbr>undefined</t:muzenbr>\n <t:cat2>undefined</t:cat2>\n <t:cat3>undefined</t:cat3>\n <t:cat4>undefined</t:cat4>\n <t:performer>undefined</t:performer>\n <t:performer2>undefined</t:performer2>\n <t:title>undefined</t:title>\n <t:artist1>undefined</t:artist1>\n <t:engineer>undefined</t:engineer>\n <t:producer>undefined</t:producer>\n <t:labelname>undefined</t:labelname>\n <t:catalog>undefined</t:catalog>\n <t:distribut>undefined</t:distribut>\n <t:released>undefined</t:released>\n <t:origrel>undefined</t:origrel>\n <t:nbrdiscs>undefined</t:nbrdiscs>\n <t:spar>undefined</t:spar>\n <t:minutes>undefined</t:minutes>\n <t:seconds>undefined</t:seconds>\n <t:monostereo>undefined</t:monostereo>\n <t:studiolive>undefined</t:studiolive>\n <t:available>undefined</t:available>\n <t:previews>undefined</t:previews>\n <t:pnotes>undefined</t:pnotes>\n <t:artistid>undefined</t:artistid>\n <t:datasrc>undefined</t:datasrc>\n <t:extid>undefined</t:extid>\n </qry:ROW>\n </qry:ROWSET>\n </mwssql:Header>\n <mwssql:Body>\n <ROWSET columns=\"28\" rows=\"2\">\n <ROW ROWID=\"0\">\n <acd>P</acd>\n <muzenbr>68291</muzenbr>\n <cat2>Performer</cat2>\n <cat3>Jazz Instrument</cat3>\n <cat4>Guitar</cat4>\n <performer>Steve Khan</performer>\n <performer2>Khan, Steve</performer2>\n <title>Evidence</title>\n <artist1></artist1>\n <engineer></engineer>\n <producer></producer>\n <labelname>Novus</labelname>\n <catalog>3074</catalog>\n <distribut>BMG</distribut>\n <released>02/13/1990</released>\n <origrel>n/a</origrel>\n <nbrdiscs>1</nbrdiscs>\n <spar>n/a</spar>\n <minutes></minutes>\n <seconds></seconds>\n <monostereo>Stereo</monostereo>\n <studiolive>Studio</studiolive>\n <available>N</available>\n <previews></previews>\n <pnotes></pnotes>\n <artistid>100025343</artistid>\n <datasrc>1</datasrc>\n <extid>68291</extid>\n </ROW>\n <ROW ROWID=\"1\">\n <acd>P</acd>\n <muzenbr>67655</muzenbr>\n <cat2>Collection</cat2>\n <cat3>Jazz Instrument</cat3>\n <cat4></cat4>\n <performer>Various Artists</performer>\n <performer2>Various Artists</performer2>\n <title>Metropolitan Opera House Jam Session</title>\n <artist1></artist1>\n <engineer></engineer>\n <producer></producer>\n <labelname>Jazz Anthology</labelname>\n <catalog>550212</catalog>\n <distribut>n/a</distribut>\n <released>1992</released>\n <origrel>n/a</origrel>\n <nbrdiscs>1</nbrdiscs>\n <spar>n/a</spar>\n <minutes></minutes>\n <seconds></seconds>\n <monostereo>Mono</monostereo>\n <studiolive>Live</studiolive>\n <available>N</available>\n <previews></previews>\n <pnotes></pnotes>\n <artistid>100050450</artistid>\n <datasrc>1</datasrc>\n <extid>67655</extid>\n </ROW>\n </ROWSET>\n </mwssql:Body>\n</mwssql:Envelope>\n\n",
"msg_date": "Tue, 01 Apr 2003 07:29:51 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "Out of curiosity, what is the purpose of putting the qry:ROWSET\ndescription into the message at all (header or not)? Isn't it a\nperfectly valid SOAP message (and just as parseable) with that removed?\n\nI freely admit to not being a soap expert, but similar SOAP\nmessages I generate from queries seem to work fine without this\nmetadata. Is having it required by some part of the SOAP spec\nI don't understand?\n\nThanks!\n\nOn Tue, 2003-04-01 at 05:29, mlw wrote:\n\n> That was part of the same bug as above; it now outputs this:\n> \n> <?xml version = \"1.0\"?>\n> <mwssql:Envelope xmlns:mwssql=\"http://www.mohawksoft.com/mwssql/envelope\">\n> <mwssql:Header>\n> <exec:sql>update cgrpairs set ratio=0 where srcitem=100098670</exec:sql>\n> <exec:affected>2657</exec:affected>\n> <qry:sql>select * from ztitles limit 2</qry:sql>\n> <qry:ROWSET>\n> <qry:ROW columns=\"28\">\n> <t:acd>undefined</t:acd>\n> <t:muzenbr>undefined</t:muzenbr>\n> <t:cat2>undefined</t:cat2>\n> <t:cat3>undefined</t:cat3>\n> <t:cat4>undefined</t:cat4>\n> <t:performer>undefined</t:performer>\n> <t:performer2>undefined</t:performer2>\n> <t:title>undefined</t:title>\n> <t:artist1>undefined</t:artist1>\n> <t:engineer>undefined</t:engineer>\n> <t:producer>undefined</t:producer>\n> <t:labelname>undefined</t:labelname>\n> <t:catalog>undefined</t:catalog>\n> <t:distribut>undefined</t:distribut>\n> <t:released>undefined</t:released>\n> <t:origrel>undefined</t:origrel>\n> <t:nbrdiscs>undefined</t:nbrdiscs>\n> <t:spar>undefined</t:spar>\n> <t:minutes>undefined</t:minutes>\n> <t:seconds>undefined</t:seconds>\n> <t:monostereo>undefined</t:monostereo>\n> <t:studiolive>undefined</t:studiolive>\n> <t:available>undefined</t:available>\n> <t:previews>undefined</t:previews>\n> <t:pnotes>undefined</t:pnotes>\n> <t:artistid>undefined</t:artistid>\n> <t:datasrc>undefined</t:datasrc>\n> <t:extid>undefined</t:extid>\n> </qry:ROW>\n> </qry:ROWSET>\n> </mwssql:Header>\n> <mwssql:Body>\n> <ROWSET columns=\"28\" rows=\"2\">\n> <ROW ROWID=\"0\">\n> <acd>P</acd>\n> <muzenbr>68291</muzenbr>\n> <cat2>Performer</cat2>\n> <cat3>Jazz Instrument</cat3>\n> <cat4>Guitar</cat4>\n> <performer>Steve Khan</performer>\n> <performer2>Khan, Steve</performer2>\n> <title>Evidence</title>\n> <artist1></artist1>\n> <engineer></engineer>\n> <producer></producer>\n> <labelname>Novus</labelname>\n> <catalog>3074</catalog>\n> <distribut>BMG</distribut>\n> <released>02/13/1990</released>\n> <origrel>n/a</origrel>\n> <nbrdiscs>1</nbrdiscs>\n> <spar>n/a</spar>\n> <minutes></minutes>\n> <seconds></seconds>\n> <monostereo>Stereo</monostereo>\n> <studiolive>Studio</studiolive>\n> <available>N</available>\n> <previews></previews>\n> <pnotes></pnotes>\n> <artistid>100025343</artistid>\n> <datasrc>1</datasrc>\n> <extid>68291</extid>\n> </ROW>\n> <ROW ROWID=\"1\">\n> <acd>P</acd>\n> <muzenbr>67655</muzenbr>\n> <cat2>Collection</cat2>\n> <cat3>Jazz Instrument</cat3>\n> <cat4></cat4>\n> <performer>Various Artists</performer>\n> <performer2>Various Artists</performer2>\n> <title>Metropolitan Opera House Jam Session</title>\n> <artist1></artist1>\n> <engineer></engineer>\n> <producer></producer>\n> <labelname>Jazz Anthology</labelname>\n> <catalog>550212</catalog>\n> <distribut>n/a</distribut>\n> <released>1992</released>\n> <origrel>n/a</origrel>\n> <nbrdiscs>1</nbrdiscs>\n> <spar>n/a</spar>\n> <minutes></minutes>\n> <seconds></seconds>\n> <monostereo>Mono</monostereo>\n> <studiolive>Live</studiolive>\n> <available>N</available>\n> <previews></previews>\n> <pnotes></pnotes>\n> <artistid>100050450</artistid>\n> <datasrc>1</datasrc>\n> <extid>67655</extid>\n> </ROW>\n> </ROWSET>\n> </mwssql:Body>\n> </mwssql:Envelope>\n-- \nSteve Wampler <[email protected]>\nNational Solar Observatory\n\n",
"msg_date": "01 Apr 2003 10:40:24 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "\nI can certainly imagine cases for processing where having the field names\nand other metadata up front (maybe add type info, nullable, etc. instead of\njust \"undefined\") would be useful.\n\nHere's another question:\n\nIf the intention is to use field names as (local) tag names, how will you\nhandle the case where the field name isn't a valid XML name? Of course, one\ncould do some sort of mapping (replace illegal chars with \"_\", for example)\nbut then you can't be 100% certain that you haven't generated a collision,\nI should think.\n\nandrew\n\n----- Original Message -----\nFrom: \"Steve Wampler\" <[email protected]>\nTo: \"mlw\" <[email protected]>\nCc: \"Hannu Krosing\" <[email protected]>; <[email protected]>;\n\"Postgres-hackers\" <[email protected]>\nSent: Tuesday, April 01, 2003 12:40 PM\nSubject: Re: [HACKERS] PostgreSQL and SOAP, suggestions?\n\n\n> Out of curiosity, what is the purpose of putting the qry:ROWSET\n> description into the message at all (header or not)? Isn't it a\n> perfectly valid SOAP message (and just as parseable) with that removed?\n>\n> I freely admit to not being a soap expert, but similar SOAP\n> messages I generate from queries seem to work fine without this\n> metadata. Is having it required by some part of the SOAP spec\n> I don't understand?\n>\n> Thanks!\n>\n> On Tue, 2003-04-01 at 05:29, mlw wrote:\n>\n> > That was part of the same bug as above; it now outputs this:\n> >\n> > <?xml version = \"1.0\"?>\n> > <mwssql:Envelope\nxmlns:mwssql=\"http://www.mohawksoft.com/mwssql/envelope\">\n> > <mwssql:Header>\n> > <exec:sql>update cgrpairs set ratio=0 where\nsrcitem=100098670</exec:sql>\n> > <exec:affected>2657</exec:affected>\n> > <qry:sql>select * from ztitles limit 2</qry:sql>\n> > <qry:ROWSET>\n> > <qry:ROW columns=\"28\">\n> > <t:acd>undefined</t:acd>\n> > <t:muzenbr>undefined</t:muzenbr>\n> > <t:cat2>undefined</t:cat2>\n> > <t:cat3>undefined</t:cat3>\n> > <t:cat4>undefined</t:cat4>\n> > <t:performer>undefined</t:performer>\n> > <t:performer2>undefined</t:performer2>\n> > <t:title>undefined</t:title>\n> > <t:artist1>undefined</t:artist1>\n> > <t:engineer>undefined</t:engineer>\n> > <t:producer>undefined</t:producer>\n> > <t:labelname>undefined</t:labelname>\n> > <t:catalog>undefined</t:catalog>\n> > <t:distribut>undefined</t:distribut>\n> > <t:released>undefined</t:released>\n> > <t:origrel>undefined</t:origrel>\n> > <t:nbrdiscs>undefined</t:nbrdiscs>\n> > <t:spar>undefined</t:spar>\n> > <t:minutes>undefined</t:minutes>\n> > <t:seconds>undefined</t:seconds>\n> > <t:monostereo>undefined</t:monostereo>\n> > <t:studiolive>undefined</t:studiolive>\n> > <t:available>undefined</t:available>\n> > <t:previews>undefined</t:previews>\n> > <t:pnotes>undefined</t:pnotes>\n> > <t:artistid>undefined</t:artistid>\n> > <t:datasrc>undefined</t:datasrc>\n> > <t:extid>undefined</t:extid>\n> > </qry:ROW>\n> > </qry:ROWSET>\n> > </mwssql:Header>\n> > <mwssql:Body>\n> > <ROWSET columns=\"28\" rows=\"2\">\n> > <ROW ROWID=\"0\">\n> > <acd>P</acd>\n> > <muzenbr>68291</muzenbr>\n> > <cat2>Performer</cat2>\n> > <cat3>Jazz Instrument</cat3>\n> > <cat4>Guitar</cat4>\n> > <performer>Steve Khan</performer>\n> > <performer2>Khan, Steve</performer2>\n> > <title>Evidence</title>\n> > <artist1></artist1>\n> > <engineer></engineer>\n> > <producer></producer>\n> > <labelname>Novus</labelname>\n> > <catalog>3074</catalog>\n> > <distribut>BMG</distribut>\n> > <released>02/13/1990</released>\n> > <origrel>n/a</origrel>\n> > <nbrdiscs>1</nbrdiscs>\n> > <spar>n/a</spar>\n> > <minutes></minutes>\n> > <seconds></seconds>\n> > <monostereo>Stereo</monostereo>\n> > <studiolive>Studio</studiolive>\n> > <available>N</available>\n> > <previews></previews>\n> > <pnotes></pnotes>\n> > <artistid>100025343</artistid>\n> > <datasrc>1</datasrc>\n> > <extid>68291</extid>\n> > </ROW>\n> > <ROW ROWID=\"1\">\n> > <acd>P</acd>\n> > <muzenbr>67655</muzenbr>\n> > <cat2>Collection</cat2>\n> > <cat3>Jazz Instrument</cat3>\n> > <cat4></cat4>\n> > <performer>Various Artists</performer>\n> > <performer2>Various Artists</performer2>\n> > <title>Metropolitan Opera House Jam Session</title>\n> > <artist1></artist1>\n> > <engineer></engineer>\n> > <producer></producer>\n> > <labelname>Jazz Anthology</labelname>\n> > <catalog>550212</catalog>\n> > <distribut>n/a</distribut>\n> > <released>1992</released>\n> > <origrel>n/a</origrel>\n> > <nbrdiscs>1</nbrdiscs>\n> > <spar>n/a</spar>\n> > <minutes></minutes>\n> > <seconds></seconds>\n> > <monostereo>Mono</monostereo>\n> > <studiolive>Live</studiolive>\n> > <available>N</available>\n> > <previews></previews>\n> > <pnotes></pnotes>\n> > <artistid>100050450</artistid>\n> > <datasrc>1</datasrc>\n> > <extid>67655</extid>\n> > </ROW>\n> > </ROWSET>\n> > </mwssql:Body>\n> > </mwssql:Envelope>\n> --\n\n",
"msg_date": "Tue, 1 Apr 2003 13:08:45 -0500",
"msg_from": "\"Andrew Dunstan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "> \n> I can certainly imagine cases for processing where having the field\n> names and other metadata up front (maybe add type info, nullable, etc\n> instead of just \"undefined\") would be useful.\n> \n> here's another question:\n> \n> If the intention is to use field names as (local) tag names, how will\n> you handle the case where the field name isn't a valid XML name? Of\n> course, one could do some sort of mapping (replace illegal chars with\n> \"_\", for example) but then you can't be 100% certain that you haven't\n> generated a collision, I should think.\n> \n\nI'm not sure, I have to really research how to handle that case. I have been\nsimply doing a %hex translation on characters that do not conform to XML,\nthat may actually be \"good enough(tm).\" \n\nAs for the field names being undefined, if you can find a way to get the\nfield types without having to specify a binary cursor I'd like that.\nAdmittedly, I have not looked very hard. This is a small part of a bigger\nproject. The SQL/XML provider currently supports PG and ODBC.\n\nThe web services project, which contains the SQL/XML provider, has a bunch\nof other services.\n\n",
"msg_date": "Tue, 1 Apr 2003 13:55:26 -0500 (EST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "mlw writes:\n\n> Given a HTTP formatted query:\n> GET \"http://localhost:8181/pgmuze?query=select+*+from+zsong+limit+2\"\n>\n> The output is entered below.\n\nThat looks a lot like the SQL/XML-style output plus a SOAP header. Below\nis the output that I get from the SQL/XML function that I wrote. A simple\nXSLT stylesheet should do the trick for you.\n\nBtw., I also have an XSLT stylesheet that can make an HTML table out of\nthis output and I have a table function that can generate a virtual table\nfrom this output.\n\n\n=> select table2xml('select * from products');\n\n<?xml version='1.0'?>\n<table\n xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'\n xsi:noNamespaceSchemaLocation='#'> <!-- XXX this needs to be fixed -->\n\n<xsd:schema\n xmlns:xsd='http://www.w3.org/2001/XMLSchema'\n xmlns:sqlxml='http://www.iso-standards.org/mra/9075/2001/12/sqlxml'>\n\n <xsd:import\n namespace='http://www.iso-standards.org/mra/9075/2001/12/sqlxml'\n schemaLocation='http://www.iso-standards.org/mra/9075/2001/12/sqlxml.xsd' />\n\n<xsd:simpleType name='peter.pg_catalog.text'>\n <xsd:restriction base='xsd:string'>\n <xsd:maxLength value='MLIT' /> <!-- XXX needs actual value -->\n </xsd:restriction>\n</xsd:simpleType>\n\n<xsd:simpleType name='INTEGER'>\n <xsd:restriction base='xsd:integer'>\n <xsd:maxInclusive value='2147483647'/>\n <xsd:minInclusive value='-2147483648'/>\n </xsd:restriction>\n</xsd:simpleType>\n\n<xsd:simpleType name='NUMERIC'>\n <xsd:restriction base='xsd:decimal'>\n <xsd:totalDigits value='PLIT'/> <!-- XXX needs actual values -->\n <xsd:fractionDigits value='SLIT'/>\n </xsd:restriction>\n</xsd:simpleType>\n\n<xsd:complexType name='RowType'>\n <xsd:sequence>\n <xsd:element name='name' type='peter.pg_catalog.text' nillable='true'></xsd:element>\n <xsd:element name='category' type='INTEGER' nillable='true'></xsd:element>\n <xsd:element name='price' type='NUMERIC' nillable='true'></xsd:element>\n 
</xsd:sequence>\n</xsd:complexType>\n\n<xsd:complexType name='TableType'>\n <xsd:sequence>\n <xsd:element name='row' type='RowType' minOccurs='0' maxOccurs='unbounded' />\n </xsd:sequence>\n</xsd:complexType>\n\n<xsd:element name='table' type='TableType' />\n\n</xsd:schema>\n\n <row>\n <name>screwdriver</name>\n <category>3</category>\n <price>7.99</price>\n </row>\n\n <row>\n <name>drill</name>\n <category>9</category>\n <price>12.49</price>\n </row>\n\n</table>\n\n-- \nPeter Eisentraut [email protected]\n\n",
"msg_date": "Tue, 1 Apr 2003 23:41:36 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "That function looks great, but what happens if you need to return 1 \nmillion records? Wouldn't you exhaust all the memory in the server? Or \ncan you stream it somehow?\n\nI have an actual libpq program which performs a query against a server, \nand will stream out the XML, so the number of records has very little \naffect on efficiency. I think the table2xml function is great for 99% of \nall the queries, but for those huge resultsets, I think it may be \nproblematic.\n\nWhat do you think?\n\nBTW, I routinely have queries that return millions of rows.\n\n\nPeter Eisentraut wrote:\n\n>mlw writes:\n>\n> \n>\n>>Given a HTTP formatted query:\n>>GET \"http://localhost:8181/pgmuze?query=select+*+from+zsong+limit+2\"\n>>\n>>The output is entered below.\n>> \n>>\n>\n>That looks a lot like the SQL/XML-style output plus a SOAP header. Below\n>is the output that I get from the SQL/XML function that I wrote. A simple\n>XSLT stylesheet should do the trick for you.\n>\n>Btw., I also have an XSLT stylesheet that can make an HTML table out of\n>this output and I have a table function that can generate a virtual table\n>from this output.\n>\n>\n>=> select table2xml('select * from products');\n>\n><?xml version='1.0'?>\n><table\n> xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'\n> xsi:noNamespaceSchemaLocation='#'> <!-- XXX this needs to be fixed -->\n>\n><xsd:schema\n> xmlns:xsd='http://www.w3.org/2001/XMLSchema'\n> xmlns:sqlxml='http://www.iso-standards.org/mra/9075/2001/12/sqlxml'>\n>\n> <xsd:import\n> namespace='http://www.iso-standards.org/mra/9075/2001/12/sqlxml'\n> schemaLocation='http://www.iso-standards.org/mra/9075/2001/12/sqlxml.xsd' />\n>\n><xsd:simpleType name='peter.pg_catalog.text'>\n> <xsd:restriction base='xsd:string'>\n> <xsd:maxLength value='MLIT' /> <!-- XXX needs actual value -->\n> </xsd:restriction>\n></xsd:simpleType>\n>\n><xsd:simpleType name='INTEGER'>\n> <xsd:restriction base='xsd:integer'>\n> <xsd:maxInclusive 
value='2147483647'/>\n> <xsd:minInclusive value='-2147483648'/>\n> </xsd:restriction>\n></xsd:simpleType>\n>\n><xsd:simpleType name='NUMERIC'>\n> <xsd:restriction base='xsd:decimal'>\n> <xsd:totalDigits value='PLIT'/> <!-- XXX needs actual values -->\n> <xsd:fractionDigits value='SLIT'/>\n> </xsd:restriction>\n></xsd:simpleType>\n>\n><xsd:complexType name='RowType'>\n> <xsd:sequence>\n> <xsd:element name='name' type='peter.pg_catalog.text' nillable='true'></xsd:element>\n> <xsd:element name='category' type='INTEGER' nillable='true'></xsd:element>\n> <xsd:element name='price' type='NUMERIC' nillable='true'></xsd:element>\n> </xsd:sequence>\n></xsd:complexType>\n>\n><xsd:complexType name='TableType'>\n> <xsd:sequence>\n> <xsd:element name='row' type='RowType' minOccurs='0' maxOccurs='unbounded' />\n> </xsd:sequence>\n></xsd:complexType>\n>\n><xsd:element name='table' type='TableType' />\n>\n></xsd:schema>\n>\n> <row>\n> <name>screwdriver</name>\n> <category>3</category>\n> <price>7.99</price>\n> </row>\n>\n> <row>\n> <name>drill</name>\n> <category>9</category>\n> <price>12.49</price>\n> </row>\n>\n></table>\n>\n> \n>\n\n",
"msg_date": "Tue, 01 Apr 2003 19:47:55 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "mlw kirjutas T, 01.04.2003 kell 15:29:\n> Hannu Krosing wrote:\n> \n> >[email protected] kirjutas E, 31.03.2003 kell 19:52:\n> > \n> >\n> >>Actually, as far as I am aware, the header is for metadata, i.e. it is the\n> >>place to describe the data being returned.\n> >> \n> >>\n> >\n> >Did you read the SOAP spec ?\n> >\n> yes\n\n???\n\n\nWhat you have come up with _is_not_ a SOAP v1.1 message at all. It does\nuse some elements with similar names but from different namespace.\n\nthe SOAP Envelope, Header and Body elements must be from namespace\nhttp://schemas.xmlsoap.org/soap/envelope/\n\nPer section 3 paragraph 2 of SOAP spec a conforming SOAP processor MUST\ndiscard a message that has incorrect namespace.\n\n> <?xml version = \"1.0\"?>\n> <mwssql:Envelope xmlns:mwssql=\"http://www.mohawksoft.com/mwssql/envelope\">\n> <mwssql:Header>\n\nThe <SOAP-ENV:Header> \"is a generic mechanism for adding features to a\nSOAP message in a decentralized manner without prior agreement between\nthe communicating parties. SOAP defines a few attributes that can be\nused to indicate who should deal with a feature and whether it is\noptional or mandatory (see section 4.2)\".\n\nThe <SOAP-ENV:Body> \"is a container for mandatory information intended\nfor the ultimate recipient of the message (see section 4.3). SOAP\ndefines one element for the body, which is the Fault element used for\nreporting errors.\"\n\n\nThe Header element is encoded as the first immediate child element of\nthe SOAP Envelope XML element. All immediate child elements of the\nHeader element are called header entries.\n\nThe encoding rules for header entries are as follows: \n\n 1. 
A header entry is identified by its fully qualified element\n name, which consists of the namespace URI and the local name.\n All immediate child elements of the SOAP Header element MUST be\n namespace-qualified.\n\n...\n\nAn example is a header with an element identifier of \"Transaction\", a\n\"mustUnderstand\" value of \"1\", and a value of 5. This would be encoded\nas follows:\n\n<SOAP-ENV:Header>\n <t:Transaction\n xmlns:t=\"some-URI\" SOAP-ENV:mustUnderstand=\"1\">\n 5\n </t:Transaction>\n</SOAP-ENV:Header>\n\n> <exec:sql>update cgrpairs set ratio=0 where srcitem=100098670</exec:sql>\n> <exec:affected>2657</exec:affected>\n> <qry:sql>select * from ztitles limit 2</qry:sql>\n> <qry:ROWSET>\n> <qry:ROW columns=\"28\">\n\nwhere are namespaces exec:, qry: and t: defined ?\n\n----------------\nHannu\n\n",
"msg_date": "02 Apr 2003 13:29:09 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "\n\nHannu Krosing wrote:\n\n>mlw kirjutas T, 01.04.2003 kell 15:29:\n> \n>\n>>Hannu Krosing wrote:\n>>\n>> \n>>\n>>>[email protected] kirjutas E, 31.03.2003 kell 19:52:\n>>> \n>>>\n>>> \n>>>\n>>>>Actually, as far as I am aware, the header is for metadata, i.e. it is the\n>>>>place to describe the data being returned.\n>>>> \n>>>>\n>>>> \n>>>>\n>>>Did you read the SOAP spec ?\n>>>\n>>> \n>>>\n>>yes\n>> \n>>\n>\n>???\n>\n>\n>What you have come up with _is_not_ a SOAP v1.1 message at all. It does\n>use some elements with similar names but from different namespace.\n>\n>the SOAP Envelope, Header and Body elemants must be from namespace\n>http://schemas.xmlsoap.org/soap/envelope/\n>\n[snip]\nHmm, I read \"SHOULD\" and \"MAY\" in the spec, assuming that it was not \n\"MUST\" are you saying it is invalid if I do not use the SOAP URIs for \nthe name spaces? If so, no big deal, I'll change them.\n\nAs for defining the namespaces, yea that's easy enough, just tack on an \nattribute.\n\nI still don't see where putting the field definitions in the soap header \nis an invalid use of that space.\n\n",
"msg_date": "Wed, 02 Apr 2003 07:56:49 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "Andrew Dunstan writes:\n\n> If the intention is to use field names as (local) tag names, how will you\n> handle the case where the field name isn't a valid XML name? Of course, one\n> could do some sort of mapping (replace illegal chars with \"_\", for example)\n> but then you can't be 100% certain that you haven't generated a collision,\n> I should think.\n\nThe SQL/XML draft specifies a reversible escape mechanism. Basically,\nwhen mapping an SQL identifier to an XML name you replace problematic\ncharacters with an escape sequence based on the Unicode code point, like\n_x2A3B_.\n\n-- \nPeter Eisentraut [email protected]\n\n",
"msg_date": "Wed, 2 Apr 2003 23:40:57 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "mlw writes:\n\n> That function looks great, but what happens if you need to return 1\n> million records?\n\nThe same thing that happens with any set-returning function: memory\nexhaustion.\n\n> I have an actual libpq program which performs a query against a server,\n> and will stream out the XML, so the number of records has very little\n> affect on efficiency. I think the table2xml function is great for 99% of\n> all the queries, but for those huge resultsets, I think it may be\n> problematic.\n>\n> What do you think?\n\nClearly, my approach is not sufficient if you need to handle big result\nsets. But perhaps a compromise based on cursors could be designed so that\nlarge parts of the format can be managed centrally. Such as:\n\nDECLARE foo CURSOR FOR SELECT ... ;\n\n-- gives you the XML Schema for the result set\nSELECT xmlschema_from_cursor(foo);\n\n-- gives you one row (<row>...</row>)\nSELECT xmldata_from_cursor(foo);\n\n-- \nPeter Eisentraut [email protected]\n\n",
"msg_date": "Wed, 2 Apr 2003 23:42:55 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "mlw kirjutas K, 02.04.2003 kell 15:56:\n> Hannu Krosing wrote:\n> \n> >What you have come up with _is_not_ a SOAP v1.1 message at all. It does\n> >use some elements with similar names but from different namespace.\n> >\n> >the SOAP Envelope, Header and Body elemants must be from namespace\n> >http://schemas.xmlsoap.org/soap/envelope/\n> >\n> [snip]\n> Hmm, I read \"SHOULD\" and \"MAY\" in the spec, assuming that it was not \n> \"MUST\" are you saying it is invalid if I do not use the SOAP URIs for \n> the name spaces? If so, no big deal, I'll change them.\n\nAFAICS you can _leave_out_ the namespace, but not put in another,\nnonconforming namespace.\n\n> As for defining the namespaces, yea that's easy enough, just tack on an \n> attribute.\n> \n> I still don't see where putting the field definitions in the soap header \n> is an invalid use of that space.\n\nIt is not strictly nonconforming, just not the intended use of\n\"transparently adding\" new info:\n\n 4.2 SOAP Header\n\n SOAP provides a flexible mechanism for extending a message in a\n decentralized and modular way without prior knowledge between the\n communicating parties. Typical examples of extensions that can be\n implemented as header entries are authentication, transaction\n management, payment etc.\n\nI.e. the intended use of *SOAP* Header is *not* defining the structure\nof the message but is rather something similar to e-mail (rfc822)\nHeaders.\n\nThe XML way of defining a message is using a DTD, XML-schema, Relax NG\nschema or somesuch, either embedded (forbidden for DTD's in SOAP) or\nreferenced.\n\nAlso for me the following:\n\n The Header element is encoded as the first immediate child element of\n the SOAP Envelope XML element. All immediate child elements of the\n Header element are called header entries.\n\n The encoding rules for header entries are as follows: \n\n 1. 
A header entry is identified by its fully qualified element\n name, which consists of the namespace URI and the local name.\n All immediate child elements of the SOAP Header element MUST be\n namespace-qualified.\n\ndescribes an element with a full embedded URI, not just\nnamespace-qualified tagname, but I may be reading it wrong and the\nnamespace could be defined at outer level. But defining namespace at the\nouter level is counterintuitive for cases where the header element is to\nbe processed and removed by some \"SOAP intermediary\".\n\nAlso this seems to support *not* using Header for essential structure\ndefinitions:\n\n 4.3.1 Relationship between SOAP Header and Body\n\n While the Header and Body are defined as independent elements, they\n are in fact related. The relationship between a body entry and a\n header entry is as follows: A body entry is semantically equivalent to\n a header entry intended for the default actor and with a SOAP\n mustUnderstand attribute with a value of \"1\". The default actor is\n indicated by not using the actor attribute (see section 4.2.2).\n\nThis suggests that putting the structure definition as the 1st Body element\nand data as second would be equivalent to putting structure in Header\n\n-----------------\nHannu\n\n",
"msg_date": "03 Apr 2003 00:51:56 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "\n\nHannu Krosing wrote:\n\n>mlw kirjutas K, 02.04.2003 kell 15:56:\n> \n>\n>>Hannu Krosing wrote:\n>>\n>> \n>>\n>>>What you have come up with _is_not_ a SOAP v1.1 message at all. It does\n>>>use some elements with similar names but from different namespace.\n>>>\n>>>the SOAP Envelope, Header and Body elemants must be from namespace\n>>>http://schemas.xmlsoap.org/soap/envelope/\n>>>\n>>> \n>>>\n>>[snip]\n>>Hmm, I read \"SHOULD\" and \"MAY\" in the spec, assuming that it was not \n>>\"MUST\" are you saying it is invalid if I do not use the SOAP URIs for \n>>the name spaces? If so, no big deal, I'll change them.\n>> \n>>\n>\n>AFAICS you can _leave_out_ the namespace, but not put in another,\n>nonconforming namespace.\n>\n[snip]\n\nI think you are interpreting the spec a bit too restrictively. The \nsyntax is fairly rigid, but the spec has a great degree of flexibility. \nI agree that, syntactically, it must work through a parser, but there is \nlots of room to be flexible.\n\n",
"msg_date": "Wed, 02 Apr 2003 17:32:10 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "mlw wrote:\n> I think you are interpreting the spec a bit too restrictively. The \n> syntax is fairly rigid, but the spec has a great degree of flexibility. \n> I agree that, syntactically, it must work through a parser, but there is \n> lots of room to be flexible.\n\nThis is /exactly/ the standard problem with SOAP.\n\nThere is enough \"flexibility\" that there are differing approaches\nassociated, generally speaking, with \"IBM versus Microsoft\" whereby it's\neasy to generate SOAP requests that work fine with one that break with\nthe other.\n\nFor a pretty simple example of a longstanding bug that has never been\nfixed, see:\n<http://sourceforge.net/tracker/index.php?func=detail&aid=559324&group_id=26590&atid=387667>\n\nThe precis:\n\nThe SOAP implementation used by the XMethods folks to publish stock\nprices is buggy, rejecting perfectly legitimate messages submitted using\nZSI (a Python SOAP implementation).\n\nThe bug isn't with ZSI; it is quite clearly with the server, apparently\nimplemented in Java using one of the EJB frameworks. 
\n\nIn practice, what happens is that since that service is fairly popular,\nparticularly for sample applications, the implementors of SOAP libraries\nwind up coding around the bugs.\n\nThe problem is that it gets difficult to tell the difference between\nbugs and variations in interpretations of the standards.\n\nIf the specs were more strictly defined, it would be a lot easier to use\nSOAP, because you wouldn't be left puzzling over whether the\ninteroperability problems you're having are:\n\n a) Problems with the client;\n b) Problems with the server;\n c) Problems with interpretation of specs;\n d) ...\n\nThe vast degree to which messages can get rewritten behind your back\nadds to the fun.\n\nOf course, it's only fun if you *enjoy* having interoperability\nproblems...\n--\nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www.ntlug.org/~cbbrowne/soap.html\nHe who laughs last thinks slowest. \n\n",
"msg_date": "Wed, 02 Apr 2003 18:01:05 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions? "
},
{
"msg_contents": "[email protected] kirjutas N, 03.04.2003 kell 02:01:\n> mlw wrote:\n> > I think you are interpreting the spec a bit too restrictively. The \n> > syntax is fairly rigid, but the spec has a great degree of flexibility. \n> > I agree that, syntactically, it must work through a parser, but there is \n> > lots of room to be flexible.\n> \n> This is /exactly/ the standard problem with SOAP.\n> \n> There is enough \"flexibility\" that there are differing approaches\n> associated, generally speaking, with \"IBM versus Microsoft\" whereby it's\n> easy to generate SOAP requests that work fine with one that break with\n> the other.\n\nDo you know of some:\n\na) standard conformance tests\n\nb) recommended best practices for being compatible with all mainstream\nimplementations (I'd guess a good approach would be to generate very\nstrictly conformant code but accept all that you can, even if against\npedantic reading of the spec)\n\n-----------------\nHannu\n\n",
"msg_date": "03 Apr 2003 11:20:32 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "\n\nHannu Krosing wrote:\n\n>[email protected] kirjutas N, 03.04.2003 kell 02:01:\n> \n>\n>>mlw wrote:\n>> \n>>\n>>>I think you are interpreting the spec a bit too restrictively. The \n>>>syntax is fairly rigid, but the spec has a great degree of flexibility. \n>>>I agree that, syntactically, it must work through a parser, but there is \n>>>lots of room to be flexible.\n>>> \n>>>\n>>This is /exactly/ the standard problem with SOAP.\n>>\n>>There is enough \"flexibility\" that there are differing approaches\n>>associated, generally speaking, with \"IBM versus Microsoft\" whereby it's\n>>easy to generate SOAP requests that work fine with one that break with\n>>the other.\n>> \n>>\n>\n>Do you know of some:\n>\n>a) standard conformance tests\n>\nOff the top of my head, no, but I bet it is a google away. If you know \nany good links, I'd love to know. I have been working off the W3C spec.\n>\n>b) recommended best practices for being compatible with all mainstream\n>implementations (I'd guess a good approach would be to generate very\n>strictly conformant code but accept all that you can, even if against\n>pedantic reading of the spec)\n>\nI have been planning to \"test\" the whole thing with a few .NET \napplications. I am currently using expat to parse the output to ensure \nthat it all works correctly.\n>\n> \n>\n\n> \n>\n\n",
"msg_date": "Thu, 03 Apr 2003 07:38:53 -0500",
"msg_from": "mlw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
},
{
"msg_contents": "> [email protected] kirjutas N, 03.04.2003 kell 02:01:\n> > mlw wrote:\n> > > I think you are interpreting the spec a bit too restrictively. The \n> > > syntax is fairly rigid, but the spec has a great degree of flexibility. \n> > > I agree that, syntactically, it must work through a parser, but there is \n> > > lots of room to be flexible.\n> > \n> > This is /exactly/ the standard problem with SOAP.\n> > \n> > There is enough \"flexibility\" that there are differing approaches\n> > associated, generally speaking, with \"IBM versus Microsoft\" whereby it's\n> > easy to generate SOAP requests that work fine with one that break with\n> > the other.\n> \n> Do you know of some:\n> \n> a) standard conformance tests\n> \n> b) recommended best practices for being compatible with all mainstream\n> implementations (I'd guess a good approach would be to generate very\n> strictly conformant code but accept all that you can, even if against\n> pedantic reading of the spec)\n\nThe problem with a) is that SOAP, unlike CORBA, doesn't have the notion of \nstandardized language bindings. That makes it tough to be sure that your \nimplementation is \"standard\" in any meaningful way in the first place.\n\nThe \"best practices\" have involved scripting up interoperability tests where \nthey construct sets of functions with varying data types and verify that \"my \nclient implementation can talk to your server implementation,\" and vice-versa.\n\nAnd when you run into problems, you chip off bits of code until the block of \nstone starts looking like an elephant.\n\nIn order to have confidence of interoperability, you have to test your client \nlibrary against all the servers you care about, or vice-versa. 
That's \ndefinitely not the same thing as being a \"conformance\" test.\n\nTrying to be \"really strict\" doesn't seem to be a viable strategy, as far as I \ncan see...\n--\n(concatenate 'string \"cbbrowne\" \"@ntlug.org\")\nhttp://www3.sympatico.ca/cbbrowne/wp.html\n\"The cost of living has just gone up another dollar a quart.\" \n-- W.C. Fields\n\n",
"msg_date": "Thu, 03 Apr 2003 07:44:50 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions? "
},
{
"msg_contents": "> I have been planning to \"test\" the whole thing with a few .NET \n> applications. I am currently using expat to parse the output to ensure \n> that it all works correctly.\n\nThat, unfortunately, probably implies that your implementation is almost \ntotally non-interoperable.\n\nYou should put out of your mind the notion of being \"correct.\" Being \n\"correct\" is pretty irrelevant if 80% of the requests that come from a VB.NET \nclient fail because Microsoft implemented part of their request differently \nthan what you interpreted as \"correct.\"\n\nThe point is that \"correctness\" isn't the thing you need to aim for; what you \nshould aim for is interoperability with the important client implementations.\n\nSOAP::Lite, .NET, probably some Java ones, C++ ones, and such.\n\nNobody does \"correctness\" testing; they do interoperability tests where they \ntry to submit requests to Apache AXIS, .NET, WebSphere, and the lot of other \nimportant implementations. If you're testing a server (as is the case here), \nthen the point is to run tests with a bunch of clients.\n\nHead to the SOAP::Lite and Axis projects; you'll see matrices describing this \nsort of thing...\n--\n(reverse (concatenate 'string \"ac.notelrac.teneerf@\" \"454aa\"))\nhttp://www.ntlug.org/~cbbrowne/advocacy.html\n\"Fear leads to anger. Anger leads to hate. Hate leads to using Windows\nNT for mission-critical applications.\" --- What Yoda *meant* to say\n\n",
"msg_date": "Thu, 03 Apr 2003 07:54:13 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions? "
},
{
"msg_contents": "On Thu, Apr 03, 2003 at 07:54:13AM -0500, [email protected] wrote:\n> > I have been planning to \"test\" the whole thing with a few .NET \n> > applications. I am currently using expat to parse the output to ensure \n> > that it all works correcty.\n> \n> That, unfortunately, probably implies that your implementation is almost \n> totally non-interoperable.\n> \n> You should put out of your mind the notion of being \"correct.\" Being \n> \"correct\" is pretty irrelevant if 80% of the requests that come from a VB.NET \n> client fail because Microsoft implemented part of their request differently \n> than what you interpreted as \"correct.\"\n> \n> The point is that \"correctness\" isn't the thing you need to aim for; what you \n> should aim for is interoperability with the important client implementations.\n> \n> SOAP::Lite, .NET, probably some Java ones, C++ ones, and such.\n> \n> Nobody does \"correctness\" testing; they do interoperability tests where they \n> try to submit requests to Apache AXIS, .NET, WebSphere, and the lot of other \n> important implementations. If you're testing a server (as is the case here), \n> then the point is to run tests with a bunch of clients.\n> \n> Head to the SOAP::Lite and Axis projects; you'll see matrices describing this \n> sort of thing...\n\nHmmm. Can I reiterate my support of XML-RPC here? <g>\n\n-Jay 'Eraserhead' Felice\n\n> --\n> (reverse (concatenate 'string \"ac.notelrac.teneerf@\" \"454aa\"))\n> http://www.ntlug.org/~cbbrowne/advocacy.html\n> \"Fear leads to anger. Anger leads to hate. Hate leads to using Windows\n> NT for mission-critical applications.\" --- What Yoda *meant* to say\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n",
"msg_date": "Thu, 3 Apr 2003 10:19:41 -0500",
"msg_from": "\"Jason M. Felice\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and SOAP, suggestions?"
}
] |
[
{
"msg_contents": "Okay, here's the ugly and dirty truth. Before you complain, please keep in\nmind that I didn't write the standard, although I tend to give it the\nbenefit of the doubt after ignoring the truly silly stuff. :)\n\n\n* How to create inheritance hierarchies\n\n** CREATE TABLE syntax\n\nIgnoring the parts that do not pertain to inheritance, the CREATE\nTABLE syntax in SQL3 looks like this:\n\nCREATE TABLE <table name> { <table element list> | <subtable clause> }\n\nwhere <table element list> is the usual `(colname type, colname type,\n...)' variety and <subtable clause> looks like this:\n\nUNDER <supertable>, <supertable>, ...\n\nor optionally like\n\nUNDER <supertable> WITH (<old colname> AS <new colname>, <old\ncolname2> AS <new colname2>, ...), ...\n\nBut notice how the syntax { <table element list> | <subtable clause> }\nwould force you to either create new columns in the table or inherit\nall columns from one or more supertables. That evidently cannot be\nright. Reinforcing this belief is that the standard at several places\ntalks about \"inherited columns\" vs \"originally defined columns\", which\nwould of course not be possible under this scheme. Let's therefore\nassume that the syntax should really be more something like this:\n\nCREATE TABLE <table name> <table element list> <subtable clause>\n| CREATE TABLE <table name> <table element list>\n| CREATE TABLE <table name> <subtable clause>\n\nThis is really not any different from the current INHERITS syntax;\nperhaps in a fit of purity someone is willing to complement it\naccordingly. One key element here that is new is the column renaming\noption.\n\n** Column naming and ordering\n\nThe ordering of the columns has apparently been a problem even for the\nauthors of the standard. The rules for CREATE TABLE merely say that\nthe columns are ordered according to the order in which the supertables\nare listed in the UNDER clause. 
It does not say anything about\nwhether the originally defined columns come before or after the\ninherited ones.\n\nThis does not make the issue of adding columns easier. The rules say:\n\n\"If [the table being altered] is a supertable, then an <add column\ndefinition>, without further Access Rule checking, is effectively\nperformed for each of its subtables, thereby adding the column as an\ninherited column in these subtables.\"\n\nThis would make some sort of sense if the originally defined columns\ncame before the inherited ones, but the way it stands it doesn't help\na lot.\n\nThe resolution of names is done as follows. First, a list of all\ninherited columns is created. If one column is \"replicated\", that is,\nmore than one supertable inherited it from the same super-supertable,\nall but the first occurrence is dropped. Then the column renaming\nclauses are applied. The resulting list must not contain a duplicate\ncolumn name.\n\nThis scheme is quite different from the current PostgreSQL\nimplementation, which merges columns of equal name and datatype. It\nhas been mentioned before during OO discussions that the association\nof inherited columns by name alone should probably be dropped and\nreplaced by pg_attribute.oid references. This would seem like a good\nthing to do because it would allow us to detect replicated columns\nreliably and give a chance to the column renaming option.\n\n** OID, Identity, et al.\n\n\"An object identifier OID is a value generated when an object is\ncreated, to give that object an immutable identity. 
It is unique in\nthe known universe of objects that are instances of abstract data\ntypes, and is conceptually separate from the value, or state, of the\ninstance.\"\n\nSince the way I understand it a table definition also defines an\nabstract data type in some way or other, and rows are instantiations\nof that data type, this definition of OID matches ours pretty good.\n\n\"The OID value is materialized as a character string with an\nimplementation-defined length and character set SQL_TEXT.\"\n\n... or maybe not. :-)\n\nWhat exactly IDENTITY is is still a bit unclear to me but it is\ndefinitely not the proposed identification of the table a row came\nfrom. The implicit column IDENTITY contains a `row identifier'.\n\n\"The value of a row identifier for a given base table row is equal to\nitself and is not equal to the value of a row identifier for any other\nbase table row within the database.\"\n\n(Note: `base table' is the opposite of `derived table' (a view), and\nis unrelated to whether a table is a sub- or supertable.)\n\nThere is no literal for row identifiers and they do not interact with\nother data types. The only manifestation is through the API, where\nthey are mapped to unique row \"handles\" in a language specific\nfashion.\n\nNot every table has row identifiers, you must ask for them when\ncreating the table. This all relates to inheritance because\n\n\"A row identifier is implicitly defined for [the table to be created].\nFor every table ST named in the <subtable clause>, a row identifier is\nimplicitly defined for ST.\"\n\nRow identifiers were ANSI-only at the time of the draft, ISO simply\nsays that any supertable must have a primary key. 
I can't quite put my\nfinger on either of these requirements, though.\n\nIn any case I'd advise against overloading IDENTITY in the manner that\nwas proposed.\n\n* Cloning\n\nOne thing that often comes up in `various ways of looking at\ninheritance' threads is the idea of cloning the definition of a given\ntable as part of a newly created table. There's a syntax for that as\nwell in SQL3:\n\nCREATE TABLE name (\n colname type,\n colname type,\n LIKE other_table,\n colname type,\n ...\n);\n\nThis effectively pastes whatever you wrote when you created the\n\"other_table\" at the place of the LIKE. After the create table is\ndone, the new and the \"old\" table are completely unrelated. Of course\nif you want to clone the data as well in one shot you could use CREATE\nTABLE AS. In any case, this has really very little to do with the\ninheritance we're discussing here, other than that it `feels' the\nsame.\n\n* Operating on data\n\n** SELECT\n\nTo make a long story short: subtables are automatically examined,\nunless ONLY is specified.\n\nTo make the story longer, phrasing and viewing it like this is really\nquite incorrect. Instead:\n\n\"Any row of a subtable must correspond to one and only one row of each\ndirect supertable. Any row of a supertable corresponds to at most one\nrow of a direct subtable.\"\n\nThe key word I see here is `correspondence', namely that a given row\nis always a member of both the sub- and the supertable (probably\nhaving more columns in the subtable obviously) and doesn't belong to\neither of them more then to the other. In other words, the row is\nconceptually shared. Then what the ONLY table reference really does is\nselect all rows of a supertable that do not have any corresponding row\nin any subtable. 
(This is the wording the standard chose.)\nImplementation-wise this would be the easier thing to do (which is\nprobably why it's done this way now), but conceptually it is really\nthe unnatural situation because it's similar to an `except' query.\n\n** Insert, Update, Delete\n\nThese commands have no special notion of inheritance. Since all rows\nare effectively shared between sub- and supertables you cannot update\nthem in one of them \"only\" without some copy-on-write concept. Of\ncourse some rows in a supertable may have no corresponding rows in any\nsubtable, but that's nothing the row knows about or cares about. In\nsophisticated inheritance hierarchies, rows and parts of rows may be\nshared in very involved ways, so I foresee some issues with Update\nOnly.\n\n(This sounds stranger than it really is: There is no requirement that\nthe `corresponding row' is physically stored at the supertable, it is\nonly required that it effectively exists there as well, which is\nsatisfied if SELECT retrieves it by default. In some sense the\napparently `favoured' storage model here is that all the inherited\nattributes and their values are stored in the supertable heap and only\nthe originally defined attributes of subtables are in the subtable\nheap. This method favours the SELECT semantics on supertables because\nit doesn't have to worry about subclasses at all. But queries on\nsubtables effectively become joins.)\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 20 May 2000 15:37:52 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "On Sat, 20 May 2000, Peter Eisentraut wrote:\n> \n> UNDER <supertable>, <supertable>, ...\n\nThe standard is very confusing, I am probably wrong, but I didn't see the\nsyntax for allowing more than one <supertable clause> after the UNDER keyword.\n\nDid Oracle approve ANSI-ISO-9075? I didn't notice them listed anywhere. 9075\nis the SQL3 standard right? (or else I'm reading the wrong stuff!!) :)\n\n -- \nRobert B. Easter\[email protected]\n",
"msg_date": "Sat, 20 May 2000 18:25:57 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\nExcellent research Peter....\n\n\n> Let's therefore\n> assume that the syntax should really be more something like this:\n> \n> CREATE TABLE <table name> <table element list> <subtable clause>\n> | CREATE TABLE <table name> <table element list>\n> | CREATE TABLE <table name> <subtable clause>\n\nI agree.\n\n> ** Column naming and ordering\n<snip> \n> This would make some sort of sense if the originally defined columns\n> came before the inherited ones, but the way it stands it doesn't help\n> a lot.\n\nYes, super-class first seems reasonable.\n\n> The resolution of names is done as follows. First, a list of all\n> inherited columns is created. If one column is \"replicated\", that is,\n> more than one supertable inherited it from the same super-supertable,\n> all but the first occurrence is dropped. Then the column renaming\n> clauses are applied. The resulting list must not contain a duplicate\n> column name.\n> \n> This scheme is quite different from the current PostgreSQL\n> implementation, which merges columns of equal name and datatype.\n\nIn the absence of renaming, I don't think this amounts to anything\ndifferent to PostgreSQL.\n\n> It\n> has been mentioned before during OO discussions that the association\n> of inherited columns by name alone should probably be dropped and\n> replaced by pg_attribute.oid references. This would seem like a good\n> thing to do because it would allow us to detect replicated columns\n> reliably and give a chance to the column renaming option.\n\nThat would mean I guess sharing pg_attributes between different classes\nin a hierarchy, which I assume doesn't happen now. But sounds good.\n\n> \"The OID value is materialized as a character string with an\n> implementation-defined length and character set SQL_TEXT.\"\n> \n> ... or maybe not. :-)\n\nHow does character set affect this? 
It is different to data type isn't\nit?\n\n> What exactly IDENTITY is is still a bit unclear to me but it is\n> definitely not the proposed identification of the table a row came\n> from. \n\nHow do you know?\n\n> * Cloning\n> CREATE TABLE name (\n> colname type,\n> colname type,\n> LIKE other_table,\n> colname type,\n> ...\n> );\n\nHmm. Fairly useless feature IMO.\n\n> ** SELECT\n> \n> To make a long story short: subtables are automatically examined,\n> unless ONLY is specified.\n> \n> To make the story longer, phrasing and viewing it like this is really\n> quite incorrect. Instead:\n> \n> \"Any row of a subtable must correspond to one and only one row of each\n> direct supertable. Any row of a supertable corresponds to at most one\n> row of a direct subtable.\"\n\nThey've chosen this model to describe how things work. Unless there is\nsome subtlety I'm missing the model can equally be described by the\n\"subtables are automatically examined\" model. Maybe they thought it was\neasier to describe in those terms (I don't, I think it's lame), but it\nshouldn't affect implementation. In particular I think implementing it\nthis way would be about as silly a thing I've ever heard. pgsql has it\nright.\n\n> The key word I see here is `correspondence', namely that a given row\n> is always a member of both the sub- and the supertable (probably\n> having more columns in the subtable obviously) and doesn't belong to\n> either of them more then to the other. In other words, the row is\n> conceptually shared. Then what the ONLY table reference really does is\n> select all rows of a supertable that do not have any corresponding row\n> in any subtable. (This is the wording the standard chose.)\n> Implementation-wise this would be the easier thing to do (which is\n> probably why it's done this way now), \n\nUmm no, it is not the way it is done now (in pgsql). 
When ONLY is\nspecified (or rather when \"*\" is not specified in current pgsql syntax),\nit just queries the base class table. It doesn't check \"rows of a super\ntable that do not have any corresponding row in any subtable\". The\nsubtable just doesn't come into it.\n\nIf it were implemented that way, then a complex inheritance hierarchy\ncould result in a join across a dozen tables. Avoiding that is well\nworth having to examine several tables in a query. (Most of the time\nanyway). Put another way, a UNION is much cheaper than a join.\n\n> ** Insert, Update, Delete\n> \n> These commands have no special notion of inheritance. Since all rows\n> are effectively shared between sub- and supertables you cannot update\n> them in one of them \"only\" without some copy-on-write concept.\n\nONLY in the case of update and insert would refer to any row which\ndoes not correspond to any row in a sub-table. Boy I hate talking in\nterms of this model, because it's really lame.\n\nI suspect even in this lame model, it doesn't imply copy-on-write. It\njust means delete cascades to sub-class tables, delete only doesn't need\nto and update ignores sub-classes and update only updates only when it\ndoesn't correspond to sub-class tuples.\n\nBack to the real world however and it means delete ONLY doesn't go off\nsearching subclasses and update ONLY doesn't go off searching\nsub-classes.\n\nOf course insert isn't affected for the same reason C++ constructors are\nnot polymorphic.\n\n> Of\n> course some rows in a supertable may have no corresponding rows in any\n> subtable, but that's nothing the row knows about or cares about. In\n> sophisticated inheritance hierarchies, rows and parts of rows may be\n> shared in very involved ways, so I foresee some issues with Update\n> Only.\n\nWhich is why it is totally insane to implement it that way. 
The way\npostgres implements it now is very simple and works.\n\n> (This sounds stranger than it really is: There is no requirement that\n> the `corresponding row' is physically stored at the supertable, it is\n> only required that it effectively exists there as well, which is\n> satisfied if SELECT retrieves it by default.\n> In some sense the\n> apparently `favoured' storage model here is that all the inherited\n> attributes and their values are stored in the supertable heap and only\n> the originally defined attributes of subtables are in the subtable\n> heap. This method favours the SELECT semantics on supertables because\n> it doesn't have to worry about subclasses at all. But queries on\n> subtables effectively become joins.)\n\nBoy. I think we should look to how other people have implemented object\nmodels rather than how SQL3 describes the concept. This sounds like a\nnightmare.\n",
"msg_date": "Sun, 21 May 2000 11:28:13 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "On Sat, 20 May 2000, Chris wrote:\n> Boy. I think we should look to how other people have implemented object\n> models rather than how SQL3 describes the concept. This sounds like a\n> nightmare.\n\nSQL3 does not appear to really have an object model. Rather, it appears to be\na hierarchial model on top of the relational model. Section 4.16.2 (in 9075-2)\n\"Referenceable tables, subtables, and supertables\" talks about \"a leaf table\",\nif that clarifies anything.\n\nHere is a quote (9075-2 4.16):\n\n Let T(a) be a maximal supertable and T be a subtable of T(a). The\n set of all subtables of T(a) (which includes T(a) itself) is called\n the subtable family of T or (equivalently) of T(a). Every subtable\n family has exactly one maximal supertable.\n\n A leaf table is a table that does not have any proper subtables.\n\n\nDefinitions (my interpretations anyway):\n* Every table is a subtable and supertable of itself.\n\n* A proper subtable (of a supertable) is a table that was CREATEd with an UNDER\nclause that references its supertable - i.e, a proper subtable is just a\nsubtable that is not the supertable itself.\n\n* A proper supertable (of a subtable) is a table that was specified in an\nUNDER clause during creation of a subtable.\n\n* A maximal supertable is a table that is not a subtable of any other table.\n\n\nSo, it says that every subtable family (or member of) has exactly one maximal\n(root) supertable. This seems to make clear that multiple inheritance is not\nallowed. The picture of this hierarchy is inverted trees with the roots\nat maximal supertables with subtables branching down, EXTENDing the\nsupertable.\n\nAnother quote (9075-2 4.16):\n\n Users must have the UNDER privilege on a table before they can use\n the table in a subtable definition. A table can have more than one\n proper subtable. Similarly, a table can have more than one proper\n supertable.\n\n\nOk, it can have more than one (proper) supertable. 
This means that a chain\nof inheritance is allowed: maximal supertable -> subtable1 -> (sub)subtable2\netc, where (sub)subtable2 has two supertables: maximal supertable and subtable1.\n\nOnly one table can be specified in the UNDER clause, which prevents the\nfollowing possibility:\n\n(1)\tsubtable_a UNDER maximal_supertable\n(2)\tsubtable_b UNDER maximal_supertable\n(3)\tsubtable_abc UNDER subtable_a, subtable_b\n\n(3) is not allowed, but if it were, then subtable_abc would still have had only one\nmaximal supertable. If allowed, it would have inherited maximal supertable\ntwice.\n\nAnother quote (9075-2 4.16):\n\n The secondary effects of table updating operations on T on proper\n supertables and subtables of T are as follows:\n\n - When row R is deleted from T, for every table ST that is a\n proper supertable or proper subtable of T, the corresponding\n superrow or subrow SR of R in ST is deleted from ST.\n\n - When row R is replaced in T, for every table ST that is a proper\n supertable or a proper subtable of T the corresponding superrow\n or subrow SR of R in ST is replaced in ST.\n\n - When row R is inserted into T, for every proper supertable ST of\n T the corresponding superrow SR of R is inserted into ST.\n\n\nThese effects describe a sharing of properties (columns) among the super and\nsubtables. A row in a supertable may be part of a single row in 0 or 1 of its\nsubtables (if I got it right) - a 1:1 relationship if any. The subtable and\nsupertable are linked together in the tree hierarchy and are not independent\nafter creation. The subtable extends additional attributes onto the supertable.\n\nSumming this up a little now, SQL3's UNDER clause appears to allow an EXTENDS\ntype of inheritance, which is like a hierarchical (tree) model. It does not have\na general-purpose object-oriented capability. It does not provide for the\nCLONE and ASSIMILATE types of inheritance that I described in an earlier message\nto this list. 
As other messages have stated, UNDER is not too different than\nwhat INHERITS currently does. Actually, INHERITS allows multiple inheritance\ntoo, so it does more right now (I guess).\n\nSince INHERITS, as it is implemented now, is like SQL3's UNDER, maybe it should\nNOT allow multiple inheritance and should strive to become UNDER if SQL3 is a\ngood idea.\n\nIf the other object-oriented methods, like CLONES and ASSIMILATES (or whatever\nyou want to call them), is ever wanted in PostgreSQL, then looks like some other\nstandard(s) will have to be drawn from. I have not looked at the ODMG 3.0\n(standard) yet. But maybe it has the missing capabilities. Is ODMG 3.0 an\ninternational standard? I'd like to just download it and read it, but looks\nlike you have to buy it for $39.95.\n\nI hope my comments are useful. :)\n\n-- \nRobert B. Easter\[email protected]\n",
"msg_date": "Sun, 21 May 2000 01:57:27 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "On Sun, 21 May 2000, Chris wrote:\n> \"Robert B. Easter\" wrote:\n> \n> > SQL3 does not appear to really have an object model. Rather, it \n> > appears to be a hierarchial model on top of the relational model. \n> \n> It seems like it amounts to the same thing to me. A bit like me saying\n> \"A circle is the set of points equi-distant from a point\", and someone\n> else arguing \"No, a circle is the graph of points represented by the\n> formula x^2 + y^2 = n\". At the end of the day they amount to the same\n> thing I think.\n\nI guess the major difference is that the hierarchial-model does not support\nmultiple inheritance. Again it is basically a tree going from one parent\nbranch to many children branches where the children basically ARE the parent\ntable, just adding some more column leaves so to speak (adding nodes to a\ntree, its still all one tree). Object-oriented allows both one parent to yield\nmany children and many parents to combine to yield a single child. The\ninstances are not dependent on each other, just the declarations are so that\nyou can delete a parent instance and it has no effect on a child since they are\nnot part of a data tree together. A tree generally never allows two branches to\nmerge into a single branch. Something like that. OO lets you do more\nwithout the instances having to be part of a dependent data tree.\n\nI was thinking that maybe this hierarchial model over relational-model in SQL3\n(as I see it) was designed that way to allow easier transitions of legacy\nhierarchy databases to the new SQL3 relational systems.\n\nI'm no OO expert, and again I may have this ALL wrong! But, I don't see the\nOO features of C++ being comparable to OO in databases all the time. 
C++\ngenerally uses only the CLONES type of inheritance, is procedural, and\nallows a derived class to be passed anywhere the parent might normally be\npassed.\n\nWhat follows is my attempt to compare C++ OO and database OO: In C++, a\nfunction programmed to take a parent class as input, is only programmed to\nuse/access the attributes that parent has. If you pass a derived class, the\nfunction still only uses the attributes that the parent has too. I think it is\nabusing C++ if a function that takes a parent arg is also aware of derived\nclasses in advance and does a test to see what is being passed. The idea of C++\ninheritance was that you could make parent, and functions that use parent, then\nlater someday derive a child from parent that you never thought you'd need. \nThen, the functions that work on parent still do their thing even on the child. \nA function that is programmed for a parent class and that has advance knowledge\nof some derived class is not good OOP. Passing different row types to the\nclient from one select, forces that client to be like the C++ function\nprogrammed in advance to deal with some derived class too. The SQL declaration\n\"SELECT * FROM parent\" is like the C++ function declaration \"void\nuseparent(parent *p)\". Both only know about parent objects in the definition\n(what uses the data obtained). The definitions of what to do with the data\nlies inside the client that issues the SQL declaration, and within the C++\nfunction definition, respectively. Both should only be expected to understand\nhow to process what they declare. Sending back unpredictable numbers of and\ntypes of columns might break the procedural/definition part that is outside\nSQL's declarative domain which is to precisely declare what data to get. I\nthink that single-type rows should still be returned from selects. It's the\nrelational way too, and the database still is a relational database. 
Returning\nthe additional child columns just seems to be a waste of processing and\nbandwidth when selecting parent. C++ will just send a pointer, so there is no\npenalty, but in the database, there is a speed penalty for sending those other\ncolumns that the parent doesn't have.\n\nAnother, more difficult point about why not to send variable row types, has to\ndo with the ISA relationship in inheritance. Child ISA parent. Parent is not\nnecessarily a child. A child can be used anywhere a parent can be used. A\nparent cannot be used anywhere a child can. By sending the differing row\ntypes, the procedure that processes the rows might end up expecting child rows\nmore than parent rows, even though you are using SELECT * FROM parent. The\ndifferent rows are processed differently. If programmers become accustomed\nto obtaining the child-type rows by selecting parent, they might eventually\nmistake a parent to be ISA child when one is received. Say that mostly\nchild-type rows are returned by SELECT * FROM parent, only an occasional\nparent-type actually comes through. A procedure might sample the input (or a\nhuman might) and decide that the procedure should be different since it looks\nlike only child-type rows are returning. Then, when a parent-type appears, it\nis processed wrong. Maybe this argument is weak. Comments? :-)\n\nRobert B. Easter\n",
"msg_date": "Sun, 21 May 2000 05:27:08 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "On Sun, 21 May 2000, Chris Bitmead wrote:\n> \"Robert B. Easter\" wrote:\n> \n> > I guess the major difference is that the hierarchial-model does not \n> > support multiple inheritance. \n> \n> I don't agree. From SQL3...\n> \n> \"To avoid name clashes, a subtype can rename selected components of the\n> representation inherited from its direct supertypes.\"\n> \n> and if that doesn't clinch it...\n> \n> \"Let the term replicated column mean a column appearing in more than one\n> direct supertable of T that is inherited by at least one of those direct\n> supertables from the same column of a single higher-level supertable.\"\n> \n> That sounds like multiple repeated inheritance to me.\n> \n\nWhat is the date on the copy of the SQL/Foundation you are reading? My copy is\ndated September 23, 1999 ISO/IEC 9075-2 SQL3_ISO. I tried searching for the\nquotes above and could not find them. Do I have the correct version?\n\n\n-- \nRobert B. Easter\[email protected]\n",
"msg_date": "Sun, 21 May 2000 10:11:31 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Chris wrote:\n> \n> \n> > What exactly IDENTITY is is still a bit unclear to me but it is\n> > definitely not the proposed identification of the table a row came\n> > from.\n> \n> How do you know?\n> \n> > * Cloning\n> > CREATE TABLE name (\n> > colname type,\n> > colname type,\n> > LIKE other_table,\n> > colname type,\n> > ...\n> > );\n> \n> Hmm. Fairly useless feature IMO.\n\nThe main use would be for those users who are using INHERITS with\ncurrent\nPostgreSQL and need to port from it.\n\n---------\nHannu\n",
"msg_date": "Sun, 21 May 2000 21:42:03 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> \"Robert B. Easter\" wrote:\n> \n> > I guess the major difference is that the hierarchial-model does not\n> > support multiple inheritance.\n> \n> I don't agree. From SQL3...\n> \n> \"To avoid name clashes, a subtype can rename selected components of the\n> representation inherited from its direct supertypes.\"\n> \n> and if that doesn't clinch it...\n\nChris, what is your position on having a single primary key for all \ninherited columns ?\n\nIt seems right for single inheritance (tree-like), but generally \nimpossible for multiple inheritance, unless we will allow multiple\n\"primary\" keys (which we could allow anyhow, as they seem useful even in \nseveral non-OO situations). For purity we could set the syntax to be \nALTERNATE KEY or ALTERNATE PRIMARY KEY, but they would really be \nstill primary keys ;) \n\n> > Passing different row types to the\n> > client from one select, forces that client to be like the C++ function\n> > programmed in advance to deal with some derived class too.\n> \n> You are assuming that the client application will be responsible for\n> dealing with these differences. What really happens is that a query is\n> more like a List<Baseclass> in C++. As long as you only call methods\n> contained in Baseclass on each element of the List, you are ok.\n\nFor more dynamic client languages you could even first ask each object \nto enumerate methods it knows about and then perhaps make a separate\nmenu \n(combobox) from them for client to choose from for each instance.\n\n------------\nHannu\n",
"msg_date": "Sun, 21 May 2000 21:50:48 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n\n> SQL3 does not appear to really have an object model. Rather, it \n> appears to be a hierarchial model on top of the relational model. \n\nIt seems like it amounts to the same thing to me. A bit like me saying\n\"A circle is the set of points equi-distant from a point\", and someone\nelse arguing \"No, a circle is the graph of points represented by the\nformula x^2 + y^2 = n\". At the end of the day they amount to the same\nthing I think.\n\nThe other thing is that some SQL3 statements seem to revert to the\nobject model - \n\"If [the table being altered] is a supertable, then an <add column\ndefinition>, without further Access Rule checking, is effectively\nperformed for each of its subtables, thereby adding the column as an\ninherited column in these subtables.\"\n\n\n> So, it says that every subtable family (or member of) has exactly one \n> maximal (root) supertable. This seems to make clear that multiple \n> inheritance is not allowed. The picture of this hierarchy is inverted \n> trees with the roots at maximal supertables with subtables branching \n> down, EXTENDing the supertable.\n\n\"Effectively, components of all direct supertype representations are\ncopied to the subtype's representation with same name and data type. To\navoid name clashes, a subtype can rename selected components of the\nrepresentation inherited from its direct supertypes\"\n\nNotice it says \"all direct supertype\", which says to me you can have multiple\n_direct_ supertypes. Also note the \"name clashes\". How can you have name\nclashes without multiple inheritance?\n\n> Only one table can be specified in the UNDER clause, which prevents the\n> following possibility:\n> \n> (1) subtable_a UNDER maximal_supertable\n> (2) subtable_b UNDER maximal_supertable\n> (3) subtable_abc UNDER subtable_a, subtable_b\n> \n> (3) is not allowed, but if it where, then subtable_abc would still have \n> had only one maximal supertable. 
If allowed, it would have inherited \n> maximal supertable twice.\n\nIf allowed, it doesn't mean it would inherit \"maximal_supertable\" twice.\nIt would have inherited it once through two routes. Like virtual\ninheritance in C++. In some cases it could mean, though, that there is not\none maximal supertable. If A inherits from B and C, you can't say\nwhich is the maximal supertable. Don't know what those guys were\nsmoking, but whatever it is I want some.\n\n> These effects describe a sharing of properties (columns) among the super and\n> subtables. A row in a supertable may be part of a single row in 0 or 1 of its\n> subtables (if I got it right) - a 1:1 relationship if any. The subtable and\n> supertable are linked together in the tree hierarchy and are not independent\n> after creation. The subtable extends additional attributes onto the supertable.\n \ni.e. The object model expressed in a convoluted way?\n\n\n> Since INHERITS, as it is implemented now, is like SQL3's UNDER, maybe \n> it should NOT allow multiple inheritance and should strive to become \n> UNDER if SQL3 is a good idea.\n\nRenaming it UNDER might be ok. Breaking multiple inheritance would be\npretty silly, even if this is what SQL3 says (which I doubt).\n\n> If the other object-oriented methods, like CLONES and ASSIMILATES (or whatever\n> you want to call them), is ever wanted in PostgreSQL, then looks like some other\n> standard(s) will have to be drawn from. I have not looked at the ODMG 3.0\n> (standard) yet. But maybe it has the missing capabilities. Is ODMG 3.0 an\n> international standard? I'd like to just download it and read it, but \n> looks like you have to buy it for $39.95.\n\nThe best way to get an overview of ODMG is probably to go to the poet\ndatabase web site and download their documentation. I say poet because\nthey are one of the few with an OQL implementation. 
But ODMG is not so\nmuch focused on query language and you could go to other ODBMS web sites\nlike Versant and look at documentation for the interfaces.\n",
"msg_date": "Mon, 22 May 2000 04:54:18 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n\n> I guess the major difference is that the hierarchical model does not \n> support multiple inheritance. \n\nI don't agree. From SQL3...\n\n\"To avoid name clashes, a subtype can rename selected components of the\nrepresentation inherited from its direct supertypes.\"\n\nand if that doesn't clinch it...\n\n\"Let the term replicated column mean a column appearing in more than one\ndirect supertable of T that is inherited by at least one of those direct\nsupertables from the same column of a single higher-level supertable.\"\n\nThat sounds like multiple repeated inheritance to me.\n\n> Passing different row types to the\n> client from one select, forces that client to be like the C++ function\n> programmed in advance to deal with some derived class too. \n\nYou are assuming that the client application will be responsible for\ndealing with these differences. What really happens is that a query is\nmore like a List<Baseclass> in C++. As long as you only call methods\ncontained in Baseclass on each element of the List, you are ok.\n\nBut those \"virtual\" methods you call need real objects to work with.\nThey need ALL the attributes in other words. The piece of language\ninfrastructure that behind the scenes instantiates all the C++ objects\nas they fall out of the database can't create abstract Baseclass\nobjects. It needs all the attributes to instantiate Subclass objects, so\nthat the application code needn't know about different classes.\n\nI suggest you download an evaluation copy of an ODBMS and have a play;\nit will probably become clear.\n\n> By sending the differing row\n> types, the procedure that processes the rows might end up expecting child rows\n> more than parent rows, even though you are using SELECT * FROM parent. The\n> different rows are processed differently.
If programmers become accustomed\n> to obtaining the child-type rows by selecting parent, they might eventually\n> mistake a parent to be ISA child when one is received. \n\nWhether a programmer is likely to make such a mistake depends more on\nthe programming language used. In an ODBMS situation almost all object\nretrievals are not via an explicit query, but rather by object\nnavigation. You might have\n\nclass Purchase {\n  Link<Customer> buyer;\n  List<StockItem> items;\n};\n\nResult<Purchase> r = Query<Purchase>::select(\"select * from purchase\");\nIterator<Purchase> i = r.iterator();\nwhile (i.hasNext()) {\n  Purchase *p = i.next();\n  Customer *c = p->buyer;\n  Iterator<StockItem> si = p->items.iterator();\n  // etc.\n}\n\nAny one of Purchase, Customer or StockItem might really be some\nsub-class of those classes for all the application knows. But the behind\nthe scenes infrastructure needs to have all the attributes so that the\napplication code need not know.\n",
"msg_date": "Mon, 22 May 2000 09:03:27 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "On Sun, 21 May 2000, Chris Bitmead wrote:\n> It is from \n> ftp://gatekeeper.dec.com/pub/standards/sql\n> and dated 1994. Is there something more recent?\n\nI believe so! 1994 is an old draft. From what I understand, SQL3 is an\nofficial ISO standard as of sometime back in 1999. It may be that the\nofficial standard cut out the things you quoted.\n\nTry downloading the stuff at:\nftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/\n\n> \n> > What is the date on the copy of the SQL/Foundation you are reading? My copy is\n> > dated September 23, 1999 ISO/IEC 9075-2 SQL3_ISO. I tried searching for the\n> > quotes above and could not find them. Do I have the correct version?\n-- \nRobert B. Easter\[email protected]\n",
"msg_date": "Sun, 21 May 2000 21:03:25 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n> \n> On Sun, 21 May 2000, Chris Bitmead wrote:\n> > It is from\n> > ftp://gatekeeper.dec.com/pub/standards/sql\n> > and dated 1994. Is there something more recent?\n> \n> I believe so! 1994 is an old draft. From what I understand, SQL3 is an\n> official ISO standard as of sometime back in 1999. It may be that the\n> official standard cut out the things you quoted.\n> \n> Try downloading the stuff at:\n> ftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/\n\nOh I see you are right. The latest draft has removed multiple\ninheritance. I wonder why; the 1994 draft for multiple inheritance\nactually looked ok. Maybe they couldn't agree on details and wanted to\nget it out the door? Pretty sad decision if you ask me. Or maybe when\nthey went back in 1999, they couldn't figure out their own 1994 document\nany more :-).\n",
"msg_date": "Mon, 22 May 2000 11:28:51 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "On Sun, 21 May 2000, Chris Bitmead wrote:\n> Hannu Krosing wrote:\n> \n> > Chris, what is your position on having a single primary key for all\n> > inherited columns ?\n> \n> What is the significance of a primary key compared to any old unique\n> key?\n\nFor referential integrity, the REFERENCES or FOREIGN KEY clauses specify a\ntable name. That table is expected to have one and only one PRIMARY KEY. You\ncan't select which unique column you want to reference within a table - it must\nbe the one and only PRIMARY KEY.\n\nI hope this is answering the question. :)\n\nMultiple inheritance and referential integrity are a complex mix. It becomes\nhard for the database to maintain data integrity and uphold the relational\nmodel that is based on functional dependency where one set of atomic values\nare determined by a key or composite key containing attributes that are not\nfunctionally dependent on each other. With multiple inheritance, it is easy to\nend up with two separate keys determining the same data, which is a conflict if\nthey are not a composite key. Something like that.\n\nThe EXTENDS type of inheritance is single-inheritance compatible. This is what\nI think SQL3 is allowing. It allows you to make a hierarchy tree out of\ntables. Only one primary key can possibly be inherited. A subtable is\nforbidden from specifying its own primary key - it must inherit one.\n\nThe CLONES and ASSIMILATES stuff that I mentioned before would require some\nrestrictions to ensure they don't break the relational model data integrity\nenforcement infrastructure (primary keys/foreign keys etc). For example, to\nmultiple inherit, it would maybe be required that the inherited table have an\ninherited primary key consisting of the composite of all inherited keys. If it\ninherits no primary key, then it is free to specify one for itself.
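To make the REFERENCES point above concrete, here is a minimal sketch (table and column names are invented purely for illustration):

```sql
CREATE TABLE customer (
    cust_id INTEGER PRIMARY KEY,  -- the one and only PRIMARY KEY
    email   TEXT UNIQUE           -- unique, but not the default target
);

-- A REFERENCES clause with no column list resolves to the referenced
-- table's PRIMARY KEY, i.e. customer(cust_id):
CREATE TABLE purchase (
    purch_id INTEGER PRIMARY KEY,
    buyer    INTEGER REFERENCES customer
);
```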
But\nremember that CLONE just branches off from its parents,\nwhich are not connected to the child, so that eases things a little for that case. \nASSIMILATES is more complicated, and it is not possible to compare it with\nanything you can do in a programming language, since a child can exist after a\nparent class has been dropped. I'd have to think about CLONES and ASSIMILATES\nmore since they multiple inherit.\n\n\nIt looks like SQL3 has taken care of the EXTENDS type for us.\n\nAttached is a diagram of the way UNDER appears to work in SQL3. The second\ngif is a rough idea of how I think INHERITS/CLONES might work. The 3rd pic is\nabout the assimilate inheritance idea, which is half baked but maybe somewhat\ninteresting.\n\n-- \nRobert B. Easter\[email protected]",
"msg_date": "Sun, 21 May 2000 21:50:34 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "\nIt is from \nftp://gatekeeper.dec.com/pub/standards/sql\nand dated 1994. Is there something more recent?\n\n> What is the date on the copy of the SQL/Foundation you are reading? My copy is\n> dated September 23, 1999 ISO/IEC 9075-2 SQL3_ISO. I tried searching for the\n> quotes above and could not find them. Do I have the correct version?\n",
"msg_date": "Mon, 22 May 2000 12:05:18 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Hannu Krosing wrote:\n> \n> > Chris, what is your position on having a single primary key for all\n> > inherited columns ?\n> \n> What is the significance of a primary key compared to any old unique\n> key?\n\nI don't know ;) Some theorists seem to think it important, and PG allows \nonly one PK per table.\n\nI just meant that the primary key (as well as any other unique key) should be \ninherited from the parent table.\n\n------------------\nHannu\n",
"msg_date": "Mon, 22 May 2000 11:30:19 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Hannu Krosing wrote:\n> >\n> > Chris Bitmead wrote:\n> > >\n> > > Hannu Krosing wrote:\n> > >\n> > > > Chris, what is your position on having a single primary key for all\n> > > > inherited columns ?\n> > >\n> > > What is the significance of a primary key compared to any old unique\n> > > key?\n> >\n> > I don't know ;) Some theorists seem to think it important, and PG allows\n> > only one PK per table.\n> >\n> > I just meant that the primary key (as well as any other unique key) should be\n> > inherited from the parent table\n> \n> What object theory would say is that oid uniquely identifies an object.\n> Other unique keys should usually be inherited.\n\nIt would be hard to define RI by just saying that some field references \"an\nOID\";\noften you want to be able to define something more specific.\n\nIt would be too much for most users to require that all primary and foreign\nkeys \nmust be of type OID.\n\nIt's about flexibility, much like the situation with SERIAL vs.\nINT DEFAULT NEXTVAL('SOME_SEQUENCE')\n\n------------\nHannu\n",
"msg_date": "Mon, 22 May 2000 12:00:56 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> Hannu Krosing wrote:\n> \n> > it would be hard to define RI by just saying that some field references \"an\n> > OID\",\n> > often you want to be able to define something more specific.\n> >\n> > It would be too much for most users to require that all primary and foreign\n> > keys must be of type OID.\n> \n> Since it would be object and relational, you could do either. But all\n> pure object databases _always_ rely on oid to define relationships, and\n> that is likely to be all an ODMG interface would support.\n\nIs the ODMG interface available on the net, or is the plan to do a Poet clone?\n\n> Unless we want to break new ground anyway.\n\nWe would need some syntax to distinguish between REFERENCES (primary key) and\nREFERENCES (oid).\n\nOf course we would also need fast lookups by oid and oid->object lookup\ntable(s)/function(s), but that's another part of the story.\n\n--------------\nHannu\n",
"msg_date": "Mon, 22 May 2000 12:38:57 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> Is the ODMG interface available on the net, or is the plan to do a Poet clone?\n\nLooking at database vendor specs will get us a fair way. Or you can\nshell out the $35 for a spec.\n\n> > Unless we want to break new ground anyway.\n> \n> We would need some syntax to distinguish between REFERENCES (primary \n> key) and REFERENCES (oid).\n\nThe trouble is the client cache code is generally all set up to cache by\noid. If you want to start referencing objects by various criteria, the\nclient cache becomes a lot more complex. More inefficient too, because\nyou would have to set up hash tables on multiple criteria and jump\nbetween them.\n\nIt's not such a big deal really. When you do an OO model you don't need\nto think about your own primary key.\n\n> Of course we would also need fast lookups by oid and oid->object lookup\n> table(s)/function(s), but that's another part of the story.\n\nAn index on oid will be a start.\n\n-- \nChris Bitmead\nmailto:[email protected]\nhttp://www.techphoto.org - Photography News, Stuff that Matters\n",
"msg_date": "Mon, 22 May 2000 21:18:25 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "> Hannu Krosing wrote:\n> \n> It's not such a big deal really. When you do an OO model you don't need\n> to think about your own primary key.\n> \n\n Hmm, I see here more and more postings saying that the OID (or the\nresult of a SEQUENCE) is usable as a key to identify an object stored\nwithin a database.\n\n Though it's true that SEQUENCE can be used to create unique\nidentifiers, the function is simply a hack - nothing more - for larger\nOO software systems, and worse than software solutions, which provide\nmore power and lower traffic.\n\n The identification of an object has to be based on a unique key, and\nit does not matter of which type it is.\n\n The foreign key is of course not useful for the oo-model, but for the\nprogrammer who produces the object-relational wrapper this is VERY\nimportant!\n\n And here again: if you use SEQUENCE for the OID you use a special\nfeature of the database ... and that is bad.\n\n Marten\n\n",
"msg_date": "Mon, 22 May 2000 18:56:02 +0200 (CEST)",
"msg_from": "Marten Feldtmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Marten Feldtmann wrote:\n> \n> > Hannu Krosing wrote:\n> >\n> > It's not such a big deal really. When you do an OO model you don't need\n> > to think about your own primary key.\n> >\n\nI don't remember saying that; it must have been someone else.\n\nBut it is true, you don't need anything but OID if you don't want to \ndistinguish your objects yourself but only need them to be distinct \nfor your program, e.g. you have two absolutely similar cheques, except that \nthere are two of them ;)\n\n----------\nHannu\n",
"msg_date": "Mon, 22 May 2000 22:18:26 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Hannu Krosing wrote:\n> \n> Chris Bitmead wrote:\n> >\n> > Hannu Krosing wrote:\n> >\n> > > Chris, what is your position on having a single primary key for all\n> > > inherited columns ?\n> >\n> > What is the significance of a primary key compared to any old unique\n> > key?\n> \n> I don't know ;) Some theorists seem to think it important, and PG allows\n> only one PK per table.\n> \n> I just meant that primary key (as well as any other uniqe key) should be\n> inherited from parent table\n\nWhat object theory would say is that oid uniquely identifies an object.\nOther unique keys should usually be inherited.\n",
"msg_date": "Tue, 23 May 2000 05:44:41 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
},
{
"msg_contents": "Hannu Krosing wrote:\n\n> it would be hard to define RI by just saying that some field references \"an\n> OID\",\n> often you want to be able to define something more specific.\n> \n> It would be too much for most users to require that all primary and foreign\n> keys\n> must be of type OID.\n\nSince it would be object and relational, you could do either. But all\npure object databases _always_ rely on oid to define relationships, and\nthat is likely to be all an ODMG interface would support. Unless we want\nto break new ground anyway.\n",
"msg_date": "Tue, 23 May 2000 06:04:19 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
}
] |
[
{
"msg_contents": "Alex Pilosov wrote:\n\n> Corba IS a performance dog compared to everything else in existence.\n> Almost every ORB in existence is dog-slow. There are some opensource ORBs\n> which are getting better, but it's still a ways off. \n\nHave you tried ORBit? Supposedly those guys found other ORBs slow and\nthey wrote their own to be fast.\n",
"msg_date": "Sun, 21 May 2000 01:55:39 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OO Patch"
}
] |
[
{
"msg_contents": "Well, I've read SDB code/doc for a few hours...\n\n1. Using RECNO db for heap.\nFor relational DB over-writing smgr means ability to re-use space after\nDELETE/UPDATE operations (without vacuum -:)). RECNO (btree by nature,\nwith record number as key) will not give us this ability. To insert record\ninto RECNO db one has either to provide \"put\" method with record number\n(where to store new record) or specify DB_APPEND in flags, to add new record\nto the end of db (without space re-using). So, the problem (one of two base\nproblems of over-writing smgr for us) \"where to store new tuple\" (ie - where\nin data file there is free space for new tuple) is not resolved.\n=> we can't use SDB smgr: there are no required features - space re-using\nand MVCC support.\n\n2. SDB' btree-s support only one key, but we have multi-key btree-s...\n\n3. How can we implement gist, rtree AND (multi-key) BTREE access methods\nusing btree and hash access methods provided by SDB?!\n\n1,2,3 => we have to preserve our access methods (and ability to add new!).\n\nNow, about WAL. What is WAL? WAL *mostly* is set of functions to \nwrite/read log (90% implemented) + *access method specific* redo/undo\nfunctions... to be implemented anyway, because of conclusion above.\n\nComments?\n\nVadim\n",
"msg_date": "Sat, 20 May 2000 18:43:37 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Berkeley DB..."
},
{
"msg_contents": "At 06:43 PM 5/20/00 -0700, Vadim Mikheev wrote:\n\n> 1. Using RECNO db for heap.\n> For relational DB over-writing smgr means ability to re-use space after\n> DELETE/UPDATE operations (without vacuum -:)). RECNO (btree by nature,\n> with record number as key) will not give us this ability. To insert record\n> into RECNO db one has either to provide \"put\" method with record number\n> (where to store new record) or specify DB_APPEND in flags, to add new record\n> to the end of db (without space re-using). So, the problem (one of two base\n> problems of over-writing smgr for us) \"where to store new tuple\" (ie - where\n> in data file there is free space for new tuple) is not resolved.\n> => we can't use SDB smgr: there are no required features - space re-using\n> and MVCC support.\n\nAll of the Berkeley DB access methods reuse space. We return free space\nto a pool and allocate from the pool in the ordinary course of operation.\nWe have no notion of vacuum.\n\nEmpty pages get appended to a free list, and will be reused on next page\nallocation. Empty space on pages (from deleted tuples) where the rest\nof the page isn't empty will get reused the next time the page is\nvisited. So you do get space reuse. We don't return blocks to the\nfile system automatically (requires reorg, which is hard). \"Appending\"\nmeans appending in key space; that may or may not be physically at the\nend of the file.\n\nWe do, however, do reverse splits of underfull nodes, so we're aggressive\nat getting empty pages back on the free list.\n\nIn short, I think the space reuse story of Berkeley DB is better than\nthe current space reuse story in PostgreSQL, even for heaps. This is\nbecause the current heap AM doesn't opportunistically coalesce pages\nto make free pages available for reuse by new inserts.\n\nWe don't have multi-version concurrency control. It's a feature we'd like\nto see added, but it certainly represents a substantial new engineering\neffort.
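As an aside, the free-list behaviour described above can be sketched as a toy model; this is invented illustrative code, not Berkeley DB's actual implementation. Empty pages go on a free list and are handed out again before the file grows:

```cpp
#include <cassert>
#include <vector>

// Toy page allocator: freed pages are recycled before the file grows,
// mirroring the "empty pages get appended to a free list" policy above.
class PageFile {
public:
    int alloc() {
        if (!free_list.empty()) {      // reuse an empty page first
            int pid = free_list.back();
            free_list.pop_back();
            return pid;
        }
        return next_page++;            // otherwise extend the file
    }
    void free_page(int pid) { free_list.push_back(pid); }
private:
    std::vector<int> free_list;        // ids of reusable empty pages
    int next_page = 0;                 // next never-used page id
};
```

Reverse splits in the real system feed this free list aggressively, so the file grows only when no empty page is available.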
As I've said before, we'd be glad to support you in that project\nif you decide to undertake it.\n\n> 2. SDB' btree-s support only one key, but we have multi-key btree-s...\n\nThis is a misunderstanding. Berkeley DB allows you to use arbitrary\ndata structures as keys. You define your own comparison function, which\nunderstands your key structure and is capable of doing comparisons between\nkeys. It's precisely equivalent to the support you've got in PostgreSQL\nnow, since your comparator has to understand key schema (including the\npresence or absence of nulls).\n\nYou'd define your own comparator and your own key type. You'd hand\n(key, value) pairs to Berkeley DB, and we'd call your comparator to\ncompare keys during tree descent. The key you hand us is an arbitrarily\ncomplex data structure, but we don't care.\n\nYou get another benefit from Berkeley DB -- we eliminate the 8K limit\non tuple size. For large records, we break them into page-sized\nchunks for you, and we reassemble them on demand. Neither PostgreSQL\nnor the user needs to worry about this, it's a service that just works.\n\nA single record or a single key may be up to 4GB in size.\n\n> 3. How can we implement gist, rtree AND (multi-key) BTREE access methods\n> using btree and hash access methods provided by SDB?!\n\nYou'd build gist and rtree on top of the current buffer manager, much\nas rtree is currently implemented on top of the lower-level page manager\nin PostgreSQL. Multi-key btree support is there already, as is multi-\nkey extended linear hashing. In exchange for having to build a new\nrtree AM, you'd get high-performance persistent queues for free.\n\nI'd argue that queues are more generally useful than rtrees. I understand\nthat you have users who need rtrees. I wrote that access method in\nPostgres, and used it extensively for geospatial indexing during the\nSequoia 2000 project. I'm a big fan.
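The custom-comparator point above can be sketched as follows; the key structure and names are invented for illustration and this is not Berkeley DB's actual API, just the idea that a multi-column btree ordering is one user-supplied comparison over an opaque key:

```cpp
#include <cassert>
#include <string>

// An arbitrary composite key: two "columns", compared in order.
struct CompositeKey {
    int dept;               // first key column
    std::string name;       // second key column
};

// User-written comparator: dept first, then name. The access method
// only needs the sign of the result to descend the tree.
int key_compare(const CompositeKey &a, const CompositeKey &b) {
    if (a.dept != b.dept)
        return a.dept < b.dept ? -1 : 1;
    if (a.name != b.name)
        return a.name < b.name ? -1 : 1;
    return 0;
}
```

Because the index code treats keys as opaque and defers ordering to this one function, "multi-key" support costs nothing extra in the access method itself.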
Nevertheless, there are more\ndatabase customers looking for fast queues than are looking for spatial\nindices.\n\n> 1,2,3 => we have to preserve our access methods (and ability to add new!).\n\nAgain, you can add new access methods in Berkeley DB in the same way\nthat you do for PostgreSQL now.\n\n\n> Now, about WAL. What is WAL? WAL *mostly* is set of functions to \n> write/read log (90% implemented) + *access method specific* redo/undo\n> functions... to be implemented anyway, because of conclusion above.\n\nYou wouldn't need to rewrite the current access-method undo and redo\nfunctions in Berkeley DB; they're there, and they work. You'd need to\ndo that work for the new access methods you want to define, but as you\nnote, that work is required whether you roll your own or use Berkeley\nDB.\n\nI encourage you to think hard about the amount of work that's really\nrequired to produce a commercial-grade recovery and transaction system.\nThis stuff is extremely hard to get right -- you need to design, code,\nand test for very high-concurrency, complex workloads. The log is a\nnew source of contention, and will be a gate to performance. The log\nis also a new way to consume space endlessly, so you'll want to think\nabout backup and checkpoint support. With Berkeley DB, you get both\ntoday. Our backup support permits you to do on-line backups. Backups\ndon't acquire locks and don't force a shutdown.\n\nTesting this stuff is tricky. For example, you need to prove that you're\nable to survive a crash that interrupts the three internal page writes\nthat you do in the btree access method on a page split. Postgres (when\nI wrote the Btree AM) carefully ordered those writes to guarantee no\nloss of data, but it was possible to crash with the children written and\nthe parent lost. The result is an inefficiency in the tree structure\nthat you'll never recover, but that you can deal with at read time.
This\nis an example of a case that Berkeley DB gets right.\n\nThe advantage of Berkeley DB is that we've got five years of commercial\ndeployment behind us. The code is in production use, and is known to\nsupport terabyte-sized databases, hundreds of concurrent threads with\narbitrary read/write mixes, and system and application crashes. By\nputting the log and the data on separate spindles, we are able to survive\nloss of either device without losing committed data. Big companies have\nmade significant bets on the software by deploying it in mission-critical\napplications. It works.\n\nPlus, we're continuing to work on the code, and we're paid real money to\ndo that. We're able to deliver significant new features and performance\nimprovements about three times a year.\n\nAll of that said, I'd boil Vadim's message down to this:\n\n\t+ With Berkeley DB, you'd need to reimplement multi-version\n\t concurrency control, and that's an opportunity to introduce\n\t new bugs.\n\n\t+ With PostgreSQL, you'll need to implement logging and recovery,\n\t and that's an opportunity to introduce new bugs.\n\nI don't think that either alternative presents insurmountable difficulties.\nWhich you choose depends on the technical issues and on your willingness\nto integrate code from outside the project into PostreSQL's internals, to\na degree that you've never done before.\n\nRegards,\n\t\t\t\t\tmike\n\n",
"msg_date": "Sun, 21 May 2000 11:36:59 -0700",
"msg_from": "\"Michael A. Olson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "\"Michael A. Olson\" wrote:\n\n> You get another benefit from Berkeley DB -- we eliminate the 8K limit\n> on tuple size. For large records, we break them into page-sized\n> chunks for you, and we reassemble them on demand. Neither PostgreSQL\n> nor the user needs to worry about this, it's a service that just works.\n> \n> A single record or a single key may be up to 4GB in size.\n\nThat's certainly nice. But if you don't access a BIG column, you have to\nretrieve the whole record? A very nice idea of the Postgres TOAST idea\nis that you don't. You can have...\nCREATE TABLE image (name TEXT, size INTEGER, giganticTenMegImage GIF);\nAs long as you don't select the huge column you don't lift it off disk.\nThat's pretty nice. In other databases I've had to do some annoying\nrefactoring of data models to avoid this.\n",
"msg_date": "Mon, 22 May 2000 10:10:48 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "Can I ask you a simple question? Does Berkeley DB support encodings\nother than ASCII?\n--\nTatsuo Ishii\n\n",
"msg_date": "Mon, 22 May 2000 10:14:31 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "At 10:14 AM 5/22/00 +0900, Tatsuo Ishii wrote:\n\n> Can I ask you a simple question? Does Berkeley DB support encodings\n> other than ASCII?\n\nBerkeley DB is entirely agnostic on data types. We store and retrieve\nkeys and values; you define the types and assign semantics to them.\nWe've got a number of customers storing wide character data in various\nencodings and character sets in Berkeley DB.\n\nOur default btree comparator and hash function are simple bit string\noperators. You'd need to write a comparison function for btrees that\nunderstood the collating sequence of the character set you store.\n\n\t\t\t\t\tmike\n\n",
"msg_date": "Sun, 21 May 2000 18:25:07 -0700",
"msg_from": "\"Michael A. Olson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "I'm responding in a single message to several questions prompted by my\nmessage of this morning.\n\nChris Bitmead asked:\n\n> > A single record or a single key may be up to 4GB in size.\n> \n> That's certainly nice. But if you don't access a BIG column, you have to\n> retrieve the whole record?\n\nYou can do partial reads, but only at the expense of complicating the\ncode in the PostgreSQL server. We provide some reasonable interfaces\nfor fetching only part of large records. However, you need to know\nwhether you should call them or not. As a result, you'd have to\nrecord in the system catalog somewhere that a particular table contained\nbig tuples, and you'd have to write your fetch code to read only the\nbyte range you care about.\n\nThat would complicate the server, since you'd want to do simple\nfetches for the simple case, too.\n\nVadim Mikheev made some good points on space reuse. Unless a page is\nentirely empty, space on the page is only reusable for keys in the right range. For append-only \nworkloads (like increasing heap tids), that's not what you want.\n\nVadim then asked:\n\n> You can't merge two 49% empty pages in one. So, how to reuse\n> this 49%? How will we able to implement feature that good\n> databases have: one can specify while table creation -\n> \"insert new tuples on pages which N% empty\"?\n\n\nWe already recognize the special case of in-order insertions (as in\nthe case of increasing heap tids). We split pages so that the right\nchild is nearly empty and left is nearly full. That gives you close\nto 100% space utilization at build time.
Adding a fill factor to\nthe initialization code would be very easy.\n\n> And, while we are on heap subject - using index (RECNO) for heap\n> means that all our secondary-index scans will performe TWO\n> index scans - first, to find recno in secondary-index, and\n> second, to find heap tuple using recno (now indices give us\n> TID, which is physical address).\n\nWe're not going to resolve this question without building both\nsystems and measuring their performance. The non-leaf levels of\nbtrees are pretty much always in the cache because they're hot.\nWhether your fetch-a-tuple code path is shorter than my fetch-\na-tuple code path is undecided.\n\nFrankly, based on my experience with Berkeley DB, I'd bet on mine.\nI can do 2300 tuple fetches per CPU per second, with linear scale-\nup to at least four processors (that's what we had on the box we\nused). That's 9200 fetches a second. Performance isn't going\nto be the deciding issue.\n\n(The test system was a mid-range Solaris box -- reasonable, but not\nextravagant, clock speed, memory, and disk.)\n\nOn testing failure at critical points in the code, Vadim wrote:\n\n> Oh, testing of this case is very easy - I'll just stop backend\n> using gdb in critical points and will turn power off -:))\n> I've run 2-3 backends under gdb to catch some concurrency-related\n> bug in buffer manager - this technique works very well -:)\n\nFor limited concurrency and fairly simple problems, that technique\nworks well. You should plan to test PostgreSQL with hundreds of\nconcurrent backends with a complex workload for days in order to\nconvince people that the system works correctly. This is what the\ncommercial vendors (including Sleepycat!) do. Your testing\nstrategy should include randomly killing the system to demonstrate\nthat you recover correctly.\n\nI'm only warning you to be careful and to take this seriously. It's\nvery hard to do the kind of testing you should.
The recovery system\nis generally the most poorly-exercised part of the system, but it's\nthe one piece that absolutely has to work flawlessly. It only runs\nafter your system has crashed, and your customer is already angry.\n\nFinally, Vadim makes the point that switching to Berkeley DB forces\nyou to stop working on code you understand, and to pick up a new\npackage altogether. Worse, you'd need to do some pretty serious\nengineering to get multi-version concurrency control into Berkeley\nDB before you could use it.\n\nThis is a pretty compelling argument to me. I've been busy\nexplaining how you *could* make the switch, but the real question\nis whether you *should*. I can answer all of Vadim's questions\nreasonably. Frankly, though, if I were in charge of the engineering\neffort for PostgreSQL, I'd be disinclined to use Berkeley DB on the\nstrength of the interface changes it requires and the effort that\nwould be required to implement MVCC.\n\nI say this in the spirit of complete disclosure -- we'd like to\nsee you use our software, but you need to make a business decision\nhere. If you hadn't already done MVCC, I'd be arguing the other\nside, but you have.\n\nRegards,\n\t\t\t\t\tmike\n\n",
"msg_date": "Sun, 21 May 2000 21:09:27 -0700",
"msg_from": "\"Michael A. Olson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
}
] |
[
{
"msg_contents": "> > We can't read data from the index. It would be nice if we could, but we\n> > can't. I think we believe that there are very few cases where this\n> > would be win. Usually you need non-indexed data too.\n> \n> I have used other databases where this _is_ possible in the past, and\n> the win is big when the programmer codes for it. Sure, most cases don't\n> just use indexed data, but if the programmer knows that the database\n> supports index-only scans then sometimes an extreme performance\n> requirement can be met.\n> \n\nYes, totally true. It is an extreme optimization. In Ingres, you could\nactually SELECT on the index and use that when needed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 20 May 2000 21:59:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More Performance"
}
] |
[
{
"msg_contents": "The 7.0-2 RPMset fixes the following:\n1.)\tSPI headers are now the 7.0 set, and not the 6.5.3 set;\n2.)\tpg_options default to NOT enable syslog, or extended query logging, as\nsyslogd has some issues with long queries (such as issued by psql's \\d\ncommand!);\n3.)\tAlpha patches have returned!\n\nAs usual, read '/usr/doc/postgresql-7.0/README.rpm' for more information.\n\n******initdb required for those still running releases prior to 7.0RC5!*******\n\nUsers running 6.5.x (or earlier!) need to thoroughly read and understand the\nREADME.rpm before installing (it is available on the ftp site as README in the\nRPM distribution directory, as well as in the 'unpacked' subdirectory).\n\nThe spec file for this release, as well as all patches and supplemental\nprograms are available in the 'unpacked' subdirectory.\n\nRPMset's are available at:\nftp://ftp.postgresql.org/pub/binary/v7.0/redhat-RPM\n\nFurther information available at http://www.ramifordistat.net/postgres, as\nusual; or by e-mail at [email protected] (i prefer RPM questions to go\nto the list instead of directly to me....).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 20 May 2000 23:07:36 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7.0-2 RPMset released."
},
{
"msg_contents": "On Sat, 20 May 2000, Lamar Owen wrote:\n\n> The 7.0-2 RPMset fixes the following:\n> 1.)\tSPI headers are now the 7.0 set, and not the 6.5.3 set;\n> 2.)\tpg_options default to NOT enable syslog, or extended query logging, as\n> syslogd has some issues with long queries (such as issued by psql's \\d\n> command!);\n> 3.)\tAlpha patches have returned!\n> \n> As usual, read '/usr/doc/postgresql-7.0/README.rpm' for more information.\n\nLamar,\n\nOn the GENERAL list the issue of firing up a server, and the silent flag\nused by the default redhatter 'postgresql' script in init.d came up.\n\nI redirect output to /var/lib/pgsql/postlog after I rm the -S from the\ncall to the server...not having pg complain when I screw up my CGI\nscripts is no good to me.\n\nIf I were to have a vote, I'd urge whomever to add a comment to 'postgresql\nthe script' to offer logging in the manner described above.\n\nI do this on BSD, UnixWare and Linux (a few flavours) and have never had\na problem. Other than with my own code!\n\nTo be a bit clearer (tough this early): I rm the -S muzzle, >> stdout and\nstderr to /var/lib/pgsql/postlog, then run the whole enchilada in the \nbackground ( 2>&1 &' ). This works well.\n\nI call the script from rc.local and I still get the [OK] in brilliant green\nfollowed by the pid. Nothing appears broken *and* I get a log full of\ninsensitive complaints about my programming skills. Who could ask for\nmore? Here is what I do:\n\nsu -l postgres -c '/usr/bin/postmaster -i -D/var/lib/pgsql >> \n /var/lib/pgsql/postlog 2>&1 &'\n\nCheers,\nTom\n\n> ******initdb required for those still running releases prior to 7.0RC5!*******\n> \n> Users running 6.5.x (or earlier!) 
need to thoroughly read and understand the\n> README.rpm before installing (it is available on the ftp site as README in the\n> RPM distribution directory, as well as in the 'unpacked' subdirectory).\n> \n> The spec file for this release, as well as all patches and supplemental\n> programs are available in the 'unpacked' subdirectory.\n> \n> RPMset's are available at:\n> ftp://ftp.postgresql.org/pub/binary/v7.0/redhat-RPM\n> \n> Further information available at http://www.ramifordistat.net/postgres, as\n> usual; or by e-mail at [email protected] (i prefer RPM questions to go\n> to the list instead of directly to me....).\n> \n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n\n\n\n---------------------------------------------------------------------------\n North Richmond Community Mental Health Center \n---------------------------------------------------------------------------\nThomas Good, MIS Coordinator tomg@ { admin | q8 } .nrnet.org\n Phone: 718-354-5528 \n Fax: 718-354-5056 \n---------------------------------------------------------------------------\n North Richmond Systems PostgreSQL s l a c k w a r e \n Are Powered By: RDBMS |---------- linux \n---------------------------------------------------------------------------\n\n\n",
"msg_date": "Sun, 21 May 2000 07:55:07 -0400 (EDT)",
"msg_from": "Thomas Good <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 7.0-2 RPMset released."
},
{
"msg_contents": "On Sun, 21 May 2000, Thomas Good wrote:\n> On the GENERAL list the issue of firing up a server, and the silent flag\n> used by the default redhatter 'postgresql' script in init.d came up.\n\nYes, I read the thread. I didn't write the original initscript -- but\nhopefully have changed it to more your liking (see below).\n \n> I redirect output to /var/lib/pgsql/postlog after I rm the -S from the\n> call to the server...not having pg complain when I screw up my CGI\n> scripts is no good to me.\n \n> If I were to have a vote, I'd urge whomever to add a comment to 'postgresql\n> the script' to offer logging in the manner described above.\n\nThe 7.0 RPM's /etc/rc.d/init.d/postgresql script uses pg_ctl, rather than\ndirectly starting postmaster (and has since 7.0beta2, IIRC) -- and the\nPGDATA/postmaster.opts.default (which, by default, only has '-i' -- no -S) file\nis used for postmaster startup options, rather than passing them on the command\nline. The changelog notice for this was buried back in the beta cycle release\nannouncements -- I should have duplicated all notices for the 7.0-1 release\nannouncement. \n\nMore documentation will be written as I have time (or input to README.rpm, or\npatches to README.rpm).....\n\nLook at the new initscript, then let me know about possible improvements (of\nwhich I am sure improvements can be made!). Currently stderr and stdout \nfrom pg_ctl are piped to /dev/null, but that is easy enough to change. And, by\nchanging the PGDATA/pg_options file's contents, you can turn on syslog -- edit\n/etc/syslog.conf to get syslogging working -- just watch out for long queries!\n\nLogging is one of the hot issues in the RPMset right now, as the comments about\nsyslog in the -2 release announcement show. 
The real problem with redirecting\nthe postmaster output is the issue of log rolling, which is impossible\nto do in the 'classic' stderr/stdout redirect UNLESS you throw down postmaster\nwhen rolling the log (unless you know a trick I don't).\n\nI am trying to get _real_ logging, by way of syslog, rather than with redirects\n-- however, the redhat syslog dies under long queries (such as the one issued\nby psql in response to a \\d directive).\n\nSince some things were missed in the beta cycle's announcements (which only\nwere sent to pgsql-hackers), notice that the new 7.0 RPMset will create a new\nPGDATA in /var/lib/pgsql/data instead of /var/lib/pgsql. There are other\nchanges -- read /usr/doc/postgresql-7.0/README.rpm and the pgsql-hackers\narchives on the subject.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 21 May 2000 22:26:54 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 7.0-2 RPMset released."
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> The real problem with redirecting the postmaster output is the issue\n> of log rolling, which is impossible to do in the 'classic'\n> stderr/stdout redirect UNLESS you throw down postmaster when rolling\n> the log (unless you know a trick I don't).\n\nYes. I think ultimately we will have to do some logging support code of\nour own to make this work the way we want. My thought at the moment is\nthere's nothing wrong with logging to stderr, as long as there's some\ncode somewhere that periodically closes stderr and reopens it to a new\nlog file. There needn't be a lot of code involved, we just need a\nwell-thought-out spec for how it should work. Comments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 00:19:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Logging (was Re: PostgreSQL 7.0-2 RPMset released.)"
},
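Tom Lane's suggestion above — keep logging to stderr, but have some code periodically close it and reopen it onto a fresh file — can be sketched in a few lines of C. This is only an illustration of the idea, not PostgreSQL code; the SIGHUP convention (mentioned later in this thread), the helper names, and the log path are assumptions:

```c
#include <signal.h>
#include <stdio.h>

volatile sig_atomic_t reopen_requested = 0;

/* Signal handler: just set a flag; the main loop does the real work. */
void request_reopen(int signo)
{
    (void) signo;
    reopen_requested = 1;
}

/* Re-point stderr at (a possibly freshly renamed) log file.  A log
 * roller can then do "mv postlog postlog.old && kill -HUP <pid>" and
 * the daemon starts a new file without a restart.  Returns 0 on
 * success; on failure the stream's state is implementation-defined. */
int reopen_stderr(const char *path)
{
    if (freopen(path, "a", stderr) == NULL)
        return -1;
    setvbuf(stderr, NULL, _IOLBF, 0);   /* line-buffer the log */
    reopen_requested = 0;
    return 0;
}
```

A real implementation would install request_reopen with sigaction() and test reopen_requested only at a safe point in the postmaster's main loop, never inside the handler itself.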
{
"msg_contents": "\nOn Mon, 22 May 2000 00:19:45 -0400 Tom Lane wrote:\n\n> There needn't be a lot of code involved, we just need a\n> well-thought-out spec for how it should work. Comments anyone?\n\nI run postmaster under Dan Bernstein's \"daemontools\", which include\nlogging facilities:\n\n\thttp://cr.yp.to/daemontools.html\n\nThe summary of this setup is that postmaster runs in the forground\nwriting error messages to standard error, and standard error is a pipe\nto another process. The second process is responsible for selecting\nmessages to write, writing them, and rotating the log file.\n\nMore traditional Unix solutions would involve teaching postmaster what\nthe name of its log file is, and to reopen it on receipt of some\nsignal. Usually SIGHUP is used since SIGHUP is unlikely to be useful\nto a daemon running in the background.\n\nThere are issues for logging errors that many applications handle\nbadly. What happens when:\n\no there is an I/O error writing to a log file?\no the log file is at maximum size?\no the filesystem the log file is in is full?\no a write to a log file blocks?\n\nTo take a not random example, syslogd is OK for log file rotation but\nmakes a mess and a muddle of things otherwise including the points I\nlist.\n\nRegards,\n\nGiles\n",
"msg_date": "Mon, 22 May 2000 16:43:00 +1000",
"msg_from": "Giles Lean <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging (was Re: PostgreSQL 7.0-2 RPMset released.) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Lamar Owen <[email protected]> writes:\n> > The real problem with redirecting the postmaster output is the issue\n> > of log rolling, which is impossible to do in the 'classic'\n> > stderr/stdout redirect UNLESS you throw down postmaster when rolling\n> > the log (unless you know a trick I don't).\n\nI think I do ;-) read on...\n\n> Yes. I think ultimately we will have to do some logging support code of\n> our own to make this work the way we want. My thought at the moment is\n> there's nothing wrong with logging to stderr, as long as there's some\n> code somewhere that periodically closes stderr and reopens it to a new\n> log file. There needn't be a lot of code involved, we just need a\n> well-thought-out spec for how it should work. Comments anyone?\n> \n> regards, tom lane\n\nI really enjoy using apache's rotatelogs program. stderr is\nredirected through a pipe to a very small and robust C program,\nrotatelogs, that takes as arguments number of seconds between\nlog rotates and the log filename. Logs are rotated every\nargv[2] seconds. The rotatelogs program takes care of closing\nand reopening, and nothing has to done from the application,\njust start postmaster with '2>&1 | rotatelogs ...' at the end,\nand log to stderr.\n\nAlso, BSD license! :)\n\nFor reference, I enclose the program as an attachment; it's\nless than 100 lines. Also, here's the man page:\n\nName\n rotatelogs - rotate Apache logs without having to kill\nthe\n server\n\nSynopsis\n rotatelogs logfile rotationtime\n\nDescription\n rotatelogs is a simple program for use in conjunction\nwith\n Apache's piped logfile feature which can be used\nlike\n this:\n\n TransferLog \"|rotatelogs \n/path/to/logs/access_log\n 86400\"\n\n This creates the files /path/to/logs/access_log.nnnn\nwhere\n nnnn is the system time at which the log nominally \nstarts\n (this time will always be a multiple of the rotation\ntime,\n so you can synchronize cron scripts with it). 
At the \nend\n of each rotation time (here after 24 hours) a new log\nis\n started.\n\nOptions\n logfile\n The path plus basename of the logfile. The \nsuffix\n .nnnn is automatically added.\n\n rotationtime\n The rotation time in seconds.\n\nSee Also\n httpd(8)",
"msg_date": "Mon, 22 May 2000 17:02:37 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Logging (was Re: PostgreSQL 7.0-2 RPMset\n released.)"
},
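The rotation rule in that man page — suffix the base name with the system time rounded down to a multiple of the rotation interval — is simple enough to sketch in C. The two helper names below are mine, not Apache's:

```c
#include <stdio.h>

/* Nominal start time of the current log: the largest multiple of
 * 'interval' seconds not exceeding 'now'.  Because the result is
 * always a multiple of the rotation time, cron scripts can predict
 * the file names, as the man page notes. */
long log_period_start(long now, long interval)
{
    return now - (now % interval);
}

/* Build "<base>.<nnnn>" in the style shown in Palle's directory
 * listing, e.g. "access_log.0958694400" for an 86400-second period. */
void log_file_name(char *dst, size_t dstlen, const char *base, long start)
{
    snprintf(dst, dstlen, "%s.%010ld", base, start);
}
```

A filter built on these helpers would read log lines from a pipe and, whenever log_period_start() changes, close the current file and fopen() the next name — which is essentially all the Apache program does, minus option parsing.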
{
"msg_contents": "Palle Girgensohn writes:\n\n> > > The real problem with redirecting the postmaster output is the issue\n> > > of log rolling,\n\n> I really enjoy using apache's rotatelogs program. stderr is\n> redirected through a pipe to a very small and robust C program,\n> rotatelogs, that takes as arguments number of seconds between\n> log rotates and the log filename. Logs are rotated every\n> argv[2] seconds. The rotatelogs program takes care of closing\n> and reopening, and nothing has to done from the application,\n> just start postmaster with '2>&1 | rotatelogs ...' at the end,\n> and log to stderr.\n\nNow there's a good idea. Why don't we abduct that program and teach pg_ctl\nabout it. (After all, we abducted that one as well. :)\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 22 May 2000 23:58:33 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Logging (was Re: PostgreSQL 7.0-2 RPMset\n released.)"
},
{
"msg_contents": "On Mon, 22 May 2000, Peter Eisentraut wrote:\n\n> Now there's a good idea. Why don't we abduct that program and teach pg_ctl\n> about it. (After all, we abducted that one as well. :)\n\nImitation is the sincerest form of flattery. Not to mention this is the\nbazar.\n\n\nRod\n--\nRoderick A. Anderson\[email protected] Altoplanos Information Systems, Inc.\nVoice: 208.765.6149 212 S. 11th Street, Suite 5\nFAX: 208.664.5299 Coeur d'Alene, ID 83814\n\n",
"msg_date": "Mon, 22 May 2000 16:13:31 -0700 (PDT)",
"msg_from": "\"Roderick A. Anderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Logging (was Re: PostgreSQL 7.0-2 RPMset\n released.)"
},
{
"msg_contents": "Hi,\n\nGlad to hear you like the idea.\n\nMay I also suggest my favourite patch for rotatelogs\n(enclosed). It creates a \"hard link\" to the latest log using\nthe base logfilename. i.e:\n\n-rw-r--r-- 1 root wheel 8901 May 19 04:45\nlocalhost-access.log.0958694400\n-rw-r--r-- 2 root wheel 18430 May 21 17:05\nlocalhost-access.log.0958867200\n-rw-r--r-- 2 root wheel 18430 May 21 17:05 localhost-access.log\n\nThis is very nice when developing and debugging, since you\ndon't need to check for the latest log's filename, but can just\nissue \"tail -f localhost-access.log\". FreeBSD'er can enjoy tail\n-F, which will follow the log even after a rotation...\n\nThe function should probably be optional?\n\nCheers,\nPalle\n\n\"Roderick A. Anderson\" wrote:\n> \n> On Mon, 22 May 2000, Peter Eisentraut wrote:\n> \n> > Now there's a good idea. Why don't we abduct that program and teach pg_ctl\n> > about it. (After all, we abducted that one as well. :)\n> \n> Imitation is the sincerest form of flattery. Not to mention this is the\n> bazar.\n> \n> Rod\n> --\n> Roderick A. Anderson\n> [email protected] Altoplanos Information Systems, Inc.\n> Voice: 208.765.6149 212 S. 11th Street, Suite 5\n> FAX: 208.664.5299 Coeur d'Alene, ID 83814\n\n-- \nPalle",
"msg_date": "Tue, 23 May 2000 01:57:05 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Logging (was Re: PostgreSQL 7.0-2 RPMset\n released.)"
}
] |
[
{
"msg_contents": "Hi,\n\nI have now desperately tried to use the 'use' keyword within my plperl\nscripts without success.\n\nDoes plperl not load libraries dynamically?\n\nI am trying to import a .pm module from plperl and I get the following error\nmessage:\n\tcreation of function failed : require trapped by operation mask at (eval\n28) line 2.\n\nWhat I want to be able to do is to access my own defined perl libraries from\nplperl.\n\nI have looked at the plperl.c file and I noticed that the dynamic loader is\nnot initialised.\n\nI would be most grateful for any help. Also, is there any documentation\nonline for plperl?\n\nRegards,\nRagnar\n\n\n",
"msg_date": "Sun, 21 May 2000 15:58:18 +0100",
"msg_from": "\"Ragnar Hakonarson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "plperl and the dynamic loader"
},
{
"msg_contents": "\"Ragnar Hakonarson\" <[email protected]> writes:\n> Does plperl not load libraries dynamically?\n\n> I am trying to import a .pm module from plperl and I get the following error\n> message:\n> \tcreation of function failed : require trapped by operation mask at (eval 28) line 2.\n\nMakes sense to me. plperl runs in a \"safe\" Perl interpreter, and if you\ncould load arbitrary perl code then the safety would be bypassed.\n\nThe reason for this restriction is that whatever the Perl code does will\nbe done with the permissions of the Postgres user (since it's running in\na Postgres-owned backend). It'd be a huge security hole if users could\ninvoke arbitrary Perl code in that environment. So, we only permit\n\"safe\" operations.\n\nIf you like living dangerously you could weaken the protection to suit\nyour taste --- read about the Safe and Opcode perl modules, and then\ntwiddle the plperl source code to select whatever operator mask you\nfeel comfortable with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 May 2000 13:34:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: plperl and the dynamic loader "
}
] |
[
{
"msg_contents": "On the road to sanitary lex files I finally found a simple answer for the\nnon-portable <<EOF>>. Patch attached. Any objections/concerns/comments?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden",
"msg_date": "Sun, 21 May 2000 18:45:04 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "No more <<EOF>>"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On the road to sanitary lex files I finally found a simple answer for the\n> non-portable <<EOF>>. Patch attached. Any objections/concerns/comments?\n\nSeems reasonable --- but is it worth worrying about? I had pretty\nmuch concluded that we have no hope of working with non-flex lexers\nanyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 May 2000 13:41:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No more <<EOF>> "
}
] |
[
{
"msg_contents": "Hi Tom,\n\nThanks for you help.\n\nI have not got a great deal of experience with C.\n\nCould you be so kind to tell me how I directly link the DynaLoader into\nplperl.so.\n\nOnce I got the DynaLoader in place my task is complete.\n\nYou might wonder what I am doing. I am implementing a function in Postures\nthat will act as a stored procedure over ODBC. I need to connect to many\ndatabases from the stored procedure and I also need to connect to my own\nperl .pm modules.\n\nRegards,\nRagnar\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 21 May 2000 20:36\nTo: Ragnar Hakonarson\nSubject: Re: [HACKERS] plperl and the dynamic loader\n\n\n\"Ragnar Hakonarson\" <[email protected]> writes:\n> I get the following error from the backend:\n> \tLoad of file /..../plperl.so: undefined symbol: boot_DynaLoader\n\n> What else do I have to do to enable this?\n\nIIRC, DynaLoader is a static library not dynamic, so you might have to\nlink it directly into plperl.so. Not sure about that. I recall that\nMark Hollomon and I had some troubles getting plperl to build portably\nwhen it itself depended on DynaLoader, so he rewrote it to avoid needing\nDynaLoader ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 21 May 2000 22:49:47 +0100",
"msg_from": "\"Ragnar Hakonarson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: plperl and the dynamic loader "
}
] |
[
{
"msg_contents": "I attach the current draft of my document explaining the upcoming\nfunction manager interface changes. This has been modified from the\npreviously circulated version on the basis of comments from Jan and\nothers. There is also a preliminary fmgr.h showing actual code for\nthe proposed interface macros.\n\nCurrent implementation status: the core fmgr routines have been\nrewritten and tested, also the executor's function-call code and the\nfunction handler routines for the three PL languages. I haven't yet\nchanged over any pg_proc entries to new-style except the PL handlers.\nHowever, passing nulls into and out of SQL, plpgsql, and plperl\nfunctions works properly now, and it would work in pltcl if pltcl had a\nconvention for distinguishing nulls from empty strings (Jan, do you want\nto do something about that?). I still need to change trigger handling\nso we can get rid of the CurrentTriggerData global variable, and then\ncan start working on updating individual function routines to the new\nconventions. I will probably not start on that work until after we make\nthe 7.1 branch and I can commit what I have.\n\n\t\t\tregards, tom lane\n\n\n\nProposal for function-manager redesign\t\t\t21-May-2000\n--------------------------------------\n\nWe know that the existing mechanism for calling Postgres functions needs\nto be redesigned. It has portability problems because it makes\nassumptions about parameter passing that violate ANSI C; it fails to\nhandle NULL arguments and results cleanly; and \"function handlers\" that\nsupport a class of functions (such as fmgr_pl) can only be done via a\nreally ugly, non-reentrant kluge. (Global variable set during every\nfunction call, forsooth.) 
Here is a proposal for fixing these problems.\n\nIn the past, the major objections to redoing the function-manager\ninterface have been (a) it'll be quite tedious to implement, since every\nbuilt-in function and everyplace that calls such functions will need to\nbe touched; (b) such wide-ranging changes will be difficult to make in\nparallel with other development work; (c) it will break existing\nuser-written loadable modules that define \"C language\" functions. While\nI have no solution to the \"tedium\" aspect, I believe I see an answer to\nthe other problems: by use of function handlers, we can support both old\nand new interfaces in parallel for both callers and callees, at some\nsmall efficiency cost for the old styles. That way, most of the changes\ncan be done on an incremental file-by-file basis --- we won't need a\n\"big bang\" where everything changes at once. Support for callees\nwritten in the old style can be left in place indefinitely, to provide\nbackward compatibility for user-written C functions.\n\nNote that neither the old function manager nor the redesign are intended\nto handle functions that accept or return sets. Those sorts of functions\nneed to be handled by special querytree structures.\n\n\nChanges in pg_proc (system data about a function)\n-------------------------------------------------\n\nA new column \"proisstrict\" will be added to the system pg_proc table.\nThis is a boolean value which will be TRUE if the function is \"strict\",\nthat is it always returns NULL when any of its inputs are NULL. The\nfunction manager will check this field and skip calling the function when\nit's TRUE and there are NULL inputs. This allows us to remove explicit\nNULL-value tests from many functions that currently need them. A function\nthat is not marked \"strict\" is responsible for checking whether its inputs\nare NULL or not. 
Most builtin functions will be marked \"strict\".\n\nAn optional WITH parameter will be added to CREATE FUNCTION to allow\nspecification of whether user-defined functions are strict or not. I am\ninclined to make the default be \"not strict\", since that seems to be the\nmore useful case for functions expressed in SQL or a PL language, but\nam open to arguments for the other choice.\n\n\nThe new function-manager interface\n----------------------------------\n\nThe core of the new design is revised data structures for representing\nthe result of a function lookup and for representing the parameters\npassed to a specific function invocation. (We want to keep function\nlookup separate from function call, since many parts of the system apply\nthe same function over and over; the lookup overhead should be paid once\nper query, not once per tuple.)\n\n\nWhen a function is looked up in pg_proc, the result is represented as\n\ntypedef struct\n{\n PGFunction fn_addr; /* pointer to function or handler to be called */\n Oid fn_oid; /* OID of function (NOT of handler, if any) */\n short fn_nargs; /* 0..FUNC_MAX_ARGS, or -1 if variable arg count */\n bool fn_strict; /* function is \"strict\" (NULL in => NULL out) */\n void *fn_extra; /* extra space for use by handler */\n} FmgrInfo;\n\nFor an ordinary built-in function, fn_addr is just the address of the C\nroutine that implements the function. Otherwise it is the address of a\nhandler for the class of functions that includes the target function.\nThe handler can use the function OID and perhaps also the fn_extra slot\nto find the specific code to execute. (fn_oid = InvalidOid can be used\nto denote a not-yet-initialized FmgrInfo struct. fn_extra will always\nbe NULL when an FmgrInfo is first filled by the function lookup code, but\na function handler could set it to avoid making repeated lookups of its\nown when the same FmgrInfo is used repeatedly during a query.) 
fn_nargs\nis the number of arguments expected by the function, and fn_strict is\nits strictness flag.\n\nFmgrInfo already exists in the current code, but has fewer fields. This\nchange should be transparent at the source-code level.\n\n\nDuring a call of a function, the following data structure is created\nand passed to the function:\n\ntypedef struct\n{\n FmgrInfo *flinfo; /* ptr to lookup info used for this call */\n Node *context; /* pass info about context of call */\n Node *resultinfo; /* pass or return extra info about result */\n bool isnull; /* function must set true if result is NULL */\n short nargs; /* # arguments actually passed */\n Datum arg[FUNC_MAX_ARGS]; /* Arguments passed to function */\n bool argnull[FUNC_MAX_ARGS]; /* T if arg[i] is actually NULL */\n} FunctionCallInfoData;\ntypedef FunctionCallInfoData* FunctionCallInfo;\n\nflinfo points to the lookup info used to make the call. Ordinary functions\nwill probably ignore this field, but function class handlers will need it\nto find out the OID of the specific function being called.\n\ncontext is NULL for an \"ordinary\" function call, but may point to additional\ninfo when the function is called in certain contexts. (For example, the\ntrigger manager will pass information about the current trigger event here.)\nIf context is used, it should point to some subtype of Node; the particular\nkind of context can then be indicated by the node type field. (A callee\nshould always check the node type before assuming it knows what kind of\ncontext is being passed.) fmgr itself puts no other restrictions on the use\nof this field.\n\nresultinfo is NULL when calling any function from which a simple Datum\nresult is expected. It may point to some subtype of Node if the function\nreturns more than a Datum. 
Like the context field, resultinfo is a hook\nfor expansion; fmgr itself doesn't constrain the use of the field.\n\nnargs, arg[], and argnull[] hold the arguments being passed to the function.\nNotice that all the arguments passed to a function (as well as its result\nvalue) will now uniformly be of type Datum. As discussed below, callers\nand callees should apply the standard Datum-to-and-from-whatever macros\nto convert to the actual argument types of a particular function. The\nvalue in arg[i] is unspecified when argnull[i] is true.\n\nIt is generally the responsibility of the caller to ensure that the\nnumber of arguments passed matches what the callee is expecting; except\nfor callees that take a variable number of arguments, the callee will\ntypically ignore the nargs field and just grab values from arg[].\n\nThe isnull field will be initialized to \"false\" before the call. On\nreturn from the function, isnull is the null flag for the function result:\nif it is true the function's result is NULL, regardless of the actual\nfunction return value. Note that simple \"strict\" functions can ignore\nboth isnull and argnull[], since they won't even get called when there\nare any TRUE values in argnull[].\n\nFunctionCallInfo replaces FmgrValues plus a bunch of ad-hoc parameter\nconventions, global variables (fmgr_pl_finfo and CurrentTriggerData at\nleast), and other uglinesses.\n\n\nCallees, whether they be individual functions or function handlers,\nshall always have this signature:\n\nDatum function (FunctionCallInfo fcinfo);\n\nwhich is represented by the typedef\n\ntypedef Datum (*PGFunction) (FunctionCallInfo fcinfo);\n\nThe function is responsible for setting fcinfo->isnull appropriately\nas well as returning a result represented as a Datum. 
Note that since\nall callees will now have exactly the same signature, and will be called\nthrough a function pointer declared with exactly that signature, we\nshould have no portability or optimization problems.\n\n\nFunction coding conventions\n---------------------------\n\nAs an example, int4 addition goes from old-style\n\nint32\nint4pl(int32 arg1, int32 arg2)\n{\n return arg1 + arg2;\n}\n\nto new-style\n\nDatum\nint4pl(FunctionCallInfo fcinfo)\n{\n /* we assume the function is marked \"strict\", so we can ignore\n * NULL-value handling */\n\n return Int32GetDatum(DatumGetInt32(fcinfo->arg[0]) +\n DatumGetInt32(fcinfo->arg[1]));\n}\n\nThis is, of course, much uglier than the old-style code, but we can\nimprove matters with some well-chosen macros for the boilerplate parts.\nI propose below macros that would make the code look like\n\nDatum\nint4pl(PG_FUNCTION_ARGS)\n{\n int32 arg1 = PG_GETARG_INT32(0);\n int32 arg2 = PG_GETARG_INT32(1);\n\n PG_RETURN_INT32( arg1 + arg2 );\n}\n\nThis is still more code than before, but it's fairly readable, and it's\nalso amenable to machine processing --- for example, we could probably\nwrite a script that scans code like this and extracts argument and result\ntype info for comparison to the pg_proc table.\n\nFor the standard data types float4, float8, and int8, these macros should\nhide the indirection and space allocation involved, so that the function's\ncode is not explicitly aware that these types are pass-by-reference. This\nwill offer a considerable gain in readability, and it also opens up the\nopportunity to make these types be pass-by-value on machines where it's\nfeasible to do so. (For example, on an Alpha it's pretty silly to make int8\nbe pass-by-ref, since Datum is going to be 64 bits anyway. 
float4 could\nbecome pass-by-value on all machines...)\n\nHere are the proposed macros and coding conventions:\n\nThe definition of an fmgr-callable function will always look like\n\nDatum\nfunction_name(PG_FUNCTION_ARGS)\n{\n\t...\n}\n\n\"PG_FUNCTION_ARGS\" just expands to \"FunctionCallInfo fcinfo\". The main\nreason for using this macro is to make it easy for scripts to spot function\ndefinitions. However, if we ever decide to change the calling convention\nagain, it might come in handy to have this macro in place.\n\nA nonstrict function is responsible for checking whether each individual\nargument is null or not, which it can do with PG_ARGISNULL(n) (which is\njust \"fcinfo->argnull[n]\"). It should avoid trying to fetch the value\nof any argument that is null.\n\nBoth strict and nonstrict functions can return NULL, if needed, with\n\tPG_RETURN_NULL();\nwhich expands to\n\t{ fcinfo->isnull = true; return (Datum) 0; }\n\nArgument values are ordinarily fetched using code like\n\tint32\tname = PG_GETARG_INT32(number);\n\nFor float4, float8, and int8, the PG_GETARG macros will hide the pass-by-\nreference nature of the data types; for example PG_GETARG_FLOAT4 expands to\n\t(* (float32) DatumGetPointer(fcinfo->arg[number]))\nand would typically be called like this:\n\tfloat4 arg = PG_GETARG_FLOAT4(0);\nNote that \"float4\" and \"float8\" are the recommended typedefs to use, not\n\"float32data\" and \"float64data\", and the macros are named accordingly.\nBut 64-bit ints should be declared as \"int64\".\n\nNon-null values are returned with a PG_RETURN_XXX macro of the appropriate\ntype. For example, PG_RETURN_INT32 expands to\n\treturn Int32GetDatum(x)\nand PG_RETURN_FLOAT8 expands to\n\t{ float8 *retval = palloc(sizeof(float8));\n\t *retval = (x);\n\t return PointerGetDatum(retval); }\nwhich again hides the pass-by-reference nature of the datatype.\n\nfmgr.h will provide PG_GETARG and PG_RETURN macros for all the basic data\ntypes. 
Modules or header files that define specialized SQL datatypes\n(eg, timestamp) should define appropriate macros for those types, so that\nfunctions manipulating the types can be coded in the standard style.\n\nFor non-primitive data types (particularly variable-length types) it\nprobably won't be very practical to hide the pass-by-reference nature of\nthe data type, so the PG_GETARG and PG_RETURN macros for those types\nprobably won't do more than DatumGetPointer/PointerGetDatum plus the\nappropriate typecast. Functions returning such types will need to\npalloc() their result space explicitly. I recommend naming the GETARG\nand RETURN macros for such types to end in \"_P\", as a reminder that they\nproduce or take a pointer. For example, PG_GETARG_TEXT_P yields \"text *\".\n\nFor TOAST-able data types, the PG_GETARG macro will deliver a de-TOASTed\ndata value. There might be a few cases where the still-toasted value is\nwanted, but I am having a hard time coming up with examples. For the\nmoment I'd say that any such code could use a lower-level macro that is\njust ((struct varlena *) DatumGetPointer(fcinfo->arg[n])).\n\nNote: the above examples assume that arguments will be counted starting at\nzero. We could have the ARG macros subtract one from the argument number,\nso that arguments are counted starting at one. I'm not sure if that would be\nmore or less confusing. Does anyone have a strong feeling either way about\nit?\n\nWhen a function needs to access fcinfo->flinfo or one of the other auxiliary\nfields of FunctionCallInfo, it should just do it. 
I doubt that providing\nsyntactic-sugar macros for these cases is useful.\n\n\nCall-site coding conventions\n----------------------------\n\nThere are many places in the system that call either a specific function\n(for example, the parser invokes \"textin\" by name in places) or a\nparticular group of functions that have a common argument list (for\nexample, the optimizer invokes selectivity estimation functions with\na fixed argument list). These places will need to change, but we should\ntry to avoid making them significantly uglier than before.\n\nPlaces that invoke an arbitrary function with an arbitrary argument list\ncan simply be changed to fill a FunctionCallInfoData structure directly;\nthat'll be no worse and possibly cleaner than what they do now.\n\nWhen invoking a specific built-in function by name, we have generally\njust written something like\n\tresult = textin ( ... args ... )\nwhich will not work after textin() is converted to the new call style.\nI suggest that code like this be converted to use \"helper\" functions\nthat will create and fill in a FunctionCallInfoData struct. For\nexample, if textin is being called with one argument, it'd look\nsomething like\n\tresult = DirectFunctionCall1(textin, PointerGetDatum(argument));\nThese helper routines will have declarations like\n\tDatum DirectFunctionCall2(PGFunction func, Datum arg1, Datum arg2);\nNote it will be the caller's responsibility to convert to and from\nDatum; appropriate conversion macros should be used.\n\nThe DirectFunctionCallN routines will not bother to fill in\nfcinfo->flinfo (indeed cannot, since they have no idea about an OID for\nthe target function); they will just set it NULL. This is unlikely to\nbother any built-in function that could be called this way. Note also\nthat this style of coding cannot pass a NULL input value nor cope with\na NULL result (it couldn't before, either!). 
We can make the helper\nroutines elog an error if they see that the function returns a NULL.\n\n(Note: direct calls like this will have to be changed at the same time\nthat their called routines are changed to the new style. But that will\nstill be a lot less of a constraint than a \"big bang\" conversion.)\n\nWhen invoking a function that has a known argument signature, we have\nusually written either\n\tresult = fmgr(targetfuncOid, ... args ... );\nor\n\tresult = fmgr_ptr(FmgrInfo *finfo, ... args ... );\ndepending on whether an FmgrInfo lookup has been done yet or not.\nThis kind of code can be recast using helper routines, in the same\nstyle as above:\n\tresult = OidFunctionCall1(funcOid, PointerGetDatum(argument));\n\tresult = FunctionCall2(funcCallInfo,\n\t PointerGetDatum(argument),\n\t Int32GetDatum(argument));\nAgain, this style of coding does not allow for expressing NULL inputs\nor receiving a NULL result.\n\nAs with the callee-side situation, I propose adding argument conversion\nmacros that hide the pass-by-reference nature of int8, float4, and\nfloat8, with an eye to making those types relatively painless to convert\nto pass-by-value. For the value-to-pointer direction a little bit of\na trick is needed: these macros will take the address of their argument,\nmeaning that the argument must be a variable not an expression or a\ncompiler error will result. So it's not *completely* transparent,\nbut the notational ugliness is minimal.\n\nThe existing helper functions fmgr(), fmgr_c(), etc will be left in\nplace until all uses of them are gone. Of course their internals will\nhave to change in the first step of implementation, but they can\ncontinue to support the same external appearance.\n\n\nNotes about function handlers\n-----------------------------\n\nHandlers for classes of functions should find life much easier and\ncleaner in this design. 
The OID of the called function is directly\nreachable from the passed parameters; we don't need the global variable\nfmgr_pl_finfo anymore. Also, by modifying fcinfo->flinfo->fn_extra,\nthe handler can cache lookup info to avoid repeat lookups when the same\nfunction is invoked many times. (fn_extra can only be used as a hint,\nsince callers are not required to re-use an FmgrInfo struct.\nBut in performance-critical paths they normally will do so.)\n\nIssue: in what context should a handler allocate memory that it intends\nto use for fn_extra data? The current palloc context when the handler\nis actually called might be considerably shorter-lived than the FmgrInfo\nstruct, which would lead to dangling-pointer problems at the next use\nof the FmgrInfo. Perhaps FmgrInfo should also store a memory context\nidentifier that the handler could use to allocate space of the right\nlifespan. (Having fmgr_info initialize this to CurrentMemoryContext\nshould work in nearly all cases, though a few places might have to\nset it differently.) At the moment I have not done this, since the\nexisting PL handlers only need to set fn_extra to point at long-lived\nstructures (data in their own caches) and don't really care which\ncontext the FmgrInfo is in anyway.\n\nAre there any other things needed by the call handlers for PL/pgsql and\nother languages?\n\nDuring the conversion process, support for old-style builtin functions\nand old-style user-written C functions will be provided by appropriate\nfunction handlers. 
For example, the handler for old-style builtins\nlooks roughly like fmgr_c() used to.\n\n\nSystem table updates\n--------------------\n\nIn the initial phase, two new entries will be added to pg_language\nfor language types \"newinternal\" and \"newC\", corresponding to\nbuiltin and dynamically-loaded functions having the new calling\nconvention.\n\nThere will also be a change to pg_proc to add the new \"proisstrict\"\ncolumn.\n\nThen pg_proc entries will be changed from language code \"internal\" to\n\"newinternal\" piecemeal, as the associated routines are rewritten.\n(This will imply several rounds of forced initdbs as the contents of\npg_proc change, but I think we can live with that.)\n\nThe old language names \"internal\" and \"C\" will continue to refer to\nfunctions with the old calling convention. We should deprecate\nold-style functions because of their portability problems, but the\nsupport for them will only be one small function handler routine,\nso we can leave them in place for as long as necessary.\n\nThe expected calling convention for PL call handlers will need to change\nall-at-once, but fortunately there are not very many of them to fix.\n\n/*-------------------------------------------------------------------------\n *\n * fmgr.h\n * Definitions for the Postgres function manager and function-call\n * interface.\n *\n * This file must be included by all Postgres modules that either define\n * or call fmgr-callable functions.\n *\n *\n * Portions Copyright (c) 1996-2000, PostgreSQL, Inc\n * Portions Copyright (c) 1994, Regents of the University of California\n *\n * $Id: fmgr.h,v 1.12 2000/01/26 05:58:38 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n#ifndef\tFMGR_H\n#define FMGR_H\n\n\n/*\n * All functions that can be called directly by fmgr must have this signature.\n * (Other functions can be called by using a handler that does have this\n * signature.)\n */\n\ntypedef struct 
FunctionCallInfoData *FunctionCallInfo;\n\ntypedef Datum (*PGFunction) (FunctionCallInfo fcinfo);\n\n/*\n * This struct holds the system-catalog information that must be looked up\n * before a function can be called through fmgr. If the same function is\n * to be called multiple times, the lookup need be done only once and the\n * info struct saved for re-use.\n */\ntypedef struct\n{\n PGFunction fn_addr; /* pointer to function or handler to be called */\n Oid fn_oid; /* OID of function (NOT of handler, if any) */\n short fn_nargs; /* 0..FUNC_MAX_ARGS, or -1 if variable arg count */\n bool fn_strict; /* function is \"strict\" (NULL in => NULL out) */\n void *fn_extra; /* extra space for use by handler */\n} FmgrInfo;\n\n/*\n * This struct is the data actually passed to an fmgr-called function.\n */\ntypedef struct FunctionCallInfoData\n{\n FmgrInfo *flinfo;\t\t\t/* ptr to lookup info used for this call */\n struct Node *context;\t\t/* pass info about context of call */\n struct Node *resultinfo;\t/* pass or return extra info about result */\n bool isnull; /* function must set true if result is NULL */\n\tshort\t\tnargs; /* # arguments actually passed */\n Datum arg[FUNC_MAX_ARGS];\t/* Arguments passed to function */\n bool argnull[FUNC_MAX_ARGS];\t/* T if arg[i] is actually NULL */\n} FunctionCallInfoData;\n\n/*\n * This routine fills a FmgrInfo struct, given the OID\n * of the function to be called.\n */\nextern void fmgr_info(Oid functionId, FmgrInfo *finfo);\n\n/*\n * This macro invokes a function given a filled-in FunctionCallInfoData\n * struct. The macro result is the returned Datum --- but note that\n * caller must still check fcinfo->isnull! 
Also, if function is strict,\n * it is caller's responsibility to verify that no null arguments are present\n * before calling.\n */\n#define FunctionCallInvoke(fcinfo) ((* (fcinfo)->flinfo->fn_addr) (fcinfo))\n\n\n/*-------------------------------------------------------------------------\n *\t\tSupport macros to ease writing fmgr-compatible functions\n *\n * A C-coded fmgr-compatible function should be declared as\n *\n *\t\tDatum\n *\t\tfunction_name(PG_FUNCTION_ARGS)\n *\t\t{\n *\t\t\t...\n *\t\t}\n *\n * It should access its arguments using appropriate PG_GETARG_xxx macros\n * and should return its result using PG_RETURN_xxx.\n *\n *-------------------------------------------------------------------------\n */\n\n/* Standard parameter list for fmgr-compatible functions */\n#define PG_FUNCTION_ARGS\tFunctionCallInfo fcinfo\n\n/* If function is not marked \"proisstrict\" in pg_proc, it must check for\n * null arguments using this macro. Do not try to GETARG a null argument!\n */\n#define PG_ARGISNULL(n) (fcinfo->argnull[n])\n\n/* Macros for fetching arguments of standard types */\n\n#define PG_GETARG_INT32(n) DatumGetInt32(fcinfo->arg[n])\n#define PG_GETARG_INT16(n) DatumGetInt16(fcinfo->arg[n])\n#define PG_GETARG_CHAR(n) DatumGetChar(fcinfo->arg[n])\n#define PG_GETARG_BOOL(n) DatumGetBool(fcinfo->arg[n])\n#define PG_GETARG_OID(n) DatumGetObjectId(fcinfo->arg[n])\n#define PG_GETARG_POINTER(n) DatumGetPointer(fcinfo->arg[n])\n/* these macros hide the pass-by-reference-ness of the datatype: */\n#define PG_GETARG_FLOAT4(n) (* DatumGetFloat32(fcinfo->arg[n]))\n#define PG_GETARG_FLOAT8(n) (* DatumGetFloat64(fcinfo->arg[n]))\n#define PG_GETARG_INT64(n) (* (int64 *) PG_GETARG_POINTER(n))\n/* use this if you want the raw, possibly-toasted input datum: */\n#define PG_GETARG_RAW_VARLENA_P(n) ((struct varlena *) PG_GETARG_POINTER(n))\n/* use this if you want the input datum de-toasted: */\n#define PG_GETARG_VARLENA_P(n) \\\n\t(VARATT_IS_EXTENDED(PG_GETARG_RAW_VARLENA_P(n)) 
? \\\n\t (struct varlena *) heap_tuple_untoast_attr((varattrib *) PG_GETARG_RAW_VARLENA_P(n)) : \\\n\t PG_GETARG_RAW_VARLENA_P(n))\n/* GETARG macros for varlena types will typically look like this: */\n#define PG_GETARG_TEXT_P(n) ((text *) PG_GETARG_VARLENA_P(n))\n\n/* To return a NULL do this: */\n#define PG_RETURN_NULL() \\\n\tdo { fcinfo->isnull = true; return (Datum) 0; } while (0)\n\n/* Macros for returning results of standard types */\n\n#define PG_RETURN_INT32(x) return Int32GetDatum(x)\n#define PG_RETURN_INT16(x) return Int16GetDatum(x)\n#define PG_RETURN_CHAR(x) return CharGetDatum(x)\n#define PG_RETURN_BOOL(x) return BoolGetDatum(x)\n#define PG_RETURN_OID(x) return ObjectIdGetDatum(x)\n#define PG_RETURN_POINTER(x) return PointerGetDatum(x)\n/* these macros hide the pass-by-reference-ness of the datatype: */\n#define PG_RETURN_FLOAT4(x) \\\n\tdo { float4 *retval_ = (float4 *) palloc(sizeof(float4)); \\\n\t\t *retval_ = (x); \\\n\t\t return PointerGetDatum(retval_); } while (0)\n#define PG_RETURN_FLOAT8(x) \\\n\tdo { float8 *retval_ = (float8 *) palloc(sizeof(float8)); \\\n\t\t *retval_ = (x); \\\n\t\t return PointerGetDatum(retval_); } while (0)\n#define PG_RETURN_INT64(x) \\\n\tdo { int64 *retval_ = (int64 *) palloc(sizeof(int64)); \\\n\t\t *retval_ = (x); \\\n\t\t return PointerGetDatum(retval_); } while (0)\n/* RETURN macros for other pass-by-ref types will typically look like this: */\n#define PG_RETURN_TEXT_P(x) PG_RETURN_POINTER(x)\n\n\n/*-------------------------------------------------------------------------\n *\t\tSupport routines and macros for callers of fmgr-compatible functions\n *-------------------------------------------------------------------------\n */\n\n/* These are for invocation of a specifically named function with a\n * directly-computed parameter list. 
Note that neither arguments nor result\n * are allowed to be NULL.\n */\nextern Datum DirectFunctionCall1(PGFunction func, Datum arg1);\nextern Datum DirectFunctionCall2(PGFunction func, Datum arg1, Datum arg2);\nextern Datum DirectFunctionCall3(PGFunction func, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t\t Datum arg3);\nextern Datum DirectFunctionCall4(PGFunction func, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t\t Datum arg3, Datum arg4);\nextern Datum DirectFunctionCall5(PGFunction func, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5);\nextern Datum DirectFunctionCall6(PGFunction func, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t\t\t Datum arg6);\nextern Datum DirectFunctionCall7(PGFunction func, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t\t\t Datum arg6, Datum arg7);\nextern Datum DirectFunctionCall8(PGFunction func, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t\t\t Datum arg6, Datum arg7, Datum arg8);\nextern Datum DirectFunctionCall9(PGFunction func, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t\t\t Datum arg6, Datum arg7, Datum arg8,\n\t\t\t\t\t\t\t\t Datum arg9);\n\n/* These are for invocation of a previously-looked-up function with a\n * directly-computed parameter list. 
Note that neither arguments nor result\n * are allowed to be NULL.\n */\nextern Datum FunctionCall1(FmgrInfo *flinfo, Datum arg1);\nextern Datum FunctionCall2(FmgrInfo *flinfo, Datum arg1, Datum arg2);\nextern Datum FunctionCall3(FmgrInfo *flinfo, Datum arg1, Datum arg2,\n\t\t\t\t\t\t Datum arg3);\nextern Datum FunctionCall4(FmgrInfo *flinfo, Datum arg1, Datum arg2,\n\t\t\t\t\t\t Datum arg3, Datum arg4);\nextern Datum FunctionCall5(FmgrInfo *flinfo, Datum arg1, Datum arg2,\n\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5);\nextern Datum FunctionCall6(FmgrInfo *flinfo, Datum arg1, Datum arg2,\n\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t Datum arg6);\nextern Datum FunctionCall7(FmgrInfo *flinfo, Datum arg1, Datum arg2,\n\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t Datum arg6, Datum arg7);\nextern Datum FunctionCall8(FmgrInfo *flinfo, Datum arg1, Datum arg2,\n\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t Datum arg6, Datum arg7, Datum arg8);\nextern Datum FunctionCall9(FmgrInfo *flinfo, Datum arg1, Datum arg2,\n\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t Datum arg6, Datum arg7, Datum arg8,\n\t\t\t\t\t\t Datum arg9);\n\n/* These are for invocation of a function identified by OID with a\n * directly-computed parameter list. Note that neither arguments nor result\n * are allowed to be NULL. These are essentially FunctionLookup() followed\n * by FunctionCallN(). 
If the same function is to be invoked repeatedly,\n * do the FunctionLookup() once and then use FunctionCallN().\n */\nextern Datum OidFunctionCall1(Oid functionId, Datum arg1);\nextern Datum OidFunctionCall2(Oid functionId, Datum arg1, Datum arg2);\nextern Datum OidFunctionCall3(Oid functionId, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t Datum arg3);\nextern Datum OidFunctionCall4(Oid functionId, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t Datum arg3, Datum arg4);\nextern Datum OidFunctionCall5(Oid functionId, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5);\nextern Datum OidFunctionCall6(Oid functionId, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t\t Datum arg6);\nextern Datum OidFunctionCall7(Oid functionId, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t\t Datum arg6, Datum arg7);\nextern Datum OidFunctionCall8(Oid functionId, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t\t Datum arg6, Datum arg7, Datum arg8);\nextern Datum OidFunctionCall9(Oid functionId, Datum arg1, Datum arg2,\n\t\t\t\t\t\t\t Datum arg3, Datum arg4, Datum arg5,\n\t\t\t\t\t\t\t Datum arg6, Datum arg7, Datum arg8,\n\t\t\t\t\t\t\t Datum arg9);\n\n/* The parameters and results of FunctionCallN() and friends should be\n * converted to and from Datum using the XXXGetDatum and DatumGetXXX\n * macros of c.h, plus these additional macros (perhaps these should be\n * moved to c.h?). These macros exist to hide the pass-by-reference\n * nature of a few of our basic datatypes, with the thought that these\n * types might someday become pass-by-value. 
Pass-by-reference is not\n * completely hidden, because you can only hand a variable of the right\n * type to these XXXGetDatum macros; no constants or expressions!\n */\n#define DatumGetFloat4(x) (* ((float4 *) DatumGetPointer(x)))\n#define DatumGetFloat8(x) (* ((float8 *) DatumGetPointer(x)))\n#define DatumGetInt64(x) (* ((int64 *) DatumGetPointer(x)))\n#define Float4GetDatum(x) PointerGetDatum((Pointer) &(x))\n#define Float8GetDatum(x) PointerGetDatum((Pointer) &(x))\n#define Int64GetDatum(x) PointerGetDatum((Pointer) &(x))\n\n\n/*\n * Routines in fmgr.c\n */\nextern Oid fmgr_internal_language(const char *proname);\n\n/*\n * Routines in dfmgr.c\n */\nextern PGFunction fmgr_dynamic(Oid functionId);\nextern PGFunction load_external_function(char *filename, char *funcname);\nextern void load_file(char *filename);\n\n\n/*-------------------------------------------------------------------------\n *\n * !!! OLD INTERFACE !!!\n *\n * All the definitions below here are associated with the old fmgr API.\n * They will go away as soon as we have converted all call points to use\n * the new API. Note that old-style callee functions do not depend on\n * these definitions, so we don't need to have converted all of them before\n * dropping the old API ... just all the old-style call points.\n *\n *-------------------------------------------------------------------------\n */\n\n/* ptr to func returning (char *) */\n#if defined(__mc68000__) && defined(__ELF__)\n/* The m68k SVR4 ABI defines that pointers are returned in %a0 instead of\n * %d0. So if a function pointer is declared to return a pointer, the\n * compiler may look only into %a0, but if the called function was declared\n * to return an integer type, it puts its value only into %d0. So the\n * caller doesn't pick up the correct return value. The solution is to\n * declare the function pointer to return int, so the compiler picks up the\n * return value from %d0. 
(Functions returning pointers put their value\n * *additionally* into %d0 for compatibility.) The price is that there are\n * some warnings about int->pointer conversions...\n */\ntypedef int32 ((*func_ptr) ());\n#else\ntypedef char *((*func_ptr) ());\n#endif\n\ntypedef struct {\n char *data[FUNC_MAX_ARGS];\n} FmgrValues;\n\n/*\n * defined in fmgr.c\n */\nextern char *fmgr(Oid procedureId, ... );\nextern char *fmgr_faddr_link(char *arg0, ...);\n\n/*\n *\tMacros for calling through the result of fmgr_info.\n */\n\n/* We don't make this static so fmgr_faddr() macros can access it */\nextern FmgrInfo *fmgr_pl_finfo;\n\n#define fmgr_faddr(finfo) (fmgr_pl_finfo = (finfo), (func_ptr) fmgr_faddr_link)\n\n#define\tFMGR_PTR2(FINFO, ARG1, ARG2) ((*(fmgr_faddr(FINFO))) (ARG1, ARG2))\n\n/*\n *\tFlags for the builtin oprrest selectivity routines.\n * XXX These do not belong here ... put 'em in some planner/optimizer header.\n */\n#define\tSEL_CONSTANT \t1\t\t/* operator's non-var arg is a constant */\n#define\tSEL_RIGHT\t2\t\t\t/* operator's non-var arg is on the right */\n\n#endif\t/* FMGR_H */",
"msg_date": "Sun, 21 May 2000 19:11:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Last call for comments: fmgr rewrite [LONG]"
},
{
"msg_contents": "Tom Lane wrote:\n\n> typedef struct\n> {\n> FmgrInfo *flinfo; /* ptr to lookup info used for this call */\n> Node *context; /* pass info about context of call */\n> Node *resultinfo; /* pass or return extra info about result */\n> bool isnull; /* function must set true if result is NULL */\n> short nargs; /* # arguments actually passed */\n> Datum arg[FUNC_MAX_ARGS]; /* Arguments passed to function */\n> bool argnull[FUNC_MAX_ARGS]; /* T if arg[i] is actually NULL */\n> } FunctionCallInfoData;\n\nJust wondering what the implications of FUNC_MAX_ARGS are, and whether\nsomething like...\n\nstruct FuncArg \n{\n Datum arg;\n bool argnull;\n};\n\ntypedef struct\n{\n FmgrInfo *flinfo; /* ptr to lookup info used for this call\n*/\n Node *context; /* pass info about context of call */\n Node *resultinfo; /* pass or return extra info about\nresult */\n bool isnull; /* function must set true if result is\nNULL */\n short nargs; /* # arguments actually passed */\n struct FuncArg args[];\n} FunctionCallInfoData;\n\nmight remove an arbitrary argument limit?\n\n> int32\n> int4pl(int32 arg1, int32 arg2)\n> {\n> return arg1 + arg2;\n> }\n> to new-style\n> Datum\n> int4pl(FunctionCallInfo fcinfo)\n> {\n> /* we assume the function is marked \"strict\", so we can ignore\n> * NULL-value handling */\n> \n> return Int32GetDatum(DatumGetInt32(fcinfo->arg[0]) +\n> DatumGetInt32(fcinfo->arg[1]));\n> }\n\n\nWondering if some stub code generator might be appropriate so that\nfunctions can continue to look as readable as before?\n",
"msg_date": "Mon, 22 May 2000 09:54:46 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
},
{
"msg_contents": "> I attach the current draft of my document explaining the upcoming\n> function manager interface changes. This has been modified from the\n> previously circulated version on the basis of comments from Jan and\n> others. There is also a preliminary fmgr.h showing actual code for\n> the proposed interface macros.\n\nFrankly, everything is very quiet. I have no problem branching the CVS\ntree and getting started soon, if people want that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 21 May 2000 20:03:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Just wondering what the implications of FUNC_MAX_ARGS is, and whether\n> something like...\n\n> struct FuncArg \n> {\n> Datum arg;\n> bool argnull;\n> };\n\nI did consider that but it's probably not worth near-doubling the size\nof the struct (think about how that will pack, especially if Datum\nbecomes 8 bytes). The average callee will probably not be looking at\nthe argnull array at all, so it won't have a dependency on the offset to\nargnull in the first place. Furthermore FUNC_MAX_ARGS is not going to\nvanish in the foreseeable future; we have fixed-size arrays in places\nlike pg_proc and there's just not enough reason to go to the pain of\nmaking those variable-size. So the only possible win would be to make\ndynamically loaded functions binary-compatible across installations with\nvarying FUNC_MAX_ARGS values ... and since that'd matter only if they\nlooked at argnull *and* not at any other structure that depends on\nFUNC_MAX_ARGS, it's probably not worth it.\n\n> Wondering if some stub code generator might be appropriate so that\n> functions can can continue to look as readable as before?\n\nEr, did you read to the end of the proposal?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 May 2000 20:16:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG] "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Frankly, everything is very quiet. I have no problem branching the CVS\n> tree and getting started soon, if people want that.\n\nYeah, it seems like we could do a 7.0.1 and make the 7.1 CVS branch\nsooner than the end of the month. Maybe sometime this week?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 May 2000 20:17:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG] "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > Just wondering what the implications of FUNC_MAX_ARGS is, and whether\n> > something like...\n> \n> > struct FuncArg\n> > {\n> > Datum arg;\n> > bool argnull;\n> > };\n> \n> I did consider that but it's probably not worth near-doubling the size\n> of the struct (think about how that will pack, especially if Datum\n> becomes 8 bytes). \n\nBut FUNC_MAX_ARGS is currently 16. 98% of functions are probably 1 or 2\narguments. So your way you always use 144 bytes. With my proposal most\nwill use 16 or 32 bytes because of the variable struct size and you\nwon't have an arbitrary limit of 16 args.\n\n> Furthermore FUNC_MAX_ARGS is not going to\n> vanish in the foreseeable future; we have fixed-size arrays in places\n> like pg_proc and there's just not enough reason to go to the pain of\n> making those variable-size.\n\nWell if anybody ever wanted to do it, not having to re-write every\nfunction in the system would be a nice win. Maybe there are other wins\nwe don't see yet in not having a fixed limit?\n\n> So the only possible win would be to make\n> dynamically loaded functions binary-compatible across installations with\n> varying FUNC_MAX_ARGS values ... and since that'd matter only if they\n> looked at argnull *and* not at any other structure that depends on\n> FUNC_MAX_ARGS, it's probably not worth it.\n\nHmm. Looks like a possible future win to me. Anybody who has a library\nof functions might not have to recompile.\n\n> > Wondering if some stub code generator might be appropriate so that\n> > functions can can continue to look as readable as before?\n> \n> Er, did you read to the end of the proposal?\n\nYep. Did I miss your point?\n",
"msg_date": "Mon, 22 May 2000 11:05:22 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom Lane wrote:\n>> I did consider that but it's probably not worth near-doubling the size\n>> of the struct (think about how that will pack, especially if Datum\n>> becomes 8 bytes). \n\n> But FUNC_MAX_ARGS is currently 16. 98% of functions are probably 1 or 2\n> arguments. So your way you always use 144 bytes. With my proposal most\n> will use 16 or 32 bytes because of the variable struct size and you\n> won't have an arbitrary limit of 16 args.\n\nNo, because we aren't ever going to be dynamically allocating these\nthings; they'll be local variables in the calling function. Typical\ncode looks like this:\n\nstatic Datum\nExecMakeFunctionResult(Node *node, List *arguments, ExprContext *econtext,\n bool *isNull, bool *isDone)\n{\n FunctionCallInfoData fcinfo;\n Datum result;\n\n MemSet(&fcinfo, 0, sizeof(fcinfo));\n\n /* ... fill non-defaulted fields of fcinfo here ... */\n\n result = FunctionCallInvoke(&fcinfo);\n *isNull = fcinfo.isnull;\n return result;\n}\n\nTo take advantage of a variable-length struct we'd need to do a palloc,\nwhich is pointless and slow. The only reason I care about the size of\nthe struct at all is that I don't want that MemSet() to take longer\nthan it has to. (While I don't absolutely have to zero the whole\nstruct, it's simple and clean to do that, and it ensures that unused\nfields will have a predictable value.)\n\nBottom line is that there *will* be a FUNC_MAX_ARGS limit. The only\nquestion is whether there's any point in making the binary-level API\nfor called functions be independent of the exact value of FUNC_MAX_ARGS.\nI kinda doubt it. 
There are a lot of other things that are more likely\nto vary across installations than FUNC_MAX_ARGS; I don't see this as\nbeing the limiting factor for portability.\n\n> Well if anybody ever wanted to do it, not having to re-write every\n> function in the system would be a nice win.\n\nWe already did the legwork of not having to rewrite anything. It's\nonly a config.h twiddle and recompile. I think that's plenty close\nenough...\n\n>>>> Wondering if some stub code generator might be appropriate so that\n>>>> functions can can continue to look as readable as before?\n>> \n>> Er, did you read to the end of the proposal?\n\n> Yep. Did I miss your point?\n\nPossibly, or else I'm missing yours. What would a stub code generator\ndo for us that the proposed GETARG and RETURN macros won't do?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 May 2000 23:28:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG] "
},
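{
"msg_contents": "To make the shape of this convention concrete, here is a minimal, compilable sketch in plain C. The type and macro names imitate the proposal (FUNC_MAX_ARGS, PG_GETARG_INT32, and so on) but are simplified stand-ins invented for illustration, not the actual backend definitions:

```c
#include <string.h>
#include <stdint.h>
#include <stdbool.h>

#define FUNC_MAX_ARGS 16                /* fixed compile-time limit, as discussed */

typedef uintptr_t Datum;                /* simplified stand-in for the real Datum */

typedef struct FunctionCallInfoData
{
    short nargs;                        /* number of arguments actually passed */
    bool  isnull;                       /* callee sets this to return SQL NULL */
    Datum args[FUNC_MAX_ARGS];          /* argument values */
    bool  argnull[FUNC_MAX_ARGS];       /* per-argument null flags */
} FunctionCallInfoData, *FunctionCallInfo;

/* callee-side decoration, in the spirit of the proposed macros */
#define PG_FUNCTION_ARGS    FunctionCallInfo fcinfo
#define PG_GETARG_INT32(n)  ((int32_t) fcinfo->args[n])
#define PG_RETURN_INT32(x)  return (Datum) (x)

/* a new-style function: fetches its args via macros, not C parameters */
static Datum
int4pl_new(PG_FUNCTION_ARGS)
{
    int32_t arg1 = PG_GETARG_INT32(0);
    int32_t arg2 = PG_GETARG_INT32(1);

    PG_RETURN_INT32(arg1 + arg2);
}

/* caller side: the struct is a stack local, zeroed with one memset -- no palloc */
static Datum
call_binary(Datum (*fn) (FunctionCallInfo), Datum a, Datum b, bool *isnull)
{
    FunctionCallInfoData fcinfo;
    Datum result;

    memset(&fcinfo, 0, sizeof(fcinfo));
    fcinfo.nargs = 2;
    fcinfo.args[0] = a;
    fcinfo.args[1] = b;

    result = fn(&fcinfo);
    *isnull = fcinfo.isnull;
    return result;
}
```

Because the whole struct lives on the caller's stack, a variable-length layout would buy nothing here; the only cost of the fixed-size arrays is the memset over sizeof(fcinfo).
"
},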
{
"msg_contents": "Tom Lane wrote:\n\n> No, because we aren't ever going to be dynamically allocating these\n> things; they'll be local variables in the calling function. \n\nFair enough then. Although that being the case, I don't see the big deal\nabout using a few more bytes of stack space which costs absolutely\nnothing, even though the binary compatibility is a small but still real\nadvantage.\n\n> >>>> Wondering if some stub code generator might be appropriate so that\n> >>>> functions can can continue to look as readable as before?\n> >>\n> >> Er, did you read to the end of the proposal?\n> \n> > Yep. Did I miss your point?\n> \n> Possibly, or else I'm missing yours. What would a stub code generator\n> do for us that the proposed GETARG and RETURN macros won't do?\n\nOnly that it might be slightly cleaner code, but you're probably right.\nI just have experience doing this sort of thing and know that manually\ngrabbing each argument can be painful with hundreds of functions.\n",
"msg_date": "Mon, 22 May 2000 13:46:22 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n>> Possibly, or else I'm missing yours. What would a stub code generator\n>> do for us that the proposed GETARG and RETURN macros won't do?\n\n> Only that it might be slightly cleaner code, but you're probably right.\n> I just have experience doing this sort of thing and know that manually\n> grabbing each argument can be painful with hundreds of functions.\n\nThe conversion is going to be a major pain in the rear, no doubt about\nthat :-(. I suspect it may take us more than one release cycle to get\nrid of all the old-style functions in the distribution, and we perhaps\nwill never be able to drop support for old-style dynamically loaded\nfunctions.\n\nOTOH, I also have experience with code preprocessors and they're no fun\neither in an open-source environment. You gotta port the preprocessor\nto everywhere you intend to run, make it robust against a variety of\ncoding styles, etc etc. Don't really want to go there.\n\nOn the third hand, you've got the germ of an idea: maybe a really\nquick-and-dirty script would be worth writing to do some of the basic\nconversion editing. It wouldn't have to be bulletproof because we\nwould go over the results by hand anyway, but it could help...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 00:08:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG] "
},
{
"msg_contents": "> Tom Lane wrote:\n> \n> > No, because we aren't ever going to be dynamically allocating these\n> > things; they'll be local variables in the calling function. \n> \n> Fair enough then. Although that being the case, I don't see the big deal\n> about using a few more bytes of stack space which costs absolutely\n> nothing, even though the binary compatibility is a small but still real\n> advantage.\n\nI like Tom's clean design better. Flexibility for little payback\nusually just messes up clarity of the code.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 May 2000 00:18:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
},
{
"msg_contents": "Tom Lane wrote:\n\n> OTOH, I also have experience with code preprocessors and they're no fun\n> either in an open-source environment. You gotta port the preprocessor\n> to everywhere you intend to run, make it robust against a variety of\n> coding styles, etc etc. Don't really want to go there.\n\nI was thinking of something more along the lines of a Corba idl code\ngenerator, only simpler. Maybe as simple as a file like:\n\nint4plus: INT4, INT4\nint4minus: INT4, INT4\netc...\n\nthat gets generated into some stubs that call the real code...\n\nDatum\nint4pl_stub(PG_FUNCTION_ARGS)\n{\n int32 arg1 = PG_GETARG_INT32(0);\n int32 arg2 = PG_GETARG_INT32(1);\n\n return PG_RETURN_INT32(int4pl(arg1, arg2));\n}\n",
"msg_date": "Mon, 22 May 2000 14:25:28 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
},
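{
"msg_contents": "As a rough illustration of what such a generator could look like, here is a small, self-contained C sketch that turns one spec line into stub source text. It is deliberately simplified (two-argument functions only, with the first argument type reused as the return type), and the spec format and all names are made up for the example:

```c
#include <stdio.h>
#include <string.h>

/*
 * Given a spec line like "int4pl: INT4, INT4", write new-style stub
 * source text into `out`.  Returns the number of characters written,
 * or -1 if the spec line doesn't parse.
 */
static int
emit_stub(const char *spec, char *out, size_t outlen)
{
    char name[64], type1[16], type2[16];

    if (sscanf(spec, "%63[^:]: %15[^,], %15s", name, type1, type2) != 3)
        return -1;

    return snprintf(out, outlen,
                    "Datum\n"
                    "%s_stub(PG_FUNCTION_ARGS)\n"
                    "{\n"
                    "\tint32 arg1 = PG_GETARG_%s(0);\n"
                    "\tint32 arg2 = PG_GETARG_%s(1);\n"
                    "\n"
                    "\treturn PG_RETURN_%s(%s(arg1, arg2));\n"
                    "}\n",
                    name, type1, type2, type1, name);
}
```

A real generator would also need a type table mapping INT4 and friends onto the right GETARG/RETURN macros and C types, but the overall shape would stay this simple.
"
},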
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> >\n> > > No, because we aren't ever going to be dynamically allocating these\n> > > things; they'll be local variables in the calling function.\n> >\n> > Fair enough then. Although that being the case, I don't see the big deal\n> > about using a few more bytes of stack space which costs absolutely\n> > nothing, even though the binary compatibility is a small but still real\n> > advantage.\n> \n> I like Tom's clean design better. Flexibility for little payback\n> usually just messes up clarity of the code.\n\nI tend to think grouping data that belongs together as by definition\n\"clean\". Whenever I'm tempted to have concurrent arrays like this I\nalways pull back because it seems to lead to major pain later. For\nexample, I can see situations where I'd like to pass an argument around\ntogether with it's is-null information...\n\n\nstruct FuncArg \n{\n Datum arg;\n bool argnull;\n};\n\ntypedef struct\n{\n struct FuncArg args[];\n} FunctionCallInfoData;\n\nDatum someFunc(FunctionCallInfo fcinfo)\n{\n\treturn INT32(foo(fcinfo.args[0]) +\n\t bar(fcinfo.args[1], fcinfo.args[2]));\n}\n\nint foo(FuncArg a) {\n if (a.argnull && INT32(a.arg) > 0 ||\n (!a.argnull && INT32(a.arg <= 0)\n return 3;\n else\n return 4;\n}\n\nint bar(FuncArg a, FuncArg b) {\n if (a.argnull || !b.argnull)\n return 0\n else\n return INT32(a.arg) ~ INT32(b.arg);\n}\n",
"msg_date": "Mon, 22 May 2000 14:43:18 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> I was thinking of something more along the lines of a Corba idl code\n> generator, only simpler. Maybe as simple as a file like:\n\n> int4plus: INT4, INT4\n> int4minus: INT4, INT4\n> etc...\n\n> that gets generated into some stubs that call the real code...\n\n> Datum\n> int4pl_stub(PG_FUNCTION_ARGS)\n> {\n> int32 arg1 = PG_GETARG_INT32(0);\n> int32 arg2 = PG_GETARG_INT32(1);\n\n> return PG_RETURN_INT32(int4pl(arg1, arg2));\n> }\n\nOK ... but I don't think we want to leave a useless extra level of\nfunction call in the code forever. What I'm starting to visualize\nis a simple editing script that adds the above decoration to an existing\nfunction definition, and then you go back and do any necessary cleanup\nby hand. There is a lot of cruft that we should be able to rip out of\nthe existing code (checks for NULL arguments that are no longer needed\nif the function is declared strict, manipulation of pass-by-ref args\nand results for float4/float8/int8 datatypes, etc etc) so a hand\nediting pass will surely be needed. But maybe we could mechanize\ncreation of the basic GETARG/RETURN decorations...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 00:46:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG] "
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Whenever I'm tempted to have concurrent arrays like this I always pull\n> back because it seems to lead to major pain later. For example, I can\n> see situations where I'd like to pass an argument around together with\n> it's is-null information...\n\nThat's not an unreasonable point ... although most of the existing code\nthat needs to do that seems to need additional values as well (the\ndatum's type OID, length, pass-by-ref flag are commonly needed).\nSomething close to the Const node type is what you tend to end up with.\nThe fmgr interface is (and should be, IMHO) optimized for the case where\nthe called code knows exactly what it's supposed to get and doesn't need\nthe overhead info. In particular, the vast majority of C-coded\nfunctions in the backend should be marked 'strict' in pg_proc, and will\nthen not need to bother with argnull at all...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 01:15:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG] "
},
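{
"msg_contents": "A sketch of what 'strict' buys the called function: the dispatcher can test argnull once, before the call, so a strict function body never sees a NULL argument at all. Again this is plain, simplified C with invented names, not the real fmgr code:

```c
#include <stdbool.h>
#include <stdint.h>

#define FUNC_MAX_ARGS 16

typedef uintptr_t Datum;

typedef struct FunctionCallInfoData
{
    short nargs;
    bool  isnull;
    Datum args[FUNC_MAX_ARGS];
    bool  argnull[FUNC_MAX_ARGS];
} FunctionCallInfoData, *FunctionCallInfo;

typedef struct FmgrInfo
{
    Datum (*fn) (FunctionCallInfo);
    bool  strict;                   /* from pg_proc's 'strict' flag */
} FmgrInfo;

static int calls_made = 0;          /* instrumentation for the example */

/* a strict function: it may ignore argnull entirely */
static Datum
add_args(FunctionCallInfo fcinfo)
{
    calls_made++;
    return fcinfo->args[0] + fcinfo->args[1];
}

/* dispatcher: strict + any NULL argument => NULL result, function not called */
static Datum
FunctionCallInvoke(FmgrInfo *flinfo, FunctionCallInfo fcinfo)
{
    if (flinfo->strict)
    {
        int i;

        for (i = 0; i < fcinfo->nargs; i++)
        {
            if (fcinfo->argnull[i])
            {
                fcinfo->isnull = true;
                return (Datum) 0;
            }
        }
    }
    return flinfo->fn(fcinfo);
}
```
"
},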
{
"msg_contents": "Tom Lane wrote:\n> \n> ------------------------------------------------------------------------------\n> Proposal for function-manager redesign 21-May-2000\n> --------------------------------------\n> \n> \n> Note that neither the old function manager nor the redesign are intended\n> to handle functions that accept or return sets. Those sorts of functions\n> need to be handled by special querytree structures.\n\nDoes the redesign allow functions that accept/return tuples ?\n\nOn my first reading at least I did not notice it.\n\n-----------\nHannu\n",
"msg_date": "Mon, 22 May 2000 11:16:07 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Chris Bitmead <[email protected]> writes:\n> > Whenever I'm tempted to have concurrent arrays like this I always pull\n> > back because it seems to lead to major pain later. For example, I can\n> > see situations where I'd like to pass an argument around together with\n> > it's is-null information...\n> \n> That's not an unreasonable point ... although most of the existing code\n> that needs to do that seems to need additional values as well (the\n> datum's type OID, length, pass-by-ref flag are commonly needed).\n> Something close to the Const node type is what you tend to end up with.\n> The fmgr interface is (and should be, IMHO) optimized for the case where\n> the called code knows exactly what it's supposed to get and doesn't need\n> the overhead info.\n\nIt may be true for C functions, but functions in higher level languages \noften like to be able to operate on several types of arguments (or at least \nto operate on both NULL and NOT NULL args)\n\n> In particular, the vast majority of C-coded functions in the backend\n> should be marked 'strict' in pg_proc, and will then not need to bother\n> with argnull at all...\n\nBut the main aim of fmgr redesign is imho _not_ to make existing functions \nwork better but to enable a clean way for designing new functions/languages.\n\nI'm probably wrong, but to me it seems that the current proposal solves only \nthe problem with NULLs, and leaves untouched the other problem of arbitrary \nrestrictions on number of arguments (unless argcount > MAX is meant to be \npassed using VARIABLE i.e. -1)\n\n------------------------\nHannu\n",
"msg_date": "Mon, 22 May 2000 11:28:24 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
},
{
"msg_contents": "I just got my hands on the real SQL99 stuff, dated September 1999, and it\ncontains a function creation syntax that is strikingly similar to ours,\nwhich would make it a shame not to at least try to play along. Below is a\nheavily reduced BNF which should give you some idea -- note in particular\nthe NULL call conventions. Download your copy at\n<ftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/>.\n\n\n <schema function> ::=\n CREATE FUNCTION <schema qualified name>\n <SQL parameter declaration list>\n RETURNS <data type>\n [ <routine characteristics>... ]\n [ <dispatch clause> ]\n <routine body>\n\n <dispatch clause> ::= STATIC DISPATCH\t\t/* no idea */\n\n <SQL parameter declaration list> ::=\n <left paren>\n [ <SQL parameter declaration> [ { <comma> <SQL parameter declaration> }... ] ]\n <right paren>\n\n <SQL parameter declaration> ::=\n [ <parameter mode> ] [ <SQL parameter name> ]\n <parameter type>\n [ RESULT ]\n\n <parameter mode> ::= IN | OUT | INOUT\n\t\t/* default is IN */\n\n <routine body> ::=\n <SQL routine body>\n | <external body reference>\n \n <SQL routine body> ::= <SQL procedure statement>\n\t\t/* which means a particular subset of SQL statements */\n \n <external body reference> ::=\n EXTERNAL [ NAME <external routine name> ]\n [ <parameter style clause> ]\n [ <external security clause> ]\n\n <routine characteristic> ::=\n LANGUAGE { ADA | C | COBOL | FORTRAN | MUMPS | PASCAL | PLI | SQL }\n | PARAMETER STYLE { SQL | GENERAL }\n | SPECIFIC <specific name>\t/* apparently to disambiguate overloaded functions */\n | { DETERMINISTIC | NOT DETERMINISTIC }\n | { NO SQL | CONTAINS SQL | READS SQL DATA | MODIFIES SQL DATA }\n | { RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT }\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n | <transform group specification>\n | <dynamic result sets characteristic>\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 22 May 2000 23:58:21 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG]"
}
] |
[
{
"msg_contents": "> Empty pages get appended to a free list, and will be reused \n> on next page allocation. Empty space on pages (from deleted\n> tuples) where the rest of the page isn't empty will get reused\n> the next time the page is visited.\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nIf visited for update... but how can I know with what RECNO I have\nto insert new tuple to store it on half-empty page?\n\n> We do, however, do reverse splits of underfull nodes, so \n> we're aggressive at getting empty pages back on the free list.\n\nYou can't merge two 49% empty pages in one. So, how to reuse\nthis 49%? How will we able to implement feature that good\ndatabases have: one can specify while table creation -\n\"insert new tuples on pages which N% empty\"?\n\nMore of that, you can't just re-use empty space (even if you know\nwhere in the tree there is space for new tuple) - you have to assign\n*right* recno to new tuple. What if 2k tuple on a page was updated\nto 0.2k and no tuple was deleted on this page? You can't re-use\nthis empty-space without tree reorg, what 1. requires additional\nwrites; 2. is not always possible at all.\n...Oh, we can work it out, by using somehing like current TID\n(blkno + itemid) as index key - this will require some changes\nin SDB btree code (new RECNO-like AM)... so, this was just\ncalculation of work required -:)\n\nAnd, while we are on heap subject - using index (RECNO) for heap\nmeans that all our secondary-index scans will performe TWO\nindex scans - first, to find recno in secondary-index, and\nsecond, to find heap tuple using recno (now indices give us\nTID, which is physical address).\n\n> > 2. SDB' btree-s support only one key, but we have multi-key \n> btree-s...\n> \n> This is a misunderstanding. Berkeley DB allows you to use\n> arbitrary data structures as keys. 
You define your own comparison \n> function, which understands your key structure and is capable of\n> doing comparisons between keys.\n\nOh, you're right.\nThough, I'm unhappy with\n\nDB_SET_RANGE\n The DB_SET_RANGE flag is identical to the DB_SET flag, except that\n the key is returned as well as the data item, and, in the case of\n the Btree access method, the returned key/data pair is \n the smallest key greater than or equal to the specified key\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nwe would need differentiate > and >= : it's not good for key > 2 to read\nfrom disk all items with key == 2. But, this could be done.\n\n (as determined by the comparison function), permitting partial key\n matches and range searches. \n\n> You get another benefit from Berkeley DB -- we eliminate the 8K limit\n> on tuple size. For large records, we break them into page-sized\n> chunks for you, and we reassemble them on demand. Neither PostgreSQL\n> nor the user needs to worry about this, it's a service that \n> just works.\n\nBut what I've seen in code - you *always* copy data to memory,\nwhat is not good for both seq scans (when we have to evaluate\nquery qual) and index ones (with range quals). See also\ncomments from Chris Bitmead about the case when some of data\nare not in select target list...\nProbably, we could change DB->(c_)get methods and provide them\nwith two functions (to check qual and to determine what exactly\nshould be returned)...\n\n> A single record or a single key may be up to 4GB in size.\n> \n> > 3. How can we implement gist, rtree AND (multi-key) BTREE \n> > access methods using btree and hash access methods provided by SDB?!\n> \n> You'd build gist and rtree on top of the current buffer manager, much\n> as rtree is currently implemented on top of the lower-level page manager\n> in PostgreSQL.\n\nOh, so we have to implement redo/undo for them. 
Ok.\n\nWAL:\n> I encourage you to think hard about the amount of work that's really\n> required to produce a commercial-grade recovery and transaction system.\n> This stuff is extremely hard to get right -- you need to design, code,\n> and test for very high-concurrency, complex workloads. The log is a\n> new source of contention, and will be a gate to performance. The log\n> is also a new way to consume space endlessly, so you'll want to think\n> about backup and checkpoint support. With Berkeley DB, you get both\n> today. Our backup support permits you to do on-line backups. Backups\n> don't acquire locks and don't force a shutdown.\n\nAs for design and coding, 90% is already done (though, with SDB we could\navoid heap/btree/hash redo/undo implementation). As for stability/testing\n- as I already said, - after rewriting ~50% of system to use SDB,\nnothing in PostgreSQL will be \"well tested\".\n\n> Testing this stuff is tricky. For example, you need to prove \n> that you're able to survive a crash that interrupts the three\n> internal page writes that you do in the btree access method on\n> a page split. \n\nOh, testing of this case is very easy - I'll just stop backend\nusing gdb in critical points and will turn power off -:))\nI've run 2-3 backends under gdb to catch some concurrency-related\nbug in buffer manager - this technique works very well -:)\n\n> All of that said, I'd boil Vadim's message down to this:\n> \n> + With Berkeley DB, you'd need to reimplement multi-version\n> concurrency control, and that's an opportunity to introduce\n> new bugs.\n\nUnfortunately, this must be done *before* we could migrate to\nSDB -:( So, some of us will have to stop PG development and\nswitch to SDB code... which is hard for soul -:)\n\n> + With PostgreSQL, you'll need to implement logging \n> and recovery, and that's an opportunity to introduce new bugs.\n\nBut... we know where in our code set up breakpoints under gdb -:))\n\nOh, well.. 
let's continue to think...\n\nRegards,\n\tVadim\n",
"msg_date": "Sun, 21 May 2000 18:47:03 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Berkeley DB..."
},
{
"msg_contents": "> And, while we are on heap subject - using index (RECNO) for heap\n> means that all our secondary-index scans will performe TWO\n> index scans - first, to find recno in secondary-index, and\n> second, to find heap tuple using recno (now indices give us\n> TID, which is physical address).\n\nYes, that was one of my questions. Why use recno at all? We already\nhave heap access which is very fast. Why switch to SDB which gives us\na recno ordering of heap that doesn't do us any real good, except to\nallow tuple update without changing indexes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 21 May 2000 22:22:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
}
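,
{
"msg_contents": "The difference under discussion can be sketched in a few lines of C: with a physical TID the heap tuple is one direct step away, whereas a RECNO primary key forces a second search after the secondary index has been consulted. The data layout here is invented purely for illustration (bsearch stands in for a btree descent):

```c
#include <stdlib.h>

typedef struct HeapTuple
{
    unsigned recno;     /* Berkeley-DB-style logical record number */
    int      value;
} HeapTuple;

static int
cmp_recno(const void *a, const void *b)
{
    unsigned ra = ((const HeapTuple *) a)->recno;
    unsigned rb = ((const HeapTuple *) b)->recno;

    return (ra > rb) - (ra < rb);
}

/*
 * RECNO heap: a secondary index hands back a recno, and we must run a
 * SECOND search to locate the tuple itself.
 */
static const HeapTuple *
fetch_by_recno(const HeapTuple *heap, size_t n, unsigned recno)
{
    HeapTuple probe = { recno, 0 };

    return bsearch(&probe, heap, n, sizeof(HeapTuple), cmp_recno);
}

/*
 * TID-style heap: the index stores what is effectively a physical
 * address -- one direct step, no second search.
 */
static const HeapTuple *
fetch_by_tid(const HeapTuple *heap, size_t tid)
{
    return &heap[tid];
}
```
"
}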
] |
[
{
"msg_contents": "> > And, while we are on heap subject - using index (RECNO) for heap\n> > means that all our secondary-index scans will performe TWO\n> > index scans - first, to find recno in secondary-index, and\n> > second, to find heap tuple using recno (now indices give us\n> > TID, which is physical address).\n> \n> Yes, that was one of my questions. Why use recno at all? We already\n> have heap access which is very fast. Why switch to SDB which gives us\n> a recno ordering of heap that doesn't do us any real good, except to\n> allow tuple update without changing indexes.\n\nBut if we'll use our heap AM, then we'll have to implement redo/undo\nfor it... no sence to switch to SDB for btree/hash WAL support -:)\n\nVadim\n",
"msg_date": "Sun, 21 May 2000 20:00:01 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Berkeley DB..."
},
{
"msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > And, while we are on heap subject - using index (RECNO) for heap\n> > > means that all our secondary-index scans will performe TWO\n> > > index scans - first, to find recno in secondary-index, and\n> > > second, to find heap tuple using recno (now indices give us\n> > > TID, which is physical address).\n> > \n> > Yes, that was one of my questions. Why use recno at all? We already\n> > have heap access which is very fast. Why switch to SDB which gives us\n> > a recno ordering of heap that doesn't do us any real good, except to\n> > allow tuple update without changing indexes.\n> \n> But if we'll use our heap AM, then we'll have to implement redo/undo\n> for it... no sence to switch to SDB for btree/hash WAL support -:)\n\nYes, SDB would give us redo/undo in heap, and that would make things\neasier. However, if there is the overhead of a double-index lookup when\nusing indexes, it seems like a very high cost.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 21 May 2000 23:11:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
}
] |
[
{
"msg_contents": "> > Yes, that was one of my questions. Why use recno at all? \n> > We already have heap access which is very fast. Why switch\n> > to SDB which gives us a recno ordering of heap that doesn't\n> > do us any real good, except to allow tuple update without\n> > changing indexes.\n> \n> But if we'll use our heap AM, then we'll have to implement redo/undo\n> for it... no sence to switch to SDB for btree/hash WAL support -:)\n\nAlso, I think that our native index logging will require less space\nin log, because of we can do not write *key values* to log!\nIndex tuple insertion will be logged as \"index tuple pointing to\nheap TID was added to page BLKNO at position ITEMID\".\nThe same for index page split...\n\nVadim\n",
"msg_date": "Sun, 21 May 2000 20:09:54 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Berkeley DB..."
}
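,
{
"msg_contents": "To see why skipping key values shrinks the log, compare two hypothetical record layouts. These structs are invented for the size comparison only; they are not actual WAL formats:

```c
#include <stdint.h>

typedef struct ItemPointer      /* heap TID: physical address of the tuple */
{
    uint32_t blkno;
    uint16_t offset;
} ItemPointer;

/* log only where the new index tuple went and which heap TID it points at */
typedef struct XLogIndexInsertTidOnly
{
    uint32_t    blkno;          /* index page the tuple was added to */
    uint16_t    itemid;         /* position within that page */
    ItemPointer heap_tid;
} XLogIndexInsertTidOnly;

/* a generic logger would have to carry the key values along as well */
typedef struct XLogIndexInsertWithKey
{
    uint32_t    blkno;
    uint16_t    itemid;
    ItemPointer heap_tid;
    char        key[128];       /* key bytes, e.g. a long text key */
} XLogIndexInsertWithKey;
```

On a typical platform the TID-only record stays within a couple of machine words, while the with-key record grows with the key.
"
}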
] |
[
{
"msg_contents": "In MySQL you can't update on a join. It's a real pain in a well-factored\ndatabase.\n\n\t-Michael Robinson\n\nP.S. When it comes to ROLAP, though, MySQL kicks PostgreSQL's butt. For that\napplication only, I use MySQL.\n",
"msg_date": "Mon, 22 May 2000 14:39:46 +0800 (+0800)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "A test to add to the crashme test"
},
{
"msg_contents": "Michael Robinson <[email protected]> writes:\n> P.S. When it comes to ROLAP, though, MySQL kicks PostgreSQL's butt. For that\n> application only, I use MySQL.\n\nEr ... \"ROLAP\"? Expound, please.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 02:57:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A test to add to the crashme test "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n>> P.S. When it comes to ROLAP, though, MySQL kicks PostgreSQL's butt. For that\n>> application only, I use MySQL.\n>\n>Er ... \"ROLAP\"? Expound, please.\n\nRelational On-Line Analytical Processing. As opposed to Multidimensional\nOnline Analytical Processing (MOLAP), the other kind of OLAP.\n\nThe basic principle of operation is that you put all your data in a big\nstar (or snowflake) schema, and then pare down your \"cube\" by pre-aggregating\nvarious dimensions of interest into various auxillary tables.\n\nIt works much better than MOLAP for big, sparse, high-dimensional data\n(like, for example, six months of log data from an active e-commerce/content\nwebsite).\n\nMySQL is extremely well suited for it: the data is essentially \"read-only\"\nso transactions, locking, etc., are not an issue, the per-row overhead is\nextremely small (important when you have hundreds of millions of short\nrecords), and the speed, especially with prudent indexing and datatype\nselection, is scorching fast.\n\nJust don't ever put any data in it that you can't reconstruct from scratch.\n\n\t-Michael\n\n",
"msg_date": "Mon, 22 May 2000 15:46:39 +0800 (+0800)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A test to add to the crashme test"
},
{
"msg_contents": "On Mon, May 22, 2000 at 03:46:39PM +0800, Michael Robinson wrote:\n> \n> MySQL is extremely well suited for it: the data is essentially \"read-only\"\n> so transactions, locking, etc., are not an issue, the per-row overhead is\n> extremely small (important when you have hundreds of millions of short\n> records), and the speed, especially with prudent indexing and datatype\n> selection, is scorching fast.\n\nPeople keep claiming that applications that are essentially \"read-only\"\ndon't need transactions. I'll agree in the limit, that truly read only\ndatabases don't, but I think a lot of people might be surprised at how\nlittle writing you need before you get into trouble. \n\nCase in point: Mozilla uses a MySQL db to back up their Bugzilla\nbugtracking system. Very popular site, _lots_ of people reading, not\na lot writing (from a developer's point of view, never enough...) The\nproblem they've seen is that if a reader someone fires off a \"stupid\"\nquery, like one that returns essentially every bug in the system, and\na developer then tries to update the status of a bug, every single\nconcurrent access to the system has to wait for the stupid query to\nfinish. Why? Because the writer attempts to aquire an exclusive lock,\nand blocks, waiting for the stupid query. Everyone else blocks, waiting\nfor the writer's lock.\n\nHow many writer's does it take for this to happen? One. I'd call that\nan \"essentially read-only\" system. A note, this is not a made up,\ntheoretical example. We're talking real world here.\n\nRoss\nP.S. here's the entry in bugzilla about this problem:\n\nhttp://bugzilla.mozilla.org/show_bug.cgi?id=27146\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n\n",
"msg_date": "Mon, 22 May 2000 10:58:41 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A test to add to the crashme test"
},
{
"msg_contents": "Hi,\n\nRoss J. Reedstrom:\n> People keep claiming that applications that are essentially \"read-only\"\n> don't need transactions. I'll agree in the limit, that truly read only\n> databases don't, but I think a lot of people might be surprised at how\n> little writing you need before you get into trouble. \n> [ Mozilla buchtracking example ]\n> How many writer's does it take for this to happen? One. I'd call that\n> an \"essentially read-only\" system. A note, this is not a made up,\n> theoretical example. We're talking real world here.\n> \nRight. But that's not about transactions; that's about concurrent read\nand write access to a table.\n\nPeople using MySQL in real-world situations usually solve this with one\nread/write database for \"normal\" work, and another one for the\nlong-running multi-record \"let's list every bug in the system\" queries.\n\nThe update from one to the other is set to low-priority so that it won't\nlock out any queries (with a timeout).\n\n\nMind you: I'm not saying this is ideal. A system with concurrent\nread/write access would be better. But it has the benefit of giving\nyou a replicated database which you can fall back to, if the primary\nsystem is down for whatever reason.\n\nBesides, the MySQL people are currently busy integrating Berkeley DB\ninto their code. Voila, instant read/write concurrency, and instant\ntransactions. Well, almost. ;-)\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nAcrophobes go down with little persuasion.\n",
"msg_date": "Tue, 23 May 2000 08:40:54 +0200",
"msg_from": "\"Matthias Urlichs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A test to add to the crashme test"
},
{
"msg_contents": "Matthias Urlichs wrote:\n> \n> Besides, the MySQL people are currently busy integrating Berkeley DB\n> into their code. \n\nThen MySQL may become a RDBMS after all ;)\n\n> Voila, instant read/write concurrency, and instant transactions.\n\nBut won't it slow them down ?\n\n-------------\nHannu\n",
"msg_date": "Tue, 23 May 2000 09:51:02 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A test to add to the crashme test"
},
{
"msg_contents": "Hi,\n\nHannu Krosing:\n> > Voila, instant read/write concurrency, and instant transactions.\n> But won't it slow them down ?\n> \nOf course it will. That's why they make the Berkeley tables optional.\n\nTheir idea is that you use the Berkeley stuff for the tables which really\nrequire transactions, HEAP tables for in-memory cache/temp/whatever,\nand the standard MyISAM tables otherwise.\n\nReal-world example: Your customers' account balance really should be\ntransaction safe, and all that. But not their address, or their\nclicktrail through your online shop system.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nIf people think nature is their friend, then they sure don't need an enemy.\n -- Kurt Vonnegut\n",
"msg_date": "Tue, 23 May 2000 09:58:29 +0200",
"msg_from": "\"Matthias Urlichs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A test to add to the crashme test"
}
] |
[
{
"msg_contents": "Just a reminder that there is some CORBA stuff under\nsrc/interfaces/jdbc/example/corba\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:[email protected]]\nSent: Saturday, May 20, 2000 5:08 AM\nTo: Chris\nCc: Tom Lane; Chris Bitmead; [email protected]\nSubject: Re: [HACKERS] OO / fe-be protocol\n\n\n> Ok, I'll go back to reading about Corba and see if I can figure out if\n> it can do the job.\n\nIt can, and it is appropriate.\n\nThe devil is in the details, which include concerns on portability of\nthe ORB among our > 20 platforms, additional levels of complexity for\nthe minimum, small installation (Naming Service, etc etc), and general\nunfamiliarity with CORBA. I'm sure there are other concerns too.\n\nI've got some experience with C++ ORBs (TAO and Mico), but am not\nfamiliar with the C mapping and how clean it may or may not be.\n\nThe \"transform only if necessary\" philosophy of CORBA (that is,\nrecipients are responsible for changing byte order if required, but do\nnot if not) should minimize overhead. And the support for dynamic data\ndefinition and data handling should be a real winner, at least for\ncommunications to outside the server. Inside the server it could help\nus clean up our interfaces, and start thinking about distributing\nportions onto multiple platforms. Should be fun :)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n",
"msg_date": "Mon, 22 May 2000 08:04:43 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: OO / fe-be protocol"
}
] |
[
{
"msg_contents": "Relational online analytical processing. A variant of OLAP systems,\nwhich have their roots in decision support systems (DSSs) and executive\ninformation systems (EISs).\n\nWhy the original poster would use MySQL over PostgreSQL for this is perhaps\nbest left for him to explain. Maybe because of the supposed speed advantage ?\n\n",
"msg_date": "Mon, 22 May 2000 09:37:04 +0200",
"msg_from": "Kaare Rasmussen <[email protected]>",
"msg_from_op": true,
"msg_subject": "rolap"
},
{
"msg_contents": "* Kaare Rasmussen <[email protected]> [000522 01:19] wrote:\n> Relational online analytical processing. A variant of OLAP systems,\n> which have their roots in decision support systems (DSSs) and executive\n> information systems (EISs).\n> \n> Why the original poster will use MySQL over PostgreSQL for this is maybe\n> best he explains. Maybe because of the supposed speed advantage ?\n\nReasoning which goes into the bit bucket when he has to run his\nnext isamchk.\n\n-Alfred\n",
"msg_date": "Mon, 22 May 2000 02:15:06 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rolap"
}
] |
[
{
"msg_contents": "Hi,\n\nI have statically linked the DynaLoader with plperl.so and modified plperl.c\nso that plperl functions do not execute within the 'safe\ncompartment'.\n\nplperl.c has been changed as follows:\n\n\textern void boot_Opcode _((CV * cv));\n\textern void boot_SPI _((CV * cv));\n\textern void boot_DynaLoader _((CV* cv)); <------ Added\n\n\tstatic void plperl_init_shared_libs(void) {\n char *file = __FILE__;\n dXSUB_SYS; <------- Added\n\n newXS(\"Opcode::bootstrap\", boot_Opcode, file);\n newXS(\"SPI::bootstrap\", boot_SPI, file);\n newXS(\"DynaLoader::boot_DynaLoader\", boot_DynaLoader, file);\n<--------added\n\t}\n\n\nI then compile plperl.c as follows:\n\n\tgcc -o blib/arch/auto/plperl/plperl.so -shared -L/usr/local/lib plperl.o\n\teloglvl.o SPI.o /usr/lib/perl5/5.00503/i586-linux/auto/Opcode/Opcode.so\n\t-L/usr/lib/perl5/5.00503/i586-linux/CORE -lperl `perl -MExtUtils::Embed -e\n\tccopts -e ldopts`\n\n\n\nWhen I try to import a perl module that relies on a C module I get the\nfollowing error message:\n\n\tERROR: creation of function failed : Can't load\n\t'/usr/lib/perl5/site_perl/5.005/i586-linux/auto/Pg/Pg.so' for module\n\tPg: /usr/lib/perl5/site_perl/5.005/i586-linux/auto/Pg/Pg.so: undefined\n\tsymbol: PL_sv_undef at /usr/lib/perl5/5.00503/i586-linux/DynaLoader.pm\n\tline 169.\n\n\tat /caseTracking/packages/SP.pm line 7 BEGIN failed--compilation aborted\n\tat /caseTracking/packages/SP.pm line 7.\n\tBEGIN failed--compilation aborted at (eval 1) line 2.\n\nAny ideas?\n\nRegards\nRagnar\n",
"msg_date": "Mon, 22 May 2000 11:37:32 +0100",
"msg_from": "\"Ragnar Hakonarson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "plperl"
}
] |
[
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> On Sun, 21 May 2000, Hannu Krosing wrote:\n> \n> > > Now a question in particular. I understand that this syntax might\n> > > give me some rows (a, b, c) and others (a, b, c, d, e) and perhaps others\n> > > (a, b, c, f, g, h). Now what would be the syntax for getting only (b, c),\n> > > (b, c, e) and (b, c, h)?\n> >\n> > What would you need that for ?\n> \n> Gee, lemme think. Why do we have SELECT a, b, c at all? Why doesn't\n> everyone just use SELECT * and filter the stuff themselves? What if I want\n> to apply a function on `h' but not on the others? Don't tell me there's no\n> syntax for that, only for getting all columns. (And the fact that your\n> proposed syntaxes seem completely ad hoc and home-brewed doesn't make me\n> feel better.)\n\nOh, now I understand what you're asking. Yes, I did suggest that you be\nallowed to specify sub-class attributes that don't occur in the\nsuper-class. The syntax would be the obvious - either attrname, or\nclass.attrname.\n\nAs far as syntax is concerned I don't think I'm wedded to anything in\nparticular, so suggestions are welcome.\n",
"msg_date": "Mon, 22 May 2000 21:30:46 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OO Patch"
}
] |
[
{
"msg_contents": "\nFor those interested, I've extended the patch to support the SQL3 UNDER\nsyntax...\n\nftp://ftp.tech.com.au/pub/diff.x\n",
"msg_date": "Mon, 22 May 2000 23:17:20 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL3 UNDER"
},
{
"msg_contents": "\nMr. Bitmead,\n\nI read your patch (but have not applied it, or tested it exactly). It says\nthat UNDER supports multiple inheritance as a Postgres language extension. How\nabout instead, implementing UNDER exactly to the SQL3 spec? For multiple\ninheritance, why not just suggest the use of INHERITS, which is\nalready a Postgres language extension for multiple inheritance. UNDER covers\nthe tree/hierarchy situation, so make it only to SQL3 standards. \nINHERIT fits the clone/copy/inherits situation that, like I've\nsaid before, is like starting a new tree. You could constrain INHERITS to only\naccept tables that are maximal supertables. INHERITS probably should not\naccept a subtable that is within an UNDER tree - that would add much\ncomplication if allowed. I know that I said that maybe INHERITS should strive\nto become UNDER in a prior message. But now I can see exactly how they\ncomplement each other: each provides a different type of inheritance scheme.\n\n(sorry if I seem to keep saying the same things over and over ... again..)\nIn UNDER, the tables are connected into a tree like one big table with\nextensions to it (the subtables). The subtables are dependent on the\nsupertables so that the supertables cannot be dropped until you drop all its\nsubtables first. My impression of the subtable-supertable relationship is\nthat, again, the subtable stores only the subrow it declares. The subrow it\ninherited from its supertable is just a link. When inserting, the subtable\nstores the subrow it declared, then it accesses its supertable\nand inserts the inherited subrow. The subtable is incomplete without accessing\nits supertable for the inherited subrow. When you add a column to a superclass,\nthere should be no need to also add a column to the subclass. The subclass\ndoesn't store it, but should just begin accepting the new attribute of its\nsuperclass. 
Isn't this how the SQL3 spec works (I'll have to read it more)?\n\nINHERITS should accept only maximal supertables or tables that are not\npart of an UNDER tree. The child table could be independent of the parent\ntable. The parent table could be dropped without consequence to the child\ntable since it inherits a copy of all its parent's attributes. While the\nparent exists, it would maintain information about all of its children\ntables so that it can select down into them (in common attributes only). The\nchild maintains no linkage to the parent - its inserts etc only affect itself. \nThis contrasts with UNDER, where a subtable does maintain a link to its\nsupertable in order to cascade inserts etc to the supertable for the subrow it\ninherited.\n\nI hope my comments are helpful! :)\n\nOn Mon, 22 May 2000, Chris Bitmead wrote:\n> For those interested I've extended the patch to support the SQL3 UNDER\n> syntax...\n> \n> ftp://ftp.tech.com.au/pub/diff.x\n-- \nRobert B. Easter\[email protected]\n",
"msg_date": "Tue, 23 May 2000 02:16:52 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> \"Robert B. Easter\" wrote:\n> > This contrasts with UNDER, where a subtable does maintain a link to its\n> > supertable in order to cascade inserts etc to the supertable for the subrow it\n> > inherited.\n> \n> What you have just described for the behaviour of UNDER (as opposed to\n> implementation) is just how INHERITS works now. i.e. you can't destroy\n> the parent unless there are no children.\n\nWe could supply DROP TABLE parent CASCADE; syntax to destroy both parent and\nall inherited tables.\n\n---------------------\nHannu\n",
"msg_date": "Tue, 23 May 2000 09:40:42 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n> I read your patch (but have not applied it, or tested it exactly). It says\n> that UNDER supports multiple inheritance as a Postgres language extension. How\n> about instead, implementing UNDER exactly to the SQL3 spec? For multiple\n> inheritance, why not just suggest the use of INHERITS, which is\n> already a Postgres language extension for multiple inheritance. UNDER covers\n> the tree/hierarchy situation, so make it only to SQL3 standards.\n> INHERIT fits the clone/copy/inherits situation that, like I've\n> said before, is like starting a new tree. You could constrain INHERITS to only\n> accept tables that are maximal supertables. INHERITS probably should not\n> accept a subtable that is within an UNDER tree - that would add much\n> complication if allowed. I know that I said that maybe INHERITS should strive\n> to become UNDER in a prior message. But now I can see exactly how they\n> complement each other: each provides a different type of inheritance scheme.\n\nAs far as I'm concerned, current postgres INHERIT, is exactly the same\nsemantics as UNDER (apart from multiple inheritance). That being the\ncase, the only point of retaining INHERIT is legacy. Encouraging use of\na completely different syntax just for multiple inheritance seems\nfoolish.\n\nIf you think the semantics are different provide a specific example\n(including SQL) of how you think their behaviour is different - that is\nhow you think UNDER should work differently to current INHERIT.\n\n> (sorry if I seem to keep saying the same things over and over ... again..)\n> In UNDER, the tables are connected into a tree like one big table with\n> extensions to it (the subtables). The subtables are dependent on the\n> supertables so that the supertables cannot be dropped until you drop all its\n> subtables first. My impression of the subtable-supertable relationship is\n> that, again, the subtable stores only the subrow it declares. 
The subrow it\n> inherited from its supertable is just a link. When inserting, the subtable\n> stores the subrow it declared, then it accesses its supertable\n> and inserts the inherited subrow. The subtable is incomplete without accessing\n> its supertable for the inherited subrow. When you add a column to a superclass,\n> there should be no need to also add a column to the subclass. The subclass\n> doesn't store it, but should just begin accepting the new attribute of its\n> superclass. Isn't this how the SQL3 spec works (I'll have to read it more)?\n\nThat kinda sounds like how SQL3 describes the model. But it doesn't mean\nwe have to implement it that way to provide the same behaviour. And in\nfact we don't, and I don't think we should either.\n\n> INHERITS should accept only maximal supertables or tables that are not\n> part of an UNDER tree. The child table could be independent of the parent\n> table. The parent table could be dropped without consequence to the child\n> table since it inherits a copy of all its parent's attributes. While the\n> parent exists, it would maintain information about all of its children\n> tables so that it can select down into them (in common attributes only). The\n> child maintains no linkage to the parent - its inserts etc only affect itself.\n> This contrasts with UNDER, where a subtable does maintain a link to its\n> supertable in order to cascade inserts etc to the supertable for the subrow it\n> inherited.\n\nWhat you have just described for the behaviour of UNDER (as opposed to\nimplementation) is just how INHERITS works now. i.e. you can't destroy\nthe parent unless there are no children. While I think the ability to\ndestroy a parent would be a good feature (for evolving a schema for\nexample), it hardly amounts to a whole new model. The time to decide you\nwant the ability to destroy a parent, is when you've decided to destroy\na parent, not when you created a child. 
As far as I'm concerned, the\nmore weird and wonderful ways to evolve the schema without destroying\ndata the better.\n\nIn general, I think you're concentrating too hard on implementation as\nopposed to semantics. Try to crystalise the semantics first, then\nimplementation can be chosen on performance.\n",
"msg_date": "Tue, 23 May 2000 17:03:10 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "On Tue, 23 May 2000, Chris Bitmead wrote:\n> If you think the semantics are different provide a specific example\n> (including SQL) of how you think their behaviour is different - that is\n> how you think UNDER should work differently to current INHERIT.\n> \n\nI'll try to provide examples later. For now, did you see the gif attachments\non a earlier message of mine? The UNDER and CLONES/INHERITS gif pictures\nprovide a graphical view of what I mean. UNDER creates tree hierarchy down\nvertically, while INHERITS supports multiple inheritance in a lateral\ndirection. The UNDER trees can be under any table that is part of an INHERITS\nrelationship. UNDER and INHERITS work at different levels sorta. A subtable\nin an UNDER hierarchy can't be in an INHERITS clause because it is logically\njust part of its maximal supertable. In other words, INHERITS can provide a\nrelationship between different whole trees created by UNDER, by way of a\nmaximal supertable being inherited by another maximal supertable with its own\nUNDER tree. Make any sense? :-)\n\n\n-- \nRobert B. Easter\[email protected]\n",
"msg_date": "Tue, 23 May 2000 03:37:48 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "\n> I'll try to provide examples later. For now, did you see the gif \n> attachments on a earlier message of mine? \n\nI didn't before, but I do now.\n\n> The UNDER and CLONES/INHERITS gif pictures\n> provide a graphical view of what I mean. UNDER creates tree hierarchy \n> down vertically, while INHERITS supports multiple inheritance in a \n> lateral direction. The UNDER trees can be under any table that is part \n> of an INHERITS relationship. UNDER and INHERITS work at different \n> levels sorta. A subtable in an UNDER hierarchy can't be in an INHERITS > clause because it is logically just part of its maximal supertable. In \n> other words, INHERITS can provide a relationship between different \n> whole trees created by UNDER, by way of a maximal supertable being \n> inherited by another maximal supertable with its own\n> UNDER tree. Make any sense? :-)\n\nI'm afraid not. Show me the (SQL) code :-).\n",
"msg_date": "Tue, 23 May 2000 19:16:56 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "On Tue, 23 May 2000, Chris Bitmead wrote:\n> \n> > I'll try to provide examples later. For now, did you see the gif \n> > attachments on a earlier message of mine? \n> \n> I didn't before, but I do now.\n> \n> > The UNDER and CLONES/INHERITS gif pictures\n> > provide a graphical view of what I mean. UNDER creates tree hierarchy \n> > down vertically, while INHERITS supports multiple inheritance in a \n> > lateral direction. The UNDER trees can be under any table that is part \n> > of an INHERITS relationship. UNDER and INHERITS work at different \n> > levels sorta. A subtable in an UNDER hierarchy can't be in an INHERITS > clause because it is logically just part of its maximal supertable. In \n> > other words, INHERITS can provide a relationship between different \n> > whole trees created by UNDER, by way of a maximal supertable being \n> > inherited by another maximal supertable with its own\n> > UNDER tree. Make any sense? :-)\n> \n> I'm afraid not. Show me the (SQL) code :-).\n\n=======\nTree 1\n=======\nCREATE TABLE maxsuper1 (\n\tms1_id\t\tINTEGER PRIMARY KEY,\n\t...\n);\n\nCREATE TABLE sub1a (\n\tname\t\tVARCHAR(50);\n) UNDER maxsuper1; -- maxsuper1.ms1_id is PRIMARY KEY\n\n\n=======\nTree 2\n=======\nCREATE TABLE maxsuper2 (\n\tms2_id\t\tINTEGER PRIMARY KEY\n\t...\n);\n\nCREATE TABLE sub2a (\n\tname\t\tVARCHAR(50);\n\t...\n) UNDER maxsuper2; \n\n=====================================\nTree 3 is visible to Tree 1 and Tree 2 via INHERIT\nTree 1 (maxsuper1) and Tree 2 (maxsuper2) can see\ntheir own trees, AND Tree 3.\n=====================================\nCREATE TABLE maxsuper3 (\n\t-- inherited composite PRIMARY KEY (ms1_id, ms2_id)\n\t-- I think this might be the right thing to do, though this example is\n\t\tnot the best. Consider a TABLE row and a TABLE\n\t\tcol. TABLE cell could INHERIT (row,col). The\n\t\tinherited primary key (row_id, col_id) determines a cell.\n\t\tThis is also rather simple. 
It forces people who are going to\n\t\tuse multiple inheritance to really think about how the\n\t\tPRIMARY KEYs are chosen and when a composite\n\t\tdoesn't make sense, then they should probably not\n\t\tbe inherited together anyway.\n \t...\n) INHERITS (maxsuper1, maxsuper2); -- optional parens.\n\nCREATE TABLE sub3a (\n\tname\t\tVARCHAR(50);\n\t...\n) UNDER maxsuper3;\n\n========================================================\nExample SELECTs\n========================================================\nSELECT * FROM maxsuper1;\nReturns all rows, including into UNDER tree sub1a ...\nThis form will select through all UNDER related subtables.\n\nSELECT * FROM maxsuper1*;\nReturns all rows, including into UNDER tree sub1a and into child tree\nmaxsuper3 etc. If any subtables are parents of children in an INHERITS\nrelationship, then the select also continues through those INHERITS also,\ndescending into children's UNDER subtables and INHERIT children if any.\nThis form will select through all UNDER related subtables AND all INHERITED\nrelated children.\n\nSELECT * FROM ONLY maxsuper1;\nReturns only rows in maxsuper1, does NOT go into UNDER tree nor INHERIT\nrelated tree maxsuper3 ... maxsuper1 itself ONLY is selected.\nThis form will select from ONLY the specified table - INHERIT and UNDER related\nchildren and subtables are ignored.\n\nSELECT * FROM ONLY maxsuper1*;\nReturns only rows in maxsuper1 and INHERIT children, but does not get rows\nfrom any UNDER trees of maxsuper1 or its children.\nThis form will select through all INHERIT related children of the specified\ntable - all UNDER related tables are ignored.\n\n=============================\nSome Rules\n=============================\n1.\nUNDER and INHERIT can be used in the same CREATE TABLE, but with the following\nrestrictions:\n\na.\nIf C is UNDER A and INHERITS (B,...), then no table of (B,...) 
is UNDER A.\n\nb.\nIf C is UNDER B and INHERITS (A,...), then B INHERITS from no table of (A,...).\n\nBoth of these conditions prevent a situation where C tries to obtain the\nsame attributes two different ways. In other words, A and B must not be\nrelated by INHERIT or UNDER.\n\nYes, I'm saying that the following syntax is possible:\nCREATE TABLE subtable1b2 (\n\t...\n) UNDER maxsuper1 INHERITS(maxsuper2)\nThe inherited PRIMARY KEYs form a composite primary key.\n\n2.\nIf a column is added to a parent_table or supertable, the column add must\ncascade to the child_table(s) and subtable(s). If the column add does not\ncascade, then SELECT * FROM parent* and SELECT * FROM supertable, will not\nwork right. When adding a column to a supertable, any subtable that is a parent\ntable to children via INHERIT, has to cascade the new column to its children,\nwhich may also in turn cascade the column add further.\n\n3.\nA supertable cannot be deleted until all its subtables are deleted first, or\nsome syntax is used to cascade the delete (as suggested by Hannu Krosing).\n\n4.\nA parent table in an INHERIT relationship may be deleted without consequence to\nits children.\n\n5.\nIn the case of clashing same-name attributes in multiple inheritance from\nUNDER combined with INHERIT or just INHERIT, the CREATE TABLE fails until\nuse of ALTER TABLE RENAME COLUMN corrects the problem. Attribute rename will\nhave to cascade through child and subtables.\n\n ==================================================\n\nWell, enough for now. I hope somebody sees where I'm going here. In previous\nmessages I've said that it should not be allowed to inherit from a subtable. \nMy rules above now allow for that. The combination of UNDER and INHERIT allows\nfor quite a bit of flexibility if enough rules and details are sorted out.\n\nComments?\n\n-- \nRobert B. Easter\[email protected]\n",
"msg_date": "Tue, 23 May 2000 05:48:05 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "\nWell, you've laid out a whole lot of rules here. I understand what those\nrules are, but I don't see the logical purpose for having such a set of\nrules.\n\nIt appears you've got two separate inheritance mechanisms that interact\nin strange ways. Which inheritance scheme that gets activated depends on\nwhether you use tablename or tablename*. Why not invent a few more\ninheritance mechanisms, then you can have tablename% and tablename&,\nthen there can be some more rules for how they interact? I don't\nunderstand why you want to have these kinds of semantics. Does it have\nprecedent in some programming language?\n\n\"Robert B. Easter\" wrote:\n> \n> On Tue, 23 May 2000, Chris Bitmead wrote:\n> > \n> > > I'll try to provide examples later. For now, did you see the gif\n> > > attachments on a earlier message of mine?\n> >\n> > I didn't before, but I do now.\n> >\n> > > The UNDER and CLONES/INHERITS gif pictures\n> > > provide a graphical view of what I mean. UNDER creates tree hierarchy\n> > > down vertically, while INHERITS supports multiple inheritance in a\n> > > lateral direction. The UNDER trees can be under any table that is part\n> > > of an INHERITS relationship. UNDER and INHERITS work at different\n> > > levels sorta. A subtable in an UNDER hierarchy can't be in an INHERITS > clause because it is logically just part of its maximal supertable. In\n> > > other words, INHERITS can provide a relationship between different\n> > > whole trees created by UNDER, by way of a maximal supertable being\n> > > inherited by another maximal supertable with its own\n> > > UNDER tree. Make any sense? :-)\n> >\n> > I'm afraid not. 
Show me the (SQL) code :-).\n> \n> =======\n> Tree 1\n> =======\n> CREATE TABLE maxsuper1 (\n> ms1_id INTEGER PRIMARY KEY,\n> ...\n> );\n> \n> CREATE TABLE sub1a (\n> name VARCHAR(50);\n> ) UNDER maxsuper1; -- maxsuper1.ms1_id is PRIMARY KEY\n> \n> =======\n> Tree 2\n> =======\n> CREATE TABLE maxsuper2 (\n> ms2_id INTEGER PRIMARY KEY\n> ...\n> );\n> \n> CREATE TABLE sub2a (\n> name VARCHAR(50);\n> ...\n> ) UNDER maxsuper2;\n> \n> =====================================\n> Tree 3 is visible to Tree 1 and Tree 2 via INHERIT\n> Tree 1 (maxsuper1) and Tree 2 (maxsuper2) can see\n> their own trees, AND Tree 3.\n> =====================================\n> CREATE TABLE maxsuper3 (\n> -- inherited composite PRIMARY KEY (ms1_id, ms2_id)\n> -- I think this might be the right thing to do, though this example is\n> not the best. Consider a TABLE row and a TABLE\n> col. TABLE cell could INHERIT (row,col). The\n> inherited primary key (row_id, col_id) determines a cell.\n> This is also rather simple. It forces people who are going to\n> use multiple inheritance to really think about how the\n> PRIMARY KEYs are chosen and when a composite\n> doesn't make sense, then they should probably not\n> be inherited together anyway.\n> ...\n> ) INHERITS (maxsuper1, maxsuper2); -- optional parens.\n> \n> CREATE TABLE sub3a (\n> name VARCHAR(50);\n> ...\n> ) UNDER maxsuper3;\n> \n> ========================================================\n> Example SELECTs\n> ========================================================\n> SELECT * FROM maxsuper1;\n> Returns all rows, including into UNDER tree sub1a ...\n> This form will select though all UNDER related subtables.\n> \n> SELECT * FROM maxsuper1*;\n> Returns all rows, including into UNDER tree sub1a and into child tree\n> maxsuper3 etc. 
If any subtables are parents of children in an INHERITS\n> relationship, then the select also continues through those INHERITS also,\n> descending into childs UNDER subtables and INHERIT children if any.\n> This form will select through all UNDER related subtables AND all INHERITED\n> related children.\n> \n> SELECT * FROM ONLY maxsuper1;\n> Returns only rows in maxsuper1, does NOT go into UNDER tree nor INHERIT\n> related tree maxsuper3 ... maxsuper1 itself ONLY is selected.\n> This form will select from ONLY the specified table - INHERIT and UNDER related\n> children and subtables are ignored.\n> \n> SELECT * FROM ONLY maxsuper1*;\n> Returns only rows in maxsuper1 and INHERIT children, but does not get rows\n> from any UNDER trees of maxsuper1 or its children.\n> This form will select through all INHERIT related children of the specified\n> table - all UNDER related tables are ignored.\n> \n> =============================\n> Some Rules\n> =============================\n> 1.\n> UNDER and INHERIT can be used in the same CREATE TABLE, but with the following\n> restrictions:\n> \n> a.\n> If C is UNDER A and INHERITS (B,...), then no table of (B,...) is UNDER A.\n> \n> b.\n> If C is UNDER B and INHERITS (A,...), then B INHERITS from no table of (A,...).\n> \n> Both of these conditions prevent a situation where C tries to obtain the\n> same attributes two different ways. In other words, A and B must not be\n> related by INHERIT or UNDER.\n> \n> Yes, I'm saying that the following syntax is possible:\n> CREATE TABLE subtable1b2 (\n> ...\n> ) UNDER maxsuper1 INHERITS(maxsuper2)\n> The inherited PRIMARY KEYs form a composite primary key.\n> \n> 2.\n> If a column is added to a parent_table or supertable, the column add must\n> cascade to the child_table(s) and subtable(s). If the column add does not\n> cascade, then SELECT * FROM parent* and SELECT * FROM supertable, will not\n> work right. 
When adding a column to a supertable, any subtable that is a parent\n> table to children via INHERIT, has to cascade the new column to its children,\n> which may also in turn cascade the column add further.\n> \n> 3.\n> A supertable cannot be deleted until all its subtables are deleted first, or\n> some syntax is used to cascade the delete (as suggested by Hannu Krosing).\n> \n> 4.\n> A parent table in an INHERIT relationship may be deleted without consequence to\n> its children.\n> \n> 5.\n> In the case of clashing same-name attributes in multiple inheritance from\n> UNDER combined with INHERIT or just INHERIT, the CREATE TABLE fails until\n> use of ALTER TABLE RENAME COLUMN corrects the problem. Attribute rename will\n> have to cascade through child and subtables.\n> \n> ==================================================\n> \n> Well, enough for now. I hope somebody sees where I'm going here. In previous\n> messages I've said that it should not be allowed to inherit from a subtable.\n> My rules above now allow for that. The combination of UNDER and INHERIT allows\n> for quite a bit of flexibility if enough rules and details are sorted out.\n> \n> Comments?\n> \n> --\n> Robert B. Easter\n> [email protected]\n",
"msg_date": "Tue, 23 May 2000 22:15:25 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "\nMaybe it would help if you have two examples. One that only uses UNDER,\nand one that only uses INHERITS, and explain how one or the other can\nwork differently.\n\n\n\"Robert B. Easter\" wrote:\n> \n> On Tue, 23 May 2000, Chris Bitmead wrote:\n> > \n> > > I'll try to provide examples later. For now, did you see the gif\n> > > attachments on a earlier message of mine?\n> >\n> > I didn't before, but I do now.\n> >\n> > > The UNDER and CLONES/INHERITS gif pictures\n> > > provide a graphical view of what I mean. UNDER creates tree hierarchy\n> > > down vertically, while INHERITS supports multiple inheritance in a\n> > > lateral direction. The UNDER trees can be under any table that is part\n> > > of an INHERITS relationship. UNDER and INHERITS work at different\n> > > levels sorta. A subtable in an UNDER hierarchy can't be in an INHERITS > clause because it is logically just part of its maximal supertable. In\n> > > other words, INHERITS can provide a relationship between different\n> > > whole trees created by UNDER, by way of a maximal supertable being\n> > > inherited by another maximal supertable with its own\n> > > UNDER tree. Make any sense? :-)\n> >\n> > I'm afraid not. 
Show me the (SQL) code :-).\n> \n> =======\n> Tree 1\n> =======\n> CREATE TABLE maxsuper1 (\n> ms1_id INTEGER PRIMARY KEY,\n> ...\n> );\n> \n> CREATE TABLE sub1a (\n> name VARCHAR(50);\n> ) UNDER maxsuper1; -- maxsuper1.ms1_id is PRIMARY KEY\n> \n> =======\n> Tree 2\n> =======\n> CREATE TABLE maxsuper2 (\n> ms2_id INTEGER PRIMARY KEY\n> ...\n> );\n> \n> CREATE TABLE sub2a (\n> name VARCHAR(50);\n> ...\n> ) UNDER maxsuper2;\n> \n> =====================================\n> Tree 3 is visible to Tree 1 and Tree 2 via INHERIT\n> Tree 1 (maxsuper1) and Tree 2 (maxsuper2) can see\n> their own trees, AND Tree 3.\n> =====================================\n> CREATE TABLE maxsuper3 (\n> -- inherited composite PRIMARY KEY (ms1_id, ms2_id)\n> -- I think this might be the right thing to do, though this example is\n> not the best. Consider a TABLE row and a TABLE\n> col. TABLE cell could INHERIT (row,col). The\n> inherited primary key (row_id, col_id) determines a cell.\n> This is also rather simple. It forces people who are going to\n> use multiple inheritance to really think about how the\n> PRIMARY KEYs are chosen and when a composite\n> doesn't make sense, then they should probably not\n> be inherited together anyway.\n> ...\n> ) INHERITS (maxsuper1, maxsuper2); -- optional parens.\n> \n> CREATE TABLE sub3a (\n> name VARCHAR(50);\n> ...\n> ) UNDER maxsuper3;\n> \n> ========================================================\n> Example SELECTs\n> ========================================================\n> SELECT * FROM maxsuper1;\n> Returns all rows, including into UNDER tree sub1a ...\n> This form will select though all UNDER related subtables.\n> \n> SELECT * FROM maxsuper1*;\n> Returns all rows, including into UNDER tree sub1a and into child tree\n> maxsuper3 etc. 
If any subtables are parents of children in an INHERITS\n> relationship, then the select also continues through those INHERITS also,\n> descending into childs UNDER subtables and INHERIT children if any.\n> This form will select through all UNDER related subtables AND all INHERITED\n> related children.\n> \n> SELECT * FROM ONLY maxsuper1;\n> Returns only rows in maxsuper1, does NOT go into UNDER tree nor INHERIT\n> related tree maxsuper3 ... maxsuper1 itself ONLY is selected.\n> This form will select from ONLY the specified table - INHERIT and UNDER related\n> children and subtables are ignored.\n> \n> SELECT * FROM ONLY maxsuper1*;\n> Returns only rows in maxsuper1 and INHERIT children, but does not get rows\n> from any UNDER trees of maxsuper1 or its children.\n> This form will select through all INHERIT related children of the specified\n> table - all UNDER related tables are ignored.\n> \n> =============================\n> Some Rules\n> =============================\n> 1.\n> UNDER and INHERIT can be used in the same CREATE TABLE, but with the following\n> restrictions:\n> \n> a.\n> If C is UNDER A and INHERITS (B,...), then no table of (B,...) is UNDER A.\n> \n> b.\n> If C is UNDER B and INHERITS (A,...), then B INHERITS from no table of (A,...).\n> \n> Both of these conditions prevent a situation where C tries to obtain the\n> same attributes two different ways. In other words, A and B must not be\n> related by INHERIT or UNDER.\n> \n> Yes, I'm saying that the following syntax is possible:\n> CREATE TABLE subtable1b2 (\n> ...\n> ) UNDER maxsuper1 INHERITS(maxsuper2)\n> The inherited PRIMARY KEYs form a composite primary key.\n> \n> 2.\n> If a column is added to a parent_table or supertable, the column add must\n> cascade to the child_table(s) and subtable(s). If the column add does not\n> cascade, then SELECT * FROM parent* and SELECT * FROM supertable, will not\n> work right. 
When adding a column to a supertable, any subtable that is a parent\n> table to children via INHERIT, has to cascade the new column to its children,\n> which may also in turn cascade the column add further.\n> \n> 3.\n> A supertable cannot be deleted until all its subtables are deleted first, or\n> some syntax is used to cascade the delete (as suggested by Hannu Krosing).\n> \n> 4.\n> A parent table in an INHERIT relationship may be deleted without consequence to\n> its children.\n> \n> 5.\n> In the case of clashing same-name attributes in multiple inheritance from\n> UNDER combined with INHERIT or just INHERIT, the CREATE TABLE fails until\n> use of ALTER TABLE RENAME COLUMN corrects the problem. Attribute rename will\n> have to cascade through child and subtables.\n> \n> ==================================================\n> \n> Well, enough for now. I hope somebody sees where I'm going here. In previous\n> messages I've said that it should not be allowed to inherit from a subtable.\n> My rules above now allow for that. The combination of UNDER and INHERIT allows\n> for quite a bit of flexibility if enough rules and details are sorted out.\n> \n> Comments?\n> \n> --\n> Robert B. Easter\n> [email protected]\n",
"msg_date": "Tue, 23 May 2000 22:28:51 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "On Tue, 23 May 2000, Chris Bitmead wrote:\n> Well, you've laid out a whole lot of rules here. I understand what those\n> rules are, but I don't see the logical purpose for having such a set of\n> rules.\n> \n> It appears you've got two separate inheritance mechanisms that interact\n> in strange ways. Which inheritance scheme gets activated depends on\n> whether you use tablename or tablename*. Why not invent a few more\n> inheritance mechanisms, then you can have tablename% and tablename&,\n> then there can be some more rules for how they interact? I don't\n> understand why you want to have these kinds of semantics. Does it have\n> precedent in some programming language?\n\nA database is capable of more flexibility than a programming language with\nregard to how it can store objects. A database is not constrained by\nhardcoded runtime and compilation dependencies like objects in a programming\nlanguage. Changing the data structure of a program means reprogramming and then\nrestarting the program. If made right, a database can evolve its classes\nwithout ever going offline. I think there are some differences, and so I don't\nsee programming language precedents being so relevant.\n\nI'm just proposing things to see if we don't overlook some possibilities.\nUnder my ideas here, UNDER can be implemented more like the spec (maybe\nexactly). INHERIT can pick up the Postgres extensions until a standard\ncovers it too.\n",
"msg_date": "Tue, 23 May 2000 08:45:23 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "On Tue, 23 May 2000, Chris Bitmead wrote:\n> Maybe it would help if you have two examples. One that only uses UNDER,\n> and one that only uses INHERITS, and explain how one or the other can\n> work differently.\n\nWhich one (or both) you use depends on the relationship the two entities\nhave. If you need multiple inheritance, your choice is clear: INHERITS. UNDER\nwill not do multiple inheritance.\nUNDER is the choice when the idea is of EXTENDing a class into more\nspecific types of subclasses. INHERIT is the choice when the idea is like\nparent and child or olddesign and newdesign where olddesign may disappear\nwithout any problem.\n\nWhat follows are some rough examples. There could be some errors. I'd like to\nsee someone else's examples too. I know there are possibilities for very good\nexamples.\n\n\nCREATE TABLE powersource (\n);\nCREATE TABLE nuclearpowersource (\n) UNDER powersource;\nCREATE TABLE fissionpowersource (\n) UNDER nuclearpowersource;\nCREATE TABLE fusionpowersource (\n) UNDER nuclearpowersource;\n\nCREATE TABLE machine (\n);\nCREATE TABLE poweredmachine (\n) INHERITS(powersource) UNDER machine;\n\nCREATE TABLE wheel (\n);\nCREATE TABLE tire (\n) UNDER wheel;\nCREATE TABLE knobbedtire (\n) UNDER tire;\nCREATE TABLE smoothtire (\n) UNDER tire;\n\n\nCREATE TABLE transportmode (\n);\nCREATE TABLE wheeltransport (\n) INHERITS(tire) UNDER transportmode;\nCREATE TABLE foottransport (\n) UNDER transportmode;\n\nCREATE TABLE engine (\n) INHERITS(poweredmachine);\nCREATE TABLE jetengine (\n) UNDER engine;\nCREATE TABLE pistonengine (\n) UNDER engine;\nCREATE TABLE electricengine (\n) UNDER engine;\n\nCREATE TABLE lifeform (\n\tspecies\t\tINTEGER PRIMARY KEY,\n\tbrain\t\tINTEGER\n);\nCREATE TABLE human (\n) UNDER lifeform;\n\n\nCREATE TABLE autotransportmachine (\n) INHERITS (transportmode) UNDER poweredmachine;\n\nCREATE TABLE cyborg (\n) INHERITS(autotransportmachine) UNDER human;\n\nCREATE TABLE entity (\n) INHERITS 
(cyborg);\n\n============================================\n\n> \n> \n> \"Robert B. Easter\" wrote:\n> > \n> > On Tue, 23 May 2000, Chris Bitmead wrote:\n> > > \n> > > > I'll try to provide examples later. For now, did you see the gif\n> > > > attachments on a earlier message of mine?\n> > >\n> > > I didn't before, but I do now.\n> > >\n> > > > The UNDER and CLONES/INHERITS gif pictures\n> > > > provide a graphical view of what I mean. UNDER creates tree hierarchy\n> > > > down vertically, while INHERITS supports multiple inheritance in a\n> > > > lateral direction. The UNDER trees can be under any table that is part\n> > > > of an INHERITS relationship. UNDER and INHERITS work at different\n> > > > levels sorta. A subtable in an UNDER hierarchy can't be in an INHERITS > clause because it is logically just part of its maximal supertable. In\n> > > > other words, INHERITS can provide a relationship between different\n> > > > whole trees created by UNDER, by way of a maximal supertable being\n> > > > inherited by another maximal supertable with its own\n> > > > UNDER tree. Make any sense? :-)\n> > >\n> > > I'm afraid not. 
Show me the (SQL) code :-).\n> > \n> > =======\n> > Tree 1\n> > =======\n> > CREATE TABLE maxsuper1 (\n> > ms1_id INTEGER PRIMARY KEY,\n> > ...\n> > );\n> > \n> > CREATE TABLE sub1a (\n> > name VARCHAR(50);\n> > ) UNDER maxsuper1; -- maxsuper1.ms1_id is PRIMARY KEY\n> > \n> > =======\n> > Tree 2\n> > =======\n> > CREATE TABLE maxsuper2 (\n> > ms2_id INTEGER PRIMARY KEY\n> > ...\n> > );\n> > \n> > CREATE TABLE sub2a (\n> > name VARCHAR(50);\n> > ...\n> > ) UNDER maxsuper2;\n> > \n> > =====================================\n> > Tree 3 is visible to Tree 1 and Tree 2 via INHERIT\n> > Tree 1 (maxsuper1) and Tree 2 (maxsuper2) can see\n> > their own trees, AND Tree 3.\n> > =====================================\n> > CREATE TABLE maxsuper3 (\n> > -- inherited composite PRIMARY KEY (ms1_id, ms2_id)\n> > -- I think this might be the right thing to do, though this example is\n> > not the best. Consider a TABLE row and a TABLE\n> > col. TABLE cell could INHERIT (row,col). The\n> > inherited primary key (row_id, col_id) determines a cell.\n> > This is also rather simple. It forces people who are going to\n> > use multiple inheritance to really think about how the\n> > PRIMARY KEYs are chosen and when a composite\n> > doesn't make sense, then they should probably not\n> > be inherited together anyway.\n> > ...\n> > ) INHERITS (maxsuper1, maxsuper2); -- optional parens.\n> > \n> > CREATE TABLE sub3a (\n> > name VARCHAR(50);\n> > ...\n> > ) UNDER maxsuper3;\n> > \n> > ========================================================\n> > Example SELECTs\n> > ========================================================\n> > SELECT * FROM maxsuper1;\n> > Returns all rows, including into UNDER tree sub1a ...\n> > This form will select though all UNDER related subtables.\n> > \n> > SELECT * FROM maxsuper1*;\n> > Returns all rows, including into UNDER tree sub1a and into child tree\n> > maxsuper3 etc. 
If any subtables are parents of children in an INHERITS\n> > relationship, then the select also continues through those INHERITS also,\n> > descending into childs UNDER subtables and INHERIT children if any.\n> > This form will select through all UNDER related subtables AND all INHERITED\n> > related children.\n> > \n> > SELECT * FROM ONLY maxsuper1;\n> > Returns only rows in maxsuper1, does NOT go into UNDER tree nor INHERIT\n> > related tree maxsuper3 ... maxsuper1 itself ONLY is selected.\n> > This form will select from ONLY the specified table - INHERIT and UNDER related\n> > children and subtables are ignored.\n> > \n> > SELECT * FROM ONLY maxsuper1*;\n> > Returns only rows in maxsuper1 and INHERIT children, but does not get rows\n> > from any UNDER trees of maxsuper1 or its children.\n> > This form will select through all INHERIT related children of the specified\n> > table - all UNDER related tables are ignored.\n> > \n> > =============================\n> > Some Rules\n> > =============================\n> > 1.\n> > UNDER and INHERIT can be used in the same CREATE TABLE, but with the following\n> > restrictions:\n> > \n> > a.\n> > If C is UNDER A and INHERITS (B,...), then no table of (B,...) is UNDER A.\n> > \n> > b.\n> > If C is UNDER B and INHERITS (A,...), then B INHERITS from no table of (A,...).\n> > \n> > Both of these conditions prevent a situation where C tries to obtain the\n> > same attributes two different ways. In other words, A and B must not be\n> > related by INHERIT or UNDER.\n> > \n> > Yes, I'm saying that the following syntax is possible:\n> > CREATE TABLE subtable1b2 (\n> > ...\n> > ) UNDER maxsuper1 INHERITS(maxsuper2)\n> > The inherited PRIMARY KEYs form a composite primary key.\n> > \n> > 2.\n> > If a column is added to a parent_table or supertable, the column add must\n> > cascade to the child_table(s) and subtable(s). 
If the column add does not\n> > cascade, then SELECT * FROM parent* and SELECT * FROM supertable, will not\n> > work right. When adding a column to a supertable, any subtable that is a parent\n> > table to children via INHERIT, has to cascade the new column to its children,\n> > which may also in turn cascade the column add further.\n> > \n> > 3.\n> > A supertable cannot be deleted until all its subtables are deleted first, or\n> > some syntax is used to cascade the delete (as suggested by Hannu Krosing).\n> > \n> > 4.\n> > A parent table in an INHERIT relationship may be deleted without consequence to\n> > its children.\n> > \n> > 5.\n> > In the case of clashing same-name attributes in multiple inheritance from\n> > UNDER combined with INHERIT or just INHERIT, the CREATE TABLE fails until\n> > use of ALTER TABLE RENAME COLUMN corrects the problem. Attribute rename will\n> > have to cascade through child and subtables.\n> > \n> > ==================================================\n> > \n> > Well, enough for now. I hope somebody sees where I'm going here. In previous\n> > messages I've said that it should not be allowed to inherit from a subtable.\n> > My rules above now allow for that. The combination of UNDER and INHERIT allows\n> > for quite a bit of flexibility if enough rules and details are sorted out.\n> > \n> > Comments?\n> > \n> > --\n> > Robert B. Easter\n> > [email protected]\n-- \nRobert B. Easter\[email protected]\n",
"msg_date": "Tue, 23 May 2000 09:01:52 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n\n> A database is capable of more flexibility than a programming language \n> with regard to how it can store objects. A database is not constrained \n> by hardcoded runtime and compilation dependencies like objects in a \n> programming language. Changing the data structure of a program means \n> reprogramming then restarting the program. \n\nWell, my favoured language is lisp which can actually change its\nstructures, even its code and polymorphic rules at runtime.\n\n> If made right, a database can evolve its classes without ever going \n> offline. I think there are some differences and so I don't\n> see programming language precedents being so relevant.\n\nOk, programming languages aren't a precedent. Is there another database\nas precedent? Give me something to work with here.\n\n> I'm just proposing things to see if we don't overlook some \n> possibilities. Under my ideas here, UNDER can be implemented more like \n> the spec (maybe exactly). INHERIT can pick up the Postgres extensions \n> until a standard covers it too.\n\nIt sounds to me you're worried about the implementation rather than the\nspec. IF someone were to bother implementing that layout it should\nprobably just be an option - not affecting semantics. CREATE TABLE\nfoo(...) UNDER bar LAYOUT IS HIERARCHICAL or LAYOUT IS SINGULAR. That\nwould complicate the code a lot though. Personally I think if it was\nimplemented the way the spec implies it would create an extra join for\nevery inheritance declaration. Avoiding that is the whole reason to have\nan object database. If you don't care about another join for every\ninheritance you may as well use a pure relational database with a mapper\nlibrary like persistence because you're not gaining a whole lot. On the\nother hand with the current implementation (which is pretty much how\nevery ODBMS and ORDBMS I've ever seen works), there is very little\ndownside. 
If you implement a single index that indexes subclasses then\nboth index scans and sequential scans will be pretty near optimal with\nno joins. Compare against who knows how many joins if you split it up.\nThe only minor downside is you maybe lift a little more data off the\ndisk IF you happen to be doing a projection of super-class attributes.\nBut an ODMG interface would hardly ever do that anyway.\n",
"msg_date": "Tue, 23 May 2000 23:51:11 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n> \n> On Tue, 23 May 2000, Chris Bitmead wrote:\n> > Maybe it would help if you have two examples. One that only uses UNDER,\n> > and one that only uses INHERITS, and explain how one or the other can\n> > work differently.\n> \n> Which one (or both) that you use depends on the relationship the two entities\n> have. If you need multiple inheritance, your choice is clear: INHERITS. UNDER\n> will not do multiple inheritance.\n> UNDER is the choice when the idea is of EXTENDing a class into more\n> specific types of subclasses. INHERIT is the choice when the idea is like\n> parent and child or olddesign and newdesign where olddesign may disappear\n> without any problem.\n> \n> What follows are some rough examples. There could be some errors. I'd like to\n> see someone elses examples too. I know there are possibilities for very good\n> examples.\n> \n> CREATE TABLE powersource (\n> );\n> CREATE TABLE nuclearpowersource (\n> ) UNDER powersource;\n> CREATE fissionpowersource (\n> ) UNDER nuclearpowersource;\n> CREATE fusionpowersource (\n> ) UNDER nuclearpowersource;\n\nThis is what INHERITS currently is meant for.\n\n> CREATE TABLE machine (\n> );\n> CREATE TABLE poweredmachine (\n> ) INHERITS(powersource) UNDER machine ;\n\nWhy not just\n\n CREATE TABLE poweredmachine (\n machine_powersource powersource\n ) UNDER machine ;\n\nThis should probably allow to insert any powersource as machine_powersource.\n\n-------\nHannu\n",
"msg_date": "Tue, 23 May 2000 19:14:08 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n> \n> On Tue, 23 May 2000, Chris Bitmead wrote:\n> > Maybe it would help if you have two examples. One that only uses UNDER,\n> > and one that only uses INHERITS, and explain how one or the other can\n> > work differently.\n\nYes but how does a pure UNDER example actually work different to a pure\nINHERITS example? You've created various tables below (combining INHERIT\nand UNDER unfortunately), but how will the INHERITS hierarchies and\nUNDER hierarchies actually work differently in practice?\n\n> \n> Which one (or both) that you use depends on the relationship the two entities\n> have. If you need multiple inheritance, your choice is clear: INHERITS. UNDER\n> will not do multiple inheritance.\n> UNDER is the choice when the idea is of EXTENDing a class into more\n> specific types of subclasses. INHERIT is the choice when the idea is like\n> parent and child or olddesign and newdesign where olddesign may disappear\n> without any problem.\n> \n> What follows are some rough examples. There could be some errors. I'd like to\n> see someone elses examples too. 
I know there are possibilities for very good\n> examples.\n> \n> CREATE TABLE powersource (\n> );\n> CREATE TABLE nuclearpowersource (\n> ) UNDER powersource;\n> CREATE fissionpowersource (\n> ) UNDER nuclearpowersource;\n> CREATE fusionpowersource (\n> ) UNDER nuclearpowersource;\n> \n> CREATE TABLE machine (\n> );\n> CREATE TABLE poweredmachine (\n> ) INHERITS(powersource) UNDER machine ;\n> \n> CREATE TABLE wheel (\n> );\n> CREATE TABLE tire (\n> ) UNDER wheel;\n> CREATE TABLE knobbedtire (\n> ) UNDER tire;\n> CREATE TABLE smoothtire (\n> ) UNDER tire;\n> \n> CREATE TABLE transportmode (\n> );\n> CREATE TABLE wheeltransport (\n> ) INHERITS(tire) UNDER transportmode\n> CREATE TABLE foottransport (\n> ) UNDER transportmode;\n> \n> CREATE TABLE engine (\n> ) INHERITS(poweredmachine);\n> CREATE TABLE jetengine (\n> ) UNDER engine;\n> CREATE TABLE PISTONENGINE (\n> ) UNDER engine;\n> CREATE TABLE electricengine (\n> ) UNDER engine;\n> \n> CREATE TABLE lifeform (\n> species INTEGER PRIMARY KEY,\n> brain INTEGER\n> );\n> CREATE TABLE human (\n> ) UNDER lifeform;\n> \n> CREATE TABLE autotransportmachine (\n> ) INHERITS (transportmode) UNDER poweredmachine\n> \n> CREATE TABLE cyborg (\n> ) INHERITS(autotransportmachine) UNDER human;\n> \n> CREATE TABLE entity (\n> ) INHERITS (cyborg);\n> \n> ============================================\n> \n> >\n> >\n> > \"Robert B. Easter\" wrote:\n> > >\n> > > On Tue, 23 May 2000, Chris Bitmead wrote:\n> > > > \n> > > > > I'll try to provide examples later. For now, did you see the gif\n> > > > > attachments on a earlier message of mine?\n> > > >\n> > > > I didn't before, but I do now.\n> > > >\n> > > > > The UNDER and CLONES/INHERITS gif pictures\n> > > > > provide a graphical view of what I mean. UNDER creates tree hierarchy\n> > > > > down vertically, while INHERITS supports multiple inheritance in a\n> > > > > lateral direction. The UNDER trees can be under any table that is part\n> > > > > of an INHERITS relationship. 
UNDER and INHERITS work at different\n> > > > > levels sorta. A subtable in an UNDER hierarchy can't be in an INHERITS > clause because it is logically just part of its maximal supertable. In\n> > > > > other words, INHERITS can provide a relationship between different\n> > > > > whole trees created by UNDER, by way of a maximal supertable being\n> > > > > inherited by another maximal supertable with its own\n> > > > > UNDER tree. Make any sense? :-)\n> > > >\n> > > > I'm afraid not. Show me the (SQL) code :-).\n> > >\n> > > =======\n> > > Tree 1\n> > > =======\n> > > CREATE TABLE maxsuper1 (\n> > > ms1_id INTEGER PRIMARY KEY,\n> > > ...\n> > > );\n> > >\n> > > CREATE TABLE sub1a (\n> > > name VARCHAR(50);\n> > > ) UNDER maxsuper1; -- maxsuper1.ms1_id is PRIMARY KEY\n> > >\n> > > =======\n> > > Tree 2\n> > > =======\n> > > CREATE TABLE maxsuper2 (\n> > > ms2_id INTEGER PRIMARY KEY\n> > > ...\n> > > );\n> > >\n> > > CREATE TABLE sub2a (\n> > > name VARCHAR(50);\n> > > ...\n> > > ) UNDER maxsuper2;\n> > >\n> > > =====================================\n> > > Tree 3 is visible to Tree 1 and Tree 2 via INHERIT\n> > > Tree 1 (maxsuper1) and Tree 2 (maxsuper2) can see\n> > > their own trees, AND Tree 3.\n> > > =====================================\n> > > CREATE TABLE maxsuper3 (\n> > > -- inherited composite PRIMARY KEY (ms1_id, ms2_id)\n> > > -- I think this might be the right thing to do, though this example is\n> > > not the best. Consider a TABLE row and a TABLE\n> > > col. TABLE cell could INHERIT (row,col). The\n> > > inherited primary key (row_id, col_id) determines a cell.\n> > > This is also rather simple. 
It forces people who are going to\n> > > use multiple inheritance to really think about how the\n> > > PRIMARY KEYs are chosen and when a composite\n> > > doesn't make sense, then they should probably not\n> > > be inherited together anyway.\n> > > ...\n> > > ) INHERITS (maxsuper1, maxsuper2); -- optional parens.\n> > >\n> > > CREATE TABLE sub3a (\n> > > name VARCHAR(50);\n> > > ...\n> > > ) UNDER maxsuper3;\n> > >\n> > > ========================================================\n> > > Example SELECTs\n> > > ========================================================\n> > > SELECT * FROM maxsuper1;\n> > > Returns all rows, including into UNDER tree sub1a ...\n> > > This form will select though all UNDER related subtables.\n> > >\n> > > SELECT * FROM maxsuper1*;\n> > > Returns all rows, including into UNDER tree sub1a and into child tree\n> > > maxsuper3 etc. If any subtables are parents of children in an INHERITS\n> > > relationship, then the select also continues through those INHERITS also,\n> > > descending into childs UNDER subtables and INHERIT children if any.\n> > > This form will select through all UNDER related subtables AND all INHERITED\n> > > related children.\n> > >\n> > > SELECT * FROM ONLY maxsuper1;\n> > > Returns only rows in maxsuper1, does NOT go into UNDER tree nor INHERIT\n> > > related tree maxsuper3 ... 
maxsuper1 itself ONLY is selected.\n> > > This form will select from ONLY the specified table - INHERIT and UNDER related\n> > > children and subtables are ignored.\n> > >\n> > > SELECT * FROM ONLY maxsuper1*;\n> > > Returns only rows in maxsuper1 and INHERIT children, but does not get rows\n> > > from any UNDER trees of maxsuper1 or its children.\n> > > This form will select through all INHERIT related children of the specified\n> > > table - all UNDER related tables are ignored.\n> > >\n> > > =============================\n> > > Some Rules\n> > > =============================\n> > > 1.\n> > > UNDER and INHERIT can be used in the same CREATE TABLE, but with the following\n> > > restrictions:\n> > >\n> > > a.\n> > > If C is UNDER A and INHERITS (B,...), then no table of (B,...) is UNDER A.\n> > >\n> > > b.\n> > > If C is UNDER B and INHERITS (A,...), then B INHERITS from no table of (A,...).\n> > >\n> > > Both of these conditions prevent a situation where C tries to obtain the\n> > > same attributes two different ways. In other words, A and B must not be\n> > > related by INHERIT or UNDER.\n> > >\n> > > Yes, I'm saying that the following syntax is possible:\n> > > CREATE TABLE subtable1b2 (\n> > > ...\n> > > ) UNDER maxsuper1 INHERITS(maxsuper2)\n> > > The inherited PRIMARY KEYs form a composite primary key.\n> > >\n> > > 2.\n> > > If a column is added to a parent_table or supertable, the column add must\n> > > cascade to the child_table(s) and subtable(s). If the column add does not\n> > > cascade, then SELECT * FROM parent* and SELECT * FROM supertable, will not\n> > > work right. 
When adding a column to a supertable, any subtable that is a parent\n> > > table to children via INHERIT, has to cascade the new column to its children,\n> > > which may also in turn cascade the column add further.\n> > >\n> > > 3.\n> > > A supertable cannot be deleted until all its subtables are deleted first, or\n> > > some syntax is used to cascade the delete (as suggested by Hannu Krosing).\n> > >\n> > > 4.\n> > > A parent table in an INHERIT relationship may be deleted without consequence to\n> > > its children.\n> > >\n> > > 5.\n> > > In the case of clashing same-name attributes in multiple inheritance from\n> > > UNDER combined with INHERIT or just INHERIT, the CREATE TABLE fails until\n> > > use of ALTER TABLE RENAME COLUMN corrects the problem. Attribute rename will\n> > > have to cascade through child and subtables.\n> > >\n> > > ==================================================\n> > >\n> > > Well, enough for now. I hope somebody sees where I'm going here. In previous\n> > > messages I've said that it should not be allowed to inherit from a subtable.\n> > > My rules above now allow for that. The combination of UNDER and INHERIT allows\n> > > for quite a bit of flexibility if enough rules and details are sorted out.\n> > >\n> > > Comments?\n> > >\n> > > --\n> > > Robert B. Easter\n> > > [email protected]\n> --\n> Robert B. Easter\n> [email protected]\n",
"msg_date": "Wed, 24 May 2000 09:42:38 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
}
] |
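The four SELECT forms Robert describes (plain name, `name*`, `ONLY name`, and `ONLY name*`) boil down to two edge types that a scan may or may not follow. The sketch below is a toy Python model of that proposed traversal — not PostgreSQL syntax or behavior, and `Table`/`select_from` are invented names — built around the Tree 1/2/3 (`maxsuper1`/`maxsuper2`/`maxsuper3`) schema from the thread.

```python
# Toy model of the SELECT traversal rules proposed in this thread.
# This is NOT PostgreSQL behavior or syntax -- it only simulates the
# four hypothetical forms: "t", "t*", "ONLY t", and "ONLY t*".

class Table:
    def __init__(self, name, under=None, inherits=()):
        self.name = name
        self.under_children = []    # subtables created with UNDER this table
        self.inherit_children = []  # tables created with INHERITS(this table)
        if under is not None:
            under.under_children.append(self)
        for parent in inherits:
            parent.inherit_children.append(self)

def _walk(table, follow_under, follow_inherits, seen):
    if table.name in seen:
        return
    seen.add(table.name)
    if follow_under:
        for sub in table.under_children:
            _walk(sub, follow_under, follow_inherits, seen)
    if follow_inherits:
        for child in table.inherit_children:
            _walk(child, follow_under, follow_inherits, seen)

def select_from(table, star=False, only=False):
    """Set of table names a SELECT would scan under the proposed rules:
    a plain name follows UNDER edges; '*' additionally follows INHERITS
    edges; ONLY suppresses the UNDER traversal."""
    seen = set()
    _walk(table, follow_under=not only, follow_inherits=star, seen=seen)
    return seen

# The Tree 1/2/3 schema from the thread:
maxsuper1 = Table("maxsuper1")
sub1a = Table("sub1a", under=maxsuper1)
maxsuper2 = Table("maxsuper2")
sub2a = Table("sub2a", under=maxsuper2)
maxsuper3 = Table("maxsuper3", inherits=(maxsuper1, maxsuper2))
sub3a = Table("sub3a", under=maxsuper3)
```

Under this model `select_from(maxsuper1)` scans maxsuper1 plus its UNDER subtable sub1a; with `star=True` it also descends through the INHERITS edge into maxsuper3 and its UNDER tree; the two ONLY forms drop the UNDER trees — matching the informal descriptions quoted in the thread.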
[
{
"msg_contents": "Robert B. Easter wrote:\n\n>On Sun, 21 May 2000, Chris Bitmead wrote:\n>> It is from\n>> ftp://gatekeeper.dec.com/pub/standards/sql\n>> and dated 1994. Is there something more recent?\n>\n>I believe so! 1994 is an old draft. From what I understand, SQL3 is an\n>official ISO standard as of sometime back in 1999. It may be that the\n>official standard cut out the things you quoted.\n>\n>Try downloading the stuff at:\n>ftp://jerry.ece.umassd.edu/isowg3/x3h2/Standards/\n>\n>>\n>> > What is the date on the copy of the SQL/Foundation you are\nreading? My copy is\n>> > dated September 23, 1999 ISO/IEC 9075-2 SQL3_ISO. I tried\nsearching for the\n>> > quotes above and could not find them. Do I have the correct\nversion?\n\nI have that the ISO standard was adopted in July 1999. My copy is the\nANSI document, which I'm told is unchanged from ISO, and the adoption\ndate is listed as 8 December 1999. (Since I'm told both are identical,\nand I can get the ANSI PDF for $20, versus $310 for the ISO version,\nthat was an easy choice)\n\nOf course, now that the standard has been adopted, it is properly\nreferred to as SQL99.\n\nKarl DeBisschop\nwww.infoplease.com\n\n\n",
"msg_date": "Mon, 22 May 2000 09:17:22 -0400",
"msg_from": "Karl DeBisschop <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Thus spoke SQL3 (on OO)"
}
] |
[
{
"msg_contents": "[email protected] (Jan Wieck) writes:\n> I'm not totally sure what you mean with the ugly, non-\n> reentrant kluge. I assume it's this annoying\n> setjmp()/longjmp() juggling - isn't it?\n\nNo, I was unhappy with the global variables like fmgr_pl_info and\nCurrentTriggerData. As you say, error handling in the PL managers\nis pretty ugly, but I don't see a way around that --- and at least\nthe ugliness is localized ;-)\n\n> A new querytree structure cannot gain it, if the function\n> manager cannot handle it. At least we need to define how\n> tuple sets as arguments and results should be handled in the\n> future, and define the fmgr interface according to that\n> already.\n\nAt the moment I'm satisfied to have a trapdoor that allows extension of\nthe fmgr interface --- that's what the context and resultinfo fields are\nintended for. In my mind this is a limited redesign of one specific API\nfor limited objectives. If we try to turn the project into \"fix\neverything anyone could possibly want for functions\" then nothing will\nget done at all...\n\n>> resultinfo is NULL when calling any function from which a simple Datum\n>> result is expected. It may point to some subtype of Node if the function\n>> returns more than a Datum. Like the context field, resultinfo is a hook\n>> for expansion; fmgr itself doesn't constrain the use of the field.\n\n> Good place to put in a tuple descriptor for [SET] tuple\n> return types. But the same type of information should be\n> there per argument.\n\nThe context field could be used to pass additional information about\narguments, too. Actually, the way things are currently coded, it\nwouldn't be hard to throw in more extension pointers like context\nand resultinfo, so long as they are defined to default to NULL for\nsimple calls of functions accepting and returning Datums. 
As I was\nremarking to Chris, I have some concern about not bloating the struct,\nbut a pointer or two more or less won't hurt.\n\n> At this point I'd like to add another relkind we might want\n> to have. This relkind just describes a tuple structure,\n> without having a heap or rules. Only to define a complex type\n> to be used in function declarations.\n\nCould be a good idea. In the original Postgres code it seems the only\nway to define a tuple type is to create a table with that structure\n--- but maybe you have no intention of using the table, and only want\nthe type...\n\n>> It is generally the responsibility of the caller to ensure that the\n>> number of arguments passed matches what the callee is expecting; except\n>> for callees that take a variable number of arguments, the callee will\n>> typically ignore the nargs field and just grab values from arg[].\n\n> If you already think about calling the same function with\n> variable number of arguments, where are the argtypes?\n\nNot fmgr's problem --- it doesn't know a thing about the argument or\nresult types. I'm not sure that the variable-arguments business will\never really get implemented; I just wanted to be sure that these data\nstructures could represent it if we do want to implement it.\n\n>> For TOAST-able data types, the PG_GETARG macro will deliver a de-TOASTed\n>> data value. There might be a few cases where the still-toasted value is\n>> wanted, but I am having a hard time coming up with examples.\n\n> length() and octetlength() are good candidates.\n\nOK, so it will be possible to get at the still-toasted value.\n\n> For the two PL handlers I wrote that's enough. They allways\n> store their own private information in their own private\n> memory. 
Having some place there which is initialized to NULL,\n> where they can leave a pointer to avoid a lookup at each\n> invocation is perfect.\n\nYes, I've already changed them to do this ;-).\n\n>> In the initial phase, two new entries will be added to pg_language\n>> for language types \"newinternal\" and \"newC\", corresponding to\n>> builtin and dynamically-loaded functions having the new calling\n>> convention.\n\n> I would prefer \"interal_ext\" and \"C_ext\".\n\nSomeone else suggested renaming the old languages types to \"oldXXX\"\nand giving the new ones pride of place with the basic names \"internal\"\nand \"C\". For the internal functions we could do this if we like.\nFor dynamically loaded functions we will break existing code (or at\nleast the CREATE FUNCTION scripts for it) if we don't stick with \"C\"\nas the name for the old-style interface. Is that worth the long-term\nniceness of a simple name for the new-style interface? I went for\ncompatibility but I won't defend it very hard. Comments anyone?\n\n> What I'm missing (don't know which of these are standard\n> compliant):\n\n> Extending the system catalog to give arguments a name.\n\n> Extending the system catalog to provide default values\n> for arguments.\n\n> Extending call semantics so functions can have INPUT,\n> OUTPUT and INOUT arguments.\n\nNone of these are fmgr's problem AFAICS, nor do I see a reason to\nadd them to the current work proposal. They look like a future\nproject to me...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 11:37:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Last call for comments: fmgr rewrite [LONG] "
}
] |
[
{
"msg_contents": "root <[email protected]> writes:\n> What do I need to change to stop the leak on the postmaster?\n\nGet rid of the getprotobyname call (essentially the same change\nas in fe-connect.c) in src/backend/libpq/pqcomm.c.\n\nYou could just grab last night's snapshot tarball; it should\nhave those fixes in it already.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 12:19:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: libpq++ tracing considered harmfu[Fwd: libpq++ tracing considered\n\tharmful (was Re: libpq++ memory problems)]"
}
] |
[
{
"msg_contents": "\nI managed to compile (and sort of run) postgres 7.0 to SGI running IRIX\n6.5.7. I compiled to 64bit libraries. The problems I had were both due to\nerrors in the configure script as well as postgres configuration files.\n\nconfigure problems:\n-------------------\n1- the program that configure uses to test for namespace std is faulty.\nI had to manually add #define HAVE_NAMESPACE_STD 1 to the top of\ninterfaces/libpq++/pgconnection.h\n\n2- configure badly guesses the type of the 3rd argument to accept(). it\ndecided it should be size_t (unsigned int on IRIX) while accept on IRIX\ntakes an int. postgres compiles, but accept never retrieves the IP\naddress of the connecting client (in StreamConnection() function in the\nfile backend/libpq/pqcomm.c). as a result no authentication can happen\nand the postmaster refuses connections regardless of what is in\npg_hba.conf. the error message is usually something like\n\n\"no pg_hba.conf entry for host localhost user postgres database\ntemplate1\"\n\nhere local host is just a default string and apparently will never match\nanything in pg_hba.conf. to fix this I changed line 521 of\ninclude/config.h\nfrom: #define SOCKET_SIZE_TYPE size_t\nto: #define SOCKET_SIZE_TYPE int\n\npostgres problems\n------------------\n3- src/pl/tcl/Makefile has a bug. line 69 is\nCFLAGS= $(TCL_CFLAGS_OPTIMIZE)\nthat clobbers all CFLAGS included previously. as a result the include\ndirectories, important to find tcl.h etc. will not be added to the\noptions and the compilation stops here complaining that it can't locate\ntcl.h etc.\nI just changed it to\nCFLAGS+= $(TCL_CFLAGS_OPTIMIZE)\n\n4- I had to change line 8 of interfaces/odbc/isqlext.h\n from # include <isql.h>\n to # include \"isql.h\"\nto force the inclusion of the local isql.h\n\n\nnow my questions: While compiling, I noticed a lot of warnings about\npointers getting truncated etc. it seems that postgres assumes pointer\nsizes to be 32 bits. 
so I suppose compiling for a 64bit platform can be\nrisky. Anyone have experience compiling postgres on a 64bit platform.\n\na lot of the regression tests also failed. some of these failures don't\nseem to be trivial (some are trivial). I will paste below the overall\nreport of the regression testing. But what I would really love is for\nsomeone to look at the my regression.diffs and kind of enlighten me to\nthe problems I should expect with this installation or (better still)\nwhat to look at to fix the problems. I have not used postgres before and\nam actually fairly green with databases in general so any help is most\nappreciated.\n\nif interested I could email you the diffs for the above changes.\n\nBest Regards\nMurad Nayal\n\n\nconfigure command\n-----------------\n./configure --prefix=/local --with-includes=/local/include\n--with-libraries=/local/lib --with-tcl --with-tclconfig=/local/lib\n--with-tkconfig=/local/lib --with-perl --with-odbc --with-CC=\"cc -O2\n-Xcpluscomm\" --with-CXX=\"CC -O2\" --with-x\n\nregression results\n------------------\n\ngmake runtest\ncc -O2 -Xcpluscomm -I../../include -I../../backend -I/local/include \n-U_NO_XOPEN4 -woff 1164,1171,1185,1195,1552 -Wl,-woff,15 -Wl,-woff,84\n-I../../interfaces/libpq -I../../include -c regress.c -o regress.o\nld -G -Bdynamic -shared -o regress.so regress.o \nMULTIBYTE=;export MULTIBYTE; \\\n/bin/sh ./regress.sh mips-sgi-irix6.5 2>&1 | tee regress.out\n=============== Notes... =================\npostmaster must already be running for the regression tests to succeed.\nThe time zone is set to PST8PDT for these tests by the client frontend.\nPlease report any apparent problems to [email protected]\nSee regress/README for more information.\n\n=============== dropping old regression database... =================\nDROP DATABASE\n=============== creating new regression database... =================\nCREATE DATABASE\n=============== installing languages... =================\ninstalling PL/pgSQL .. 
ok\n=============== running regression queries... =================\nboolean .. ok\nchar .. ok\nname .. ok\nvarchar .. ok\ntext .. ok\nint2 .. failed\nint4 .. failed\nint8 .. failed\noid .. ok\nfloat4 .. ok\nfloat8 .. ok\nnumeric .. ok\nstrings .. failed\nnumerology .. failed\npoint .. ok\nlseg .. ok\nbox .. ok\npath .. ok\npolygon .. ok\ncircle .. ok\ninterval .. ok\ntimestamp .. ok\nreltime .. ok\ntinterval .. failed\ninet .. ok\ncomments .. ok\noidjoins .. failed\ntype_sanity .. failed\nopr_sanity .. failed\nabstime .. failed\ngeometry .. failed\nhorology .. failed\ncreate_function_1 .. ./regress.sh[116]: sql/create_function_1.sql:\ncannot open: No such file or directory\ndiff: expected/create_function_1.out: No such file or directory\ndiff: results/create_function_1.out: No such file or directory\nok\ncreate_type .. failed\ncreate_table .. ok\ncreate_function_2 .. ./regress.sh[116]: sql/create_function_2.sql:\ncannot open: No such file or directory\ndiff: expected/create_function_2.out: No such file or directory\ndiff: results/create_function_2.out: No such file or directory\nok\ncopy .. ./regress.sh[116]: sql/copy.sql: cannot open: No such file or\ndirectory\ndiff: expected/copy.out: No such file or directory\ndiff: results/copy.out: No such file or directory\nok\nconstraints .. ./regress.sh[116]: sql/constraints.sql: cannot open: No\nsuch file or directory\ndiff: expected/constraints.out: No such file or directory\ndiff: results/constraints.out: No such file or directory\nok\ntriggers .. failed\ncreate_misc .. ok\ncreate_aggregate .. ok\ncreate_operator .. failed\ncreate_index .. failed\ncreate_view .. failed\nsanity_check .. ok\nerrors .. ok\nselect .. failed\nselect_into .. ok\nselect_distinct .. failed\nselect_distinct_on .. failed\nselect_implicit .. ok\nselect_having .. ok\nsubselect .. ok\nunion .. ok\ncase .. ok\njoin .. ok\naggregates .. failed\ntransactions .. failed\nrandom .. failed\nportals .. failed\narrays .. ok\nbtree_index .. 
failed\nhash_index .. failed\nmisc .. ./regress.sh[116]: sql/misc.sql: cannot open: No such file or\ndirectory\ndiff: expected/misc.out: No such file or directory\ndiff: results/misc.out: No such file or directory\nok\nselect_views .. failed\nalter_table .. failed\nportals_p2 .. failed\nrules .. failed\nforeign_key .. ok\nlimit .. failed\nplpgsql .. ok\ntemp .. ok\nACTUAL RESULTS OF REGRESSION TEST ARE NOW IN FILE regress.out\n\nTo run the optional big test(s) too, type 'make bigtest'\nThese big tests can take over an hour to complete\nThese actually are: numeric_big\nrm regress.o\n\n\n\n\n\n-- \nMurad Nayal M.D. Ph.D.\nDepartment of Biochemistry and Molecular Biophysics\nCollege of Physicians and Surgeons of Columbia University\n630 West 168th Street. New York, NY 10032\nTel: 212-305-6884\tFax: 212-305-6926\n",
"msg_date": "Mon, 22 May 2000 18:42:48 +0200",
"msg_from": "Murad Nayal <[email protected]>",
"msg_from_op": true,
"msg_subject": "port v7.0 to SGI-IRIX-6.5.7/64"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > I managed to compile (and sort of run) postgres 7.0 to SGI running IRIX\n> > 6.5.7. I compiled to 64bit libraries. The problems I had were both due\n> > errors in the configure script as well as postgres configuration files.\n> >\n> > configure problems:\n> > -------------------\n> > 1- the program that configure uses to test for namespace std is faulty.\n> > I had to manually add #define HAVE_NAMESPACE_STD 1 to the top of\n> > interfaces/libpq++/pgconnection.h\n> \n> Can you suggest a test that does work on Irix?\n> \n\nthe current test is:\n\n#line 1680 \"configure\"\n#include \"confdefs.h\"\n#include <stdio.h>\n#include <stdlib.h>\nusing namespace std;\n\nint main() {\n\n; return 0; }\n\nit fails with the message:\n\n\"configure\", line 1683: error(3173): name must be a namespace name\n using namespace std;\n\nyou just need to add a header file that contains elements of the C++\nstandard library defined in std. not all of the C++ standard library that\ncomes with the SGI C++ compiler is defined in std. for example iostream\nstuff is not in std. however string is. so just include the string\nheader file in the program. This compiles fine with CC: \n\n#line 1680 \"configure\"\n#include \"confdefs.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <string>\nusing namespace std;\n\nint main() {\n\n; return 0; }\n\nthe SGI compiler puts the std library in header files that don't end in\n\".h\". the same header file ending in \".h\" will have declarations in the\nglobal name space.\n\nRegards\n\n-- \nMurad Nayal M.D. Ph.D.\nDepartment of Biochemistry and Molecular Biophysics\nCollege of Physicians and Surgeons of Columbia University\n630 West 168th Street. New York, NY 10032\nTel: 212-305-6884\tFax: 212-305-6926\n",
"msg_date": "Mon, 22 May 2000 22:53:07 +0200",
"msg_from": "Murad Nayal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] port v7.0 to SGI-IRIX-6.5.7/64"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> >> 2- configure badly guesses the type of the 3rd argument to accept(). it\n> >> decided it should be size_t (unsigned int on IRIX) while accept on IRIX\n> >> takes an int.\n> \n> > Again, a suggested change?\n> \n> This is something that's been bugging me for a while; the problem on\n> most platforms is that int vs unsigned int parameter will only draw a\n> warning from the compiler, and autoconf's TRY_COMPILE macro is only able\n> to detect outright errors.\n> \n> I looked at the standard Autoconf macros just now, and found an example\n> that may give us the right solution: instead of trying to see whether\n> a call of accept with \"int\" or \"unsigned int\" parameter works, include\n> <sys/socket.h> and then write an \"extern\" declaration for accept with\n> the parameters we think it should have. This relies on the hope that\n> if the compiler sees two declarations for accept with different\n> parameter lists, it'll generate an error and not just a warning.\n\nsys/socket is already included in the test program. and yet all I get\nfrom the cc compiler is a warning!!! But here is a bit of trivia that I\nfound. the CC compiler (C++ on SGI) won't take it and will generate an\nerror. I am not sure obviously if this is to be expected of other C++\ncompilers. This particular warning message on my compiler has the number\n1164. you can turn warning messages to error conditions using the flag\n-diag_error message_number. So while cc conftest.c in this case\ngenerates a warning. cc -diag_error 1164 conftest.c will generate an\nerror. Again I don't know if this feature is common in other compilers.\n\nMurad\n\n\n\n-- \nMurad Nayal M.D. Ph.D.\nDepartment of Biochemistry and Molecular Biophysics\nCollege of Physicians and Surgeons of Columbia University\n630 West 168th Street. New York, NY 10032\nTel: 212-305-6884\tFax: 212-305-6926\n",
"msg_date": "Mon, 22 May 2000 23:12:51 +0200",
"msg_from": "Murad Nayal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] port v7.0 to SGI-IRIX-6.5.7/64"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Murad Nayal <[email protected]> writes:\n> > 1- the program that configure uses to test for namespace std is faulty.\n> \n> That's not very helpful :-( --- what's wrong with it?\n> \n> > 2- configure badly guesses the type of the 3rd argument to accept().\n> \n> I have seen that happen on other platforms too; not clear how to fix it.\n> But as long as the guessed value is the right size it should work,\n> I would think --- unsigned int vs. int shouldn't make a difference.\n> Are you sure that that is the reason it wasn't working?\n> \n\nI am actually pretty sure that giving accept a size_t addrlen (in this\ncase) is causing accept to not copy the client address over (at least\nnot copy it to the right place!). I was a little bit inaccurate in my\nstatement earlier though. size_t is unsigned int when compiling in 32bit\nmode. however it seems to be unsigned long (8 bytes) when compiling in\n64bit mode, while int is still 4 bytes. so that seems consistent with\nwhat you said.\n\nI was trying to compile postgres in 64 because I would like to integrate\nit with the rest of my libraries which I usually compile in 64bit mode.\nI suppose I only need the frontend to be binary compatible with my code\n(libpq?) but I can only imagine the hassle I could get into trying to\ncompile the front end in 64 and the backend separately in n32. I am\ngoing to repeat the regression tests just to make sure I used gmake all\nin regress and will report back.\n\n\nRegards\n\n\n\n-- \nMurad Nayal M.D. Ph.D.\nDepartment of Biochemistry and Molecular Biophysics\nCollege of Physicians and Surgeons of Columbia University\n630 West 168th Street. New York, NY 10032\nTel: 212-305-6884\tFax: 212-305-6926\n",
"msg_date": "Mon, 22 May 2000 23:28:16 +0200",
"msg_from": "Murad Nayal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: port v7.0 to SGI-IRIX-6.5.7/64"
},
{
"msg_contents": "\n> \n> It looks like you neglected to run \"make all\" before \"make runtest\".\n> Try that and see if it gets better...\n\nok, you were right. apparently on my last cycle of gmake clean/gmake all\nI dropped the gmake all. On a brand new installation (it is amazing how\nfast these things go once you figure out all the necessary tweaks :-) )\nthe testing output does get better. Still I am getting a lot of fails\nand most of them seem nontrivial. regress.out is below. \n\nI ran the sequential tests since I actually needed to insert a line in\npg_hba.conf to allow my host to connect: the line\nhost all 127.0.0.1 255.255.255.255 trust\nwasn't enough for some reason, I had to specify my IP address in an\nadditional line for the authentication to work.\nhost all 156.111.96.29 255.255.255.255 trust\nhence, I had to run the postmaster on the modified configuration before\nthe tests.\n\nanyway, please let me know if failing of any of these tests is\nparticularly ominous. the regressions.diffs, the postmaster output or\nthe gmake output are also available if anybody is interested.\n\nThanks\n\ncat regress.out \n=============== Notes... =================\npostmaster must already be running for the regression tests to succeed.\nThe time zone is set to PST8PDT for these tests by the client frontend.\nPlease report any apparent problems to [email protected]\nSee regress/README for more information.\n\n=============== dropping old regression database... =================\nERROR: DROP DATABASE: Database \"regression\" does not exist\ndropdb: database removal failed\n=============== creating new regression database... =================\nCREATE DATABASE\n=============== installing languages... =================\ninstalling PL/pgSQL .. ok\n=============== running regression queries... =================\nboolean .. ok\nchar .. ok\nname .. ok\nvarchar .. ok\ntext .. ok\nint2 .. failed\nint4 .. failed\nint8 .. failed\noid .. ok\nfloat4 .. ok\nfloat8 .. ok\nnumeric .. 
ok\nstrings .. failed\nnumerology .. failed\npoint .. ok\nlseg .. ok\nbox .. ok\npath .. ok\npolygon .. ok\ncircle .. ok\ninterval .. ok\ntimestamp .. ok\nreltime .. ok\ntinterval .. failed\ninet .. ok\ncomments .. ok\noidjoins .. failed\ntype_sanity .. failed\nopr_sanity .. failed\nabstime .. failed\ngeometry .. failed\nhorology .. failed\ncreate_function_1 .. ok\ncreate_type .. ok\ncreate_table .. ok\ncreate_function_2 .. ok\ncopy .. ok\nconstraints .. ok\ntriggers .. ok\ncreate_misc .. ok\ncreate_aggregate .. ok\ncreate_operator .. ok\ncreate_index .. ok\ncreate_view .. ok\nsanity_check .. ok\nerrors .. ok\nselect .. ok\nselect_into .. ok\nselect_distinct .. ok\nselect_distinct_on .. ok\nselect_implicit .. ok\nselect_having .. ok\nsubselect .. ok\nunion .. ok\ncase .. ok\njoin .. ok\naggregates .. ok\ntransactions .. ok\nrandom .. ok\nportals .. ok\narrays .. ok\nbtree_index .. ok\nhash_index .. ok\nmisc .. ok\nselect_views .. ok\nalter_table .. ok\nportals_p2 .. ok\nrules .. failed\nforeign_key .. ok\nlimit .. ok\nplpgsql .. ok\ntemp .. ok\n\n\n-- \nMurad Nayal M.D. Ph.D.\nDepartment of Biochemistry and Molecular Biophysics\nCollege of Physicians and Surgeons of Columbia University\n630 West 168th Street. New York, NY 10032\nTel: 212-305-6884\tFax: 212-305-6926\n",
"msg_date": "Tue, 23 May 2000 00:33:05 +0200",
"msg_from": "Murad Nayal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: port v7.0 to SGI-IRIX-6.5.7/64"
},
{
"msg_contents": "Murad Nayal <[email protected]> writes:\n> 1- the program that configure uses to test for namespace std is faulty.\n\nThat's not very helpful :-( --- what's wrong with it?\n\n> 2- configure badly guesses the type of the 3rd argument to accept().\n\nI have seen that happen on other platforms too; not clear how to fix it.\nBut as long as the guessed value is the right size it should work,\nI would think --- unsigned int vs. int shouldn't make a difference.\nAre you sure that that is the reason it wasn't working?\n\n> 3- src/pl/tcl/Makefile has a bug. line 69 is\n> CFLAGS= $(TCL_CFLAGS_OPTIMIZE)\n> that clobbers all CFLAGS included previously. as a result the include\n> directories, important to find tcl.h etc. will not be added to the\n> options and the compilation stops here complaining that it can't locate\n> tcl.h etc.\n> I just changed it to\n> CFLAGS+= $(TCL_CFLAGS_OPTIMIZE)\n\nGood point, but that's no solution --- the reason that the makefile\nisn't keeping the main CFLAGS is that Tcl (and hence pltcl) may be\nbuilt with a different compiler than Postgres is being built with.\nThe Tcl compiler may not like the other compiler's switches. I guess\nwe could arrange to insert just the -I switches from your\n--with-includes configuration command, however.\n\n> 4- I had to change line 8 of interfaces/odbc/isqlext.h\n> from # include <isql.h>\n> to # include \"isql.h\"\n> to force the inclusion of the local isql.h\n\nGood catch.\n\n> now my questions: While compiling, i noticed a lot of warnings about\n> pointers getting truncated etc. it seems that postgres assumes pointer\n> sizes to be 32 bits. so I suppose compiling for a 64bit platform can be\n> risky. Anyone have experience compiling postgres on a 64bit platform.\n\nWe do assume that \"unsigned long\" will hold a pointer; if that's not\ntrue on IRIX then you're going to have troubles. 
There are a number\nof patches known to be needed on Alphas, which are planned for\nintegration into the standard distribution for 7.1 --- dunno if any\nof them would help on your setup.\n\n> a lot of the regression tests also failed. some of these failures don't\n> seem to be trivial (some are trivial).\n\nIt looks like you neglected to run \"make all\" before \"make runtest\".\nTry that and see if it gets better...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 19:35:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: port v7.0 to SGI-IRIX-6.5.7/64 "
},
{
"msg_contents": "> \n> I managed to compile (and sort of run) postgres 7.0 to SGI running IRIX\n> 6.5.7. I compiled to 64bit libraries. The problems I had were both due\n> errors in the configure script as well as postgres configuration files.\n> \n> configure problems:\n> -------------------\n> 1- the program that configure uses to test for namespace std is faulty.\n> I had to manually add #define HAVE_NAMESPACE_STD 1 to the top of\n> interfaces/libpq++/pgconnection.h\n\nCan you suggest a test that does work on Irix?\n\n> \n> 2- configure badly guesses the type of the 3rd argument to accept(). it\n> decided it should be size_t (unsigned int on IRIX) while accept on IRIX\n> takes an int. postgres compiles, but accept never retrieves the IP\n> address of the connecting client (in StreamConnection() function in the\n> file backend/libpq/pqcomm.c). as a result no authentication can happen\n> and the postmaster refuses connections regardless of what is in\n> pg_hba.conf. the error message is usually something like\n> \n> \"no pg_hba.conf entry for host localhost user postgres database\n> template1\"\n> \n> here local host is just a default string and apparently will never match\n> anything in pg_hba.conf. to fix this i changed line 521 of\n> include/config.h\n> from: #define SOCKET_SIZE_TYPE size_t\n> to: #define SOCKET_SIZE_TYPE int\n\nAgain, a suggested change?\n\n> \n> postgres problems\n> ------------------\n> 3- src/pl/tcl/Makefile has a bug. line 69 is\n> CFLAGS= $(TCL_CFLAGS_OPTIMIZE)\n> that clobbers all CFLAGS included previously. as a result the include\n> directories, important to find tcl.h etc. will not be added to the\n> options and the compilation stops here complaining that it can't locate\n> tcl.h etc.\n> I just changed it to\n> CFLAGS+= $(TCL_CFLAGS_OPTIMIZE)\n\nChanged.\n\n> \n> 4- I had to change line 8 of interfaces/odbc/isqlext.h\n> from # include <isql.h>\n> to # include \"isql.h\"\n> to force the inclusion of the local isql.h\n\nChanged. 
Will appear in 7.0.1.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 May 2000 19:56:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] port v7.0 to SGI-IRIX-6.5.7/64"
},
{
"msg_contents": "> > 3- src/pl/tcl/Makefile has a bug. line 69 is\n> > CFLAGS= $(TCL_CFLAGS_OPTIMIZE)\n> > that clobbers all CFLAGS included previously. as a result the include\n> > directories, important to find tcl.h etc. will not be added to the\n> > options and the compilation stops here complaining that it can't locate\n> > tcl.h etc.\n> > I just changed it to\n> > CFLAGS+= $(TCL_CFLAGS_OPTIMIZE)\n> \n> Good point, but that's no solution --- the reason that the makefile\n> isn't keeping the main CFLAGS is that Tcl (and hence pltcl) may be\n> built with a different compiler than Postgres is being built with.\n> The Tcl compiler may not like the other compiler's switches. I guess\n> we could arrange to insert just the -I switches from your\n> --with-includes configuration command, however.\n\nTom, do you want his version in Makefile, or the original?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 May 2000 19:58:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Re: port v7.0 to SGI-IRIX-6.5.7/64"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Good point, but that's no solution --- the reason that the makefile\n>> isn't keeping the main CFLAGS is that Tcl (and hence pltcl) may be\n>> built with a different compiler than Postgres is being built with.\n>> The Tcl compiler may not like the other compiler's switches. I guess\n>> we could arrange to insert just the -I switches from your\n>> --with-includes configuration command, however.\n\n> Tom, do you want his verion in Makefile, or the original?\n\nI'm working on it now --- need to fix configure to export a list of\njust the -I switches, so we can include those into the pltcl build.\n\nplperl may have the same problem, haven't looked yet.\n\nThere is a potential problem in this area that I've actually seen\nhappen, but don't know a way around: the two compilers may have\ndifferent default search paths. For example, on my system gcc includes\n/usr/local/include in its default search path, but the system cc does\nnot. So, if you've built Tcl with system cc, and installed it in\n/usr/local, and then try to build Postgres with gcc and --with-tcl,\neverything goes fine until you get to pltcl, whereupon it falls over\nbecause pltcl will be built with cc and cc doesn't find tcl.h.\nThat will still happen with this fix. The only way to build that\nconfiguration successfully is to explicitly say\n\"--with-includes=/usr/local/include --with-libs=/usr/local/lib\".\n\nArguably this is a bug in Tcl's tclConfig.sh, since it probably ought to\nput \"-I/usr/local/include\" into its own CFLAGS settings when it's been\nbuilt that way ... but it doesn't ...\n\nI don't have an answer to solving that problem automatically,\nbut I thought it'd be good to mention it for the archives.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 20:58:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Re: port v7.0 to SGI-IRIX-6.5.7/64 "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n>> 2- configure badly guesses the type of the 3rd argument to accept(). it\n>> decided it should be size_t (unsigned int on IRIX) while accept on IRIX\n>> takes an int.\n\n> Again, a suggested change?\n\nThis is something that's been bugging me for a while; the problem on\nmost platforms is that int vs unsigned int parameter will only draw a\nwarning from the compiler, and autoconf's TRY_COMPILE macro is only able\nto detect outright errors.\n\nI looked at the standard Autoconf macros just now, and found an example\nthat may give us the right solution: instead of trying to see whether\na call of accept with \"int\" or \"unsigned int\" parameter works, include\n<sys/socket.h> and then write an \"extern\" declaration for accept with\nthe parameters we think it should have. This relies on the hope that\nif the compiler sees two declarations for accept with different\nparameter lists, it'll generate an error and not just a warning.\n\nIt seems like that should work at least as well, maybe better, as what\nwe're doing now --- but it's not the kind of change that I want to shove\ninto 7.0.1 with no beta testing! Probably we should introduce it early\nin the 7.1 cycle instead, and see if anyone reports problems.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 21:11:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] port v7.0 to SGI-IRIX-6.5.7/64 "
},
{
"msg_contents": "Murad Nayal <[email protected]> writes:\n> I was a little bit inaccurate in my\n> statement earlier though. size_t is unsigned int when compiling in 32bit\n> mode. however it seems to be unsigned long (8 bytes) when compiling in\n> 64bit mode, while int is still 4 bytes.\n\nOK, in that case I'd believe it's a critical issue.\n\nI think the conflicting-declarations trick I mentioned earlier will let\nus build a more reliable configure test for this, but as I said I don't\nwant to risk shoving it into 7.0.* --- this is the sort of thing that\nyou can't trust until it's been through some beta testing on a variety\nof platforms. We'll put it in 7.1 and see what happens. In the\nmeantime you'll have to patch config.h by hand...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 23:59:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: port v7.0 to SGI-IRIX-6.5.7/64 "
},
{
"msg_contents": "Murad Nayal <[email protected]> writes:\n> you just need to add a header file that contains elements of the C++\n> standard library defined in std. not all C++ standard library that comes\n> with the SGI C++ compiler are in defined in std. for example iostream\n> stuff are not in std. however string is. so just include the string\n> header file in the program.\n\nYou realize, of course, that we don't want to depend on <string>\nbeing there either ;-)\n\nBut I suppose we could swap the order of the tests, and then\ninclude <string> into the namespace test if we've found it.\nWill do...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 May 2000 00:02:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] port v7.0 to SGI-IRIX-6.5.7/64 "
},
{
"msg_contents": "Murad Nayal <[email protected]> writes:\n> the testing output does get better. Still I am getting a lot of fails\n> and most of them seem nontrivial.\n\nI agree. Hard to tell with this level of detail, but you show many\nfailures in tests that aren't particularly platform-sensitive.\nI think there are some real bugs to be worked out...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 May 2000 00:59:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: port v7.0 to SGI-IRIX-6.5.7/64 "
},
{
"msg_contents": "On Mon, 22 May 2000, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> >> 2- configure badly guesses the type of the 3rd argument to accept(). it\n> >> decided it should be size_t (unsigned int on IRIX) while accept on IRIX\n> >> takes an int.\n> \n> > Again, a suggested change?\n\nThis is probably what you want:\n\nhttp://research.cys.de/autoconf-archive/Miscellaneous/ac_func_accept_argtypes.html\n\nI've been meaning to clean up the autoconf mess soon anyway, so I can take\na look at integrating this.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 23 May 2000 15:03:41 +0200 (MET DST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PORTS] Re: [BUGS] port v7.0 to SGI-IRIX-6.5.7/64 "
}
] |
[
{
"msg_contents": "Some of you may remember some mumblings about some work on access\nprivileges, so this is the idea. Comments welcome.\n\n\n* Goals\n\nThe goal of the first stage is to provide a fully SQL92 compliant\nsolution. That comprises particularly column level granularity, the\nREFERENCES[*] privilege for use by constraints, and the USAGE\nprivilege. We currently don't have any of the things that USAGE\npertains to -- domains, character sets, collations, translations --\nbut at least we shouldn't have to start all over when we do. Also\nGRANT OPTION needs to be supported.\n\n[*] -- now with that RI snafu unveiled that goal seems optimistic\n\nThe second stage would be adopting all specifications made by SQL3 if\nthey are applicable. This includes particularly the privilege types\nTRIGGER and UNDER (for table inheritance, which should probably work\nwell first). Also we could think about EXECUTE for functions and some\n`setuid'-like features.\n\nStage two isn't necessarily anticipated for 7.1 but I'd like to have a\nframework which adapts well.\n\n* User management\n\nOne thing I'd like to see resolved first is the issue of\npg_shadow.usesysid. This field is fully functionally dependent on\npg_shadow.oid so there's little theoretical need to keep it around.\nSecondly, the system happily reassigns previously used sysids, which\nis a pretty dangerous thing to do as we all know, since there might\nstill be old database objects hanging around that the new users\nshouldn't necessarily have access to. (And connecting to all databases\nin turn to remove any dangling objects when a user is dropped isn't\nreally an option.) So the answer is to not recycle sysids. But then\nwhy not use the oid?\n\nSome arguments for user sysids I have heard in the past were that some\npeople want to keep them the same as the Unix uid. 
While I'm at a loss\nas to how this would matter in practice (aren't names enough) I grant\nthat that's an argument (albeit one that doesn't scale well because\nnot every database user is a Unix user and two identically numbered\nUnix users from different machines would presumably map to different\ndatabase users). But if you look closer then this thinking is\nprimarily caused by the fact that there is a usesysid field at all --\nif there wasn't, you wouldn't have to keep it in sync.\n\nAnother reason why an oid based arrangement would be nicer is that if\nwe did the same thing for groups we could refer to both users and\ngroups through one attribute. See `Implementation' below.\n\n* Implementation\n\nThe central idea in this proposal is a new system table to store\npermissions:\n\npg_privilege (\n priobj oid,\n prigrantor oid,\n prigrantee oid,\n priaction char,\n priisgrantable boolean,\n\n primary key (priobj, prigrantee, priaction)\n) \n\n\"priobj\" would store the oid of the object being described, for\nexample a table or function or type. \"prigrantor\" is the user who\ngranted the privilege. (It is necessary to store this to get grant\noptions to work correctly.) \"prigrantee\" is obviously the user to\nwhich you grant the privilege or a group. We could put 0 for \"public\".\n\"priaction\" would be the encoding of the privilege type, such as\n's'=select, 'u'=update, perhaps. And \"priisgrantable\" is whether the\nprivilege is grantable.\n\nThe key advantages to this method over the old one are:\n- Extensible beyond tables, in fact to any kind of object\n- Easier to query, e.g., for what-if inquiries\n- The old method would make grant options pretty tough without a major\n rework\n- A pg_privilege row would be almost exactly what SQL calls a\n \"privilege descriptor\". So the implementation will be much easier\n and verifiable because you can read the program code out of the\n standard text. 
(in theory anyway)\n\nThose that follow will see how simple-minded grant, revoke, and\nprivilege lookup will be in their core: simply insert, delete, or look\nfor a row. (Of course the devil is in the details.)\n\n* Column privileges\n\nThere are two approaches I see to managing column privileges: one is a\nlittle cleaner, the other faster. Note that granting access to a\ntable is different from granting access to all of its columns; the\ndifference is what happens when you add a new column.\n\nThe straightforward choice would be to store a single reference to\npg_class when the privilege describes the whole table, and\npg_attribute references when only specific columns are named. That\nwould mean the lookup routine will first look for a pg_class.oid entry\nand, failing that, then for possible pg_attribute.oid entries for the\ncolumns that it's interested in. This is of course suboptimal when no\nprivilege exists in the first place but that is not necessarily the case\nwe're optimizing for.\n\nThe second choice would be to always have an entry for the table, even\nif it only says \"I'm not the real privilege, but there are column\nprivileges, so you better keep looking.\" That would probably mean\nanother column in pg_privilege. This way you have to maintain\nredundant information but there is enough precedent for this sort of\nthing in the other system catalogs.\n\n* Groups\n\nHandling groups efficiently is a bit tricky because it's essentially\nequivalent to a join: scan all the privileges and all the groups and\nlook for matches between them and with the current user id. I suppose\none could simply run this query by hand once and see what the\noptimizer thinks would be a good way to run it, but that isn't\nfacilitated by the way group information is stored right now.\n\nI would do it like this: Looking up privileges granted to groups would\nbe done if the lookup based on the user id fails. 
Then you have to\nscan pg_group anyway, so you might as well just scan it once\ncompletely and record all the groups the user is in. Then you do a\nprivilege lookup for each group in a manner identical to individual\nusers.\n\nThis is different from the current implementation which looks through\nall existing privileges on a table and if one is owned by a group then\nscans pg_group to see if the user is in the group. That might be\nsuboptimal.\n\n* Performance concerns\n\nThe fastest privilege system is of course one that does no checking at\nall. Features always come at a price. I have no concern, however, that\nthis new implementation would cause any noticeable penalty at all. If\nyou consider how much reading the parser, planner, optimizer, and\nrewriter do just to make sense of a query, this is really a minor\nitem.\n\nIf you're the table owner then no access checking is done at all. If\nyou don't use groups or column privileges then one syscache lookup\nwill tell you yes or no. If you do use groups then the new system\nwould potentially even be faster. If you want to use column privileges\nyou'd currently wait forever. :)\n\n* Possibilities for extensions\n\nOne thing that has been thrown around is a LOCK privilege. Currently\neveryone with write access can lock the table completely. It would\nmake sense to me to restrict locks of Share mode and higher to the\nowner and holders of this privilege.\n\nThere is also demand for various CREATE privileges (one for each thing\nyou can create, one supposes). Once we have schemas we can easily fit\nthis into the above design. Since this is not covered by the standard\n(\"implementation-defined\"), a good round of discussion ought to take\nplace first.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 22 May 2000 19:25:29 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposal for enhancements of privilege system"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> [ pretty good proposal ]\n\nJust a couple of trivial comments ---\n\n> Some arguments for user sysids I have heard in the past were that some\n> people want to keep them the same as the Unix uid.\n\nThere may once have been a reason for that, but it's probably buried in\nancient Berkeley-specific admin practices. I sure can't see any good\nreason to keep the extra number around now. As you say, it should be\nOK to use the pg_shadow row OID to identify users.\n\nBTW I believe most of the \"owner\" columns in the system tables are\ndeclared as \"int4\" because they hold sysids ... don't forget to change\n'em to be \"Oid\" when you do this.\n\n> Another reason why an oid based arrangement would be nicer is that if\n> we did the same thing for groups why could refer to both users and\n> groups through one attribute. See `Implementation' below.\n\n\"findoidjoins\" will probably get unhappy with you if you do that.\nWhich is maybe not a big deal, but...\n\n> \"prigrantee\" is obviously the user to\n> which you grant the privilege or a group.\n> We could put 0 for \"public\".\n\nI'd be inclined to provide an additional field that explicitly encodes\n\"grantee is user\", \"grantee is group\", or \"grantee is public\". That\nway you don't need to do a join to find out what you are looking at.\n\nReally, having an OID column that might reference either users or groups\nis the SQL equivalent of a type pun. An alternative representation that\nwould avoid that would be two OID columns, one to use if user and one\nto use if group (if they're both 0 then it's grant to public).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 20:00:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for enhancements of privilege system "
},
{
"msg_contents": "> > Another reason why an oid based arrangement would be nicer is that if\n> > we did the same thing for groups why could refer to both users and\n> > groups through one attribute. See `Implementation' below.\n> \n> \"findoidjoins\" will probably get unhappy with you if you do that.\n> Which is maybe not a big deal, but...\n\nI think it will find it. It is not a big deal anyway.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 22 May 2000 20:23:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for enhancements of privilege system"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> pg_privilege (\n> priobj oid,\n> prigrantor oid,\n> prigrantee oid,\n> priaction char,\n> priisgrantable boolean,\n> \n> primary key (priobj, prigrantee, priaction)\n> )\n> \n\nI like it.\n\n> The straightforward choice would be to store a single reference to\n> pg_class when the privilege describes the whole table, and\n> pg_attribute references when only specific columns are named. That\n> would mean the lookup routine will first look for a pg_class.oid entry\n> and, failing that, then for possible pg_attribute.oid entries for the\n> columns that it's interested in. This is of course suboptimal when no\n> privilege exists in the first place but that is not necessarily the case\n> we're optimizing for.\n\nDon't worry about performance for the access denied case. That is going\nto be outweighed 1000:1 by the access allowed case. Go for the clean\nsolution.\n",
"msg_date": "Tue, 23 May 2000 10:49:06 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for enhancements of privilege system"
},
{
"msg_contents": "Tom Lane writes:\n\n> Really, having an OID column that might reference either users or groups\n> is the SQL equivalent of a type pun.\n\nWell, I don't really know what a type pun is but the priobj column would\ndo exactly the same thing by referring to tables, types, functions, etc.\nby unadorned oid, which I thought would be pretty nice. Really, in normal\nmode of operation there is never a question \"Does this privilege apply to\na user or a group?\"; it's always \"Given this object and this user/group id,\ndo I have access?\" I don't see that as a practical problem, but I'll think\nabout it.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 23 May 2000 23:38:11 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for enhancements of privilege system "
},
{
"msg_contents": "On Tue, 23 May 2000, Chris Bitmead wrote:\n> Peter Eisentraut wrote:\n> \n> > pg_privilege (\n> > priobj oid,\n> > prigrantor oid,\n> > prigrantee oid,\n> > priaction char,\n> > priisgrantable boolean,\n> > \n> > primary key (priobj, prigrantee, priaction)\n> > )\n> > \n> \n> I like it.\n\nImho this is an area where it does make sense to look at what other db's do,\nbecause it makes the toolwriters life so much easier if pg behaves like some other\ncommon db. Thus I do not really like a standalone design.\n\nOther db's usually use a char array for priaction and don't have priisgrantable, \nbut code it into priaction. Or they use a bitfield. This has the advantage of only \nproducing one row per table.\n\nAndreas\n",
"msg_date": "Sun, 28 May 2000 10:12:23 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for enhancements of privilege system"
},
{
"msg_contents": "Andreas Zeugswetter writes:\n\n> Imho this is an area where it does make sense to look at what other\n> db's do, because it makes the toolwriters life so much easier if pg\n> behaves like some other common db.\n\nThe defined interface to the privilege system is GRANT, REVOKE, and\n\"access denied\" (and a couple of INFORMATION_SCHEMA views, eventually).\nI don't see how other db's play into this.\n\n> Other db's usually use a char array for priaction and don't have\n> priisgrantable, but code it into priaction. Or they use a bitfield.\n> This has the advantage of only producing one row per table.\n\nThat's the price I'm willing to pay for abstraction, extensibility, and\nverifiability. But I'm open to better ideas.\n\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 29 May 2000 19:11:49 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Proposal for enhancements of privilege system"
}
] |
[
{
"msg_contents": "Hi,\n\nOk, I know the mailing list web page states: YOU MUST TRY ELSEWHERE FIRST! and\nthis should, technically speaking, be reported as a bug; however, given that the\nsecurity implications are potentially severe I thought here would be best in\nthe first instance.\n\nI have only briefly looked into this problem as I have just now discovered it. \nEssentially, in our environment, we require password authentication as a\nmatter of policy. However it appears that once a user has authenticated with the\nbackend it is possible for that user to trivially assume root dba privileges or\nprivileges of any other dba user.\n\nTo demonstrate the problem: \n\nConsider two systems: \n\n pgsqlserver 192.168.1.1 - backend system\n pgsqlclient 192.168.1.2 - client system\n \nOur pg_hba.conf (on pgsqlserver) now looks something similar to: \n\n local all password\n host all 127.0.0.1 255.255.255.255 password\n host all 192.168.1.2 255.255.255.255 password\n\nNow making connections from pgsqlclient (192.168.1.2) would require password\nauthentication. To show that this works, entering an incorrect password...\n\n pgsqlclient:/home/matt 11:33am > psql -h pgsqlserver -U matt matt\n Password: \n psql: Password authentication failed for user 'matt'\n\nNow a correct password: \n\n pgsqlclient:/home/matt 11:36am > psql -h pgsqlserver -U matt matt\n Password: \n Welcome to psql, the PostgreSQL interactive terminal.\n \n Type: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n \n matt=> \n\nOk, so at the top level password authentication works, now to the problem... \n\nOnce authenticated it is possible to trivially assume another user's identity\nwithout further authentication e.g. 
\n\n matt=> \\c template1 postgres\n You are now connected to database template1 as user postgres.\n template1=# \n\nOr, assume any other user's identity: \n\n matt=> \\c www www\n You are now connected to database www as user www.\n www=> \n\nOuch. \n\nI have not tested to see if this is specific to the password authentication\nmethod or a general problem relating to any of the supported methods and I have\nlimited time to investigate this at the moment.\n\nIs there anyone who specifically maintains the authentication subsystem that I\ncould communicate with directly? I would be interested to offer whatever\nassistance I can. \n\n\nRegards,\nMatt.\n\n",
"msg_date": "Tue, 23 May 2000 12:12:24 +1200 (NZST)",
"msg_from": "Matt Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Serious problem within authentication subsystem in 7.0"
},
{
"msg_contents": "Matt Sullivan <[email protected]> writes:\n> Essentially, in our environment, we require password authentication as\n> a defacto. However it appears that once a user has authenticated with\n> the backend it is possible for that user to trivially assume root dba\n> privileges or privileges of any other dba user.\n\nIt appears that psql will auto-supply the previously entered password,\nso if you were using the same password for all your accounts then this\nmight happen. Otherwise it's pretty hard to believe. That new\nconnection is to a new backend; there's no way for it to know that you\nwere previously connected.\n\nOffhand I think it would be a good idea for psql to insist on a new\npassword if the \\connect command gives a new user name...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 21:39:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious problem within authentication subsystem in 7.0 "
},
{
"msg_contents": "On Mon, 22 May 2000, Tom Lane wrote:\n\n> Matt Sullivan <[email protected]> writes:\n> > Essentially, in our environment, we require password authentication as\n> > a defacto. However it appears that once a user has authenticated with\n> > the backend it is possible for that user to trivially assume root dba\n> > privileges or privileges of any other dba user.\n> \n> It appears that psql will auto-supply the previously entered password,\n> so if you were using the same password for all your accounts then this\n> might happen. Otherwise it's pretty hard to believe. That new\n> connection is to a new backend; there's no way for it to know that you\n> were previously connected.\n> \n> Offhand I think it would be a good idea for psql to insist on a new\n> password if the \\connect command gives a new user name...\n\nOk, phew...\n\n matt=> \\c wwwdata wwwdata\n Password authentication failed for user 'wwwdata'\n Previous connection kept\n matt=> \n\nThis would imply, though, that the password data is cached within each instance of\npsql which could present its own set of security risks. \n \nI would think that it should probably be *forgotten* after authentication is\nestablished and required on any new \\connect. This might present some issues\nwith pg_dump etc., though, I guess.\n\n\nMatt.\n\n",
"msg_date": "Tue, 23 May 2000 13:54:20 +1200 (NZST)",
"msg_from": "Matt Sullivan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Serious problem within authentication subsystem in 7.0 "
},
{
"msg_contents": "Matt Sullivan <[email protected]> writes:\n> This would infer though that the passwd data is cached within each\n> instance of psql which could present it's own set of security risks.\n\nYeah. There's been discussion about that, and the consensus seems to\nbe that the advantages outweigh the very small risks. (One of the\ndisadvantages of forgetting the password is that PQreset can't work...)\n\n> I would think that it should probably be *forgotton* after\n> authentication is established and required on any new \\connect. This\n> might present some issues with pg_dump etc. I guess though.\n\npg_dump is actually pretty nearly useless in a password-auth\ninstallation; to run the restore script, you'd have to manually enter\na password each time it hits a \\connect command :-(.\n\nThe best idea I've heard for fixing this is to invent a quasi-suid\nmechanism: the pg_dump script would be started as postgres (enter\npassword for same, once) and it would NOT do any \\connect commands.\nInstead it would issue some kind of \"SET effective_user = 'name'\"\ncommand, which would determine the ownership assigned to subsequently-\ncreated objects, but the backend would still remember that the user\nwas \"really\" postgres. Presumably this SET command would only be\nallowed to superusers, so the backend must remember that the user is\nreally postgres, or it'll reject SET effective_user commands after\nthe first one.\n\nThe devil is in the details, of course, and the details here would be\nto figure out which operations should pay attention to effective_user\nand which to the true userid. But it seems doable.\n\n(Hey Peter, wanna put this on your todo list for that privilege-system\nwork?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 May 2000 22:37:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious problem within authentication subsystem in 7.0 "
}
] |
[
{
"msg_contents": "Since logging with syslog is a documented feature, we need to fix it\nanyway, IMHO. Here is a proposed fix for the \"too long message\" problem of\nsyslog.\n\nThe included patches do the following:\n\no modifies write_syslog in utils/misc/trace.c\n\no if the message is too long (currently longer than 128 bytes), then\ndivides it into smaller message segments. \"128\" would probably be too\nsmall for some platforms, but I don't know what the actual safe\nlimit of syslog messages is on a particular platform (512 seems to be safe on\nLinux, for example).\n\no division of the message is carefully done so that it does not break\nword boundaries.\n\no each message is prefixed with [logid-seq] where logid is incremented\nat each write_syslog call, and seq is incremented at each syslog call\nwithin a write_syslog call. This will prevent syslog from suppressing\n\"same\" messages.\n\no tested on Linux (RH4.2) and FreeBSD 4.0.\n\nComments are welcome....\n--\nTatsuo Ishii\n\nBTW, here are some examples from actual syslog messages:\n\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-1] NOTICE: QUERY DUMP: { SEQSCAN :startup_cost 0.00 :total_cost 3.22 :rows 122 :width 81 :state <> :qptargetlist ({ TARGETENTRY\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-2] :resdom { RESDOM :resno 1 :restype 19 :restypmod -1 :resname typname :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false }\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-3] :expr { VAR :varno 1 :varattno 1 :vartype 19 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1}} { TARGETENTRY :resdom {\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-4] RESDOM :resno 2 :restype 23 :restypmod -1 :resname typowner :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr {\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-5] VAR :varno 1 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 2}} { TARGETENTRY :resdom { RESDOM\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-6] 
:resno 3 :restype 21 :restypmod -1 :resname typlen :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-7] 1 :varattno 3 :vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 3}} { TARGETENTRY :resdom { RESDOM :resno 4\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-8] :restype 21 :restypmod -1 :resname typprtlen :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-9] :varattno 4 :vartype 21 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 4}} { TARGETENTRY :resdom { RESDOM :resno 5\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-10] :restype 16 :restypmod -1 :resname typbyval :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-11] :varattno 5 :vartype 16 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 5}} { TARGETENTRY :resdom { RESDOM :resno 6\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-12] :restype 18 :restypmod -1 :resname typtype :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-13] :varattno 6 :vartype 18 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 6}} { TARGETENTRY :resdom { RESDOM :resno 7\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-14] :restype 16 :restypmod -1 :resname typisdefined :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-15] :varattno 7 :vartype 16 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 7}} { TARGETENTRY :resdom { RESDOM :resno 8\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-16] :restype 18 :restypmod -1 :resname typdelim :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-17] :varattno 8 :vartype 18 
:vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 8}} { TARGETENTRY :resdom { RESDOM :resno 9\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-18] :restype 26 :restypmod -1 :resname typrelid :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-19] :varattno 9 :vartype 26 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 9}} { TARGETENTRY :resdom { RESDOM :resno 10\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-20] :restype 26 :restypmod -1 :resname typelem :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-21] :varattno 10 :vartype 26 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 10}} { TARGETENTRY :resdom { RESDOM :resno 11\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-22] :restype 24 :restypmod -1 :resname typinput :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-23] :varattno 11 :vartype 24 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 11}} { TARGETENTRY :resdom { RESDOM :resno 12\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-24] :restype 24 :restypmod -1 :resname typoutput :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-25] :varattno 12 :vartype 24 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 12}} { TARGETENTRY :resdom { RESDOM :resno 13\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-26] :restype 24 :restypmod -1 :resname typreceive :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-27] :varattno 13 :vartype 24 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 13}} { TARGETENTRY :resdom { RESDOM :resno 14\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-28] :restype 24 :restypmod -1 :resname typsend 
:reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-29] :varattno 14 :vartype 24 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 14}} { TARGETENTRY :resdom { RESDOM :resno 15\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-30] :restype 18 :restypmod -1 :resname typalign :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-31] :varattno 15 :vartype 18 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 15}} { TARGETENTRY :resdom { RESDOM :resno 16\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-32] :restype 25 :restypmod -1 :resname typdefault :reskey 0 :reskeyop 0 :ressortgroupref 0 :resjunk false } :expr { VAR :varno 1\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-33] :varattno 16 :vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 16}}) :qpqual <> :lefttree <> :righttree <>\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: [1-34] :extprm () :locprm () :initplan <> :nprm 0 :scanrelid 1 }\nMay 23 10:58:51 srapc968-yotsuya postgres[9902]: NOTICE: QUERY PLAN: Seq Scan on pg_type (cost=0.00..3.22 rows=122 width=81)",
"msg_date": "Tue, 23 May 2000 11:06:24 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "syslog fix"
},
{
"msg_contents": "> > Since logging with syslog is a documented feature, we need to fix it\n> > anyway, IMHO. Here is a proposed fix for \"too long message\" problem of\n> > syslog.\n> \n> Do you plan to get this into 7.0.1? If not, let me take your patch and\n> integrate it into my new configuration system, because all that code has\n> moved around quite a bit in there.\n\nI would like the fix to appear in 7.0.1.\n--\nTatsuo Ishii\n\n",
"msg_date": "Tue, 23 May 2000 23:38:57 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: syslog fix"
},
{
"msg_contents": "> > > Since logging with syslog is a documented feature, we need to fix it\n> > > anyway, IMHO. Here is a proposed fix for \"too long message\" problem of\n> > > syslog.\n> > \n> > Do you plan to get this into 7.0.1? If not, let me take your patch and\n> > integrate it into my new configuration system, because all that code has\n> > moved around quite a bit in there.\n> \n> I would like the fix appear in 7.0.1.\n\nFix committed.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 26 May 2000 21:20:12 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: syslog fix"
}
] |
[
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n>On Mon, May 22, 2000 at 03:46:39PM +0800, Michael Robinson wrote:\n>> MySQL is extremely well suited for it: the data is essentially \"read-only\"\n>> so transactions, locking, etc., are not an issue, \n>\n>People keep claiming that applications that are essentially \"read-only\"\n>don't need transactions. I'll agree in the limit, that truly read only\n>databases don't, but I think a lot of people might be surprised at how\n>little writing you need before you get into trouble. \n\nVery true. However, if you can guarantee that there is only ever one\nwriter (e.g., a batch process), and you don't mind the occasional dirty\nread, you don't need any locking at all.\n\n\t-Michael Robinson\n\n",
"msg_date": "Tue, 23 May 2000 12:36:16 +0800 (+0800)",
"msg_from": "Michael Robinson <[email protected]>",
"msg_from_op": true,
"msg_subject": "ROLAP (was Re: A test to add to the crashme test)"
},
{
"msg_contents": "On Tue, May 23, 2000 at 12:36:16PM +0800, Michael Robinson wrote:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> >On Mon, May 22, 2000 at 03:46:39PM +0800, Michael Robinson wrote:\n> >> MySQL is extremely well suited for it: the data is essentially \"read-only\"\n> >> so transactions, locking, etc., are not an issue, \n> >\n> >People keep claiming that applications that are essentially \"read-only\"\n> >don't need transactions. I'll agree in the limit, that truly read only\n> >databases don't, but I think a lot of people might be surprised at how\n> >little writing you need before you get into trouble. \n> \n> Very true. However, if you can guarantee that there is only ever one\n> writer (e.g., a batch process), and you don't mind the occasional dirty\n> read, you don't need any locking at all.\n\nRight - my whole diatribe was actually about locking. And the case you\ndescribe is what I meant by \"in the limit, [a] truly read only database\".\nIf all your writing is planned, controlled, batch updates, fine.\n\nBut my point stands: you only need _one_ ad hoc writer to potentially\nget you into trouble, with all the messiness of lock contention, etc.\n\nThe workarounds mozilla.org were looking at to fix this problem\ninvolve keeping multiple databases and syncing periodically, or lowering\nthe priority of INSERTS (!): they looked like treating the symptom,\nrather than addressing the problem, to me.\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Tue, 23 May 2000 09:33:15 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ROLAP (was Re: A test to add to the crashme test)"
}
] |
[
{
"msg_contents": "Hello!\n\nI created a sorted list of header files which are referred to by other installed .h files\nbut are not present in src/backend/Makefile.\nThe list is based on the output of gcc -E ${installed}.h\nCould somebody add them to the install-headers target?\n\nI have a little idea to solve the problem of missing headers\n(because I created my list on linux_sparc, and it may be different on other\nsystems; I don't know, can it really be different?).\nSo what do you think about a \"make-install-headers-with-deps\" script?\n\n--\n nek;(",
"msg_date": "Tue, 23 May 2000 10:14:45 +0200 (CEST)",
"msg_from": "Peter Vazsonyi <[email protected]>",
"msg_from_op": true,
"msg_subject": "headers, who need to be installed"
}
] |
[
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The assumption that the old password can be reused between\n> password connections seems pretty unwise.\n\nI think it's OK, and a useful convenience, if you are reconnecting with\nthe same username as before. What I had in mind was to discard the\nprior password if the \\connect command specifies a username.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 May 2000 10:13:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Serious problem within authentication subsystem in 7.0 "
},
{
"msg_contents": "Tom Lane writes:\n\n> > The assumption that the old password can be reused between\n> > password connections seems pretty unwise.\n> \n> I think it's OK, and a useful convenience, if you are reconnecting with\n> the same username as before. What I had in mind was to discard the\n> prior password if the \\connect command specifies a username.\n\nBut if you have different passwords between databases then you still\nhave the same problem, only at a different scale.\n\n-- \nPeter Eisentraut Sernanders väg 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 23 May 2000 23:39:20 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious problem within authentication subsystem in\n 7.0"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> But if you have different passwords between databases then you are still\n> having the same problem, only at a different scale.\n\n... which we do not have, at the moment; there's one password per user\nper installation, and since psql's \\connect doesn't allow reconnection\nto a different postmaster (does it?) the same password should still work.\nUnless it's been changed meanwhile, I suppose.\n\nIn any case, isn't psql's logic such that it will prompt again if the\nprevious password doesn't work? I'd hate to think I only get one try to\nenter my password correctly...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 May 2000 18:19:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Serious problem within authentication subsystem in 7.0 "
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <[email protected]> writes:\n> > But if you have different passwords between databases then you are still\n> > having the same problem, only at a different scale.\n> \n> ... which we do not have, at the moment; there's one password per user\n> per installation,\n\nNo, pg_hba.conf allows per database passwords.\n\n> In any case, isn't psql's logic such that it will prompt again if the\n> previous password doesn't work?\n\nNo, it will only prompt you for a password if it notices one is required.\nIf that's wrong the connection attempt fails and you can try again (to\nconnect). That's reasonable enough I think.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 24 May 2000 23:50:39 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Serious problem within authentication subsystem in\n 7.0"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> ... which we do not have, at the moment; there's one password per user\n>> per installation,\n\n> No, pg_hba.conf allows per database passwords.\n\nOh you're right, I had forgotten about that barely-supported hack for\nalternate password files.\n\n>> In any case, isn't psql's logic such that it will prompt again if the\n>> previous password doesn't work?\n\n> No, it will only prompt you for a password if it notices one is required.\n> If that's wrong the connection attempt fails and you can try again (to\n> connect). That's reasonable enough I think.\n\nSeems like if it inserts the old password and notices that the error is\n'bogus password' then it should prompt you for a new one.\n\nBTW, I notice that there seems to be a nasty portability bug in that\nlogic: it'll try to \"free(prompted_password)\" even if prompted_password\nis NULL. On a lot of systems that's a recipe for a coredump, or at\nleast used to be (is everyone ANSI enough now to get this right??)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 May 2000 18:07:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Serious problem within authentication subsystem in 7.0 "
}
] |
[
{
"msg_contents": "\nJust upgraded Majordomo2 to the latest version, to fix a problem that some\nhave been noticing where USERID@domain != userid@domain ... just want to\nmake sure mail is going through properly now ..\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 23 May 2000 17:36:55 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ignore, just testing ..."
}
] |
[
{
"msg_contents": "\nIn an effort to help ppl follow more specific topics, I'm trying to split\na few threads into seperate lists and archives ... I *think* that I'm\nsplitting it relatively large segments, based on current threads, but if\nppl can think of something better, please feel free ...\n\nI've created three lists:\n\n\tpgsql-hackers-fmgr: function manager related issues\n\tpgsql-hackers-smgr: storage manager related issues\n\tpgsql-hackers-oo: OO related issues\n\nThe split only works if ppl make use of them ... \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 23 May 2000 18:13:26 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "New Lists ... "
},
{
"msg_contents": "The Hermit Hacker writes:\n\n> I've created three lists:\n> \n> \tpgsql-hackers-fmgr: function manager related issues\n> \tpgsql-hackers-smgr: storage manager related issues\n> \tpgsql-hackers-oo: OO related issues\n> \n> The split only works if ppl make use of them ... \n\nThe fact that many people don't even make use of the current split in a\nrecognizable fashion makes me pretty pessimistic about that goal. But one\ncan always try.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Tue, 23 May 2000 23:45:26 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New Lists ... "
},
{
"msg_contents": "On Tue, 23 May 2000, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n> \n> > I've created three lists:\n> > \n> > \tpgsql-hackers-fmgr: function manager related issues\n> > \tpgsql-hackers-smgr: storage manager related issues\n> > \tpgsql-hackers-oo: OO related issues\n> > \n> > The split only works if ppl make use of them ... \n> \n> The fact that many people don't even make use of the current split in a\n> recognizable fashion makes me pretty pessimistic about that goal. But one\n> can always try.\n\nIts up to those using it to make it work ...\n\n\n",
"msg_date": "Tue, 23 May 2000 19:31:18 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New Lists ... "
},
{
"msg_contents": "\nOne more created, as per Vadim's request: pgsql-hackers-wal ... \n\nOn Tue, 23 May 2000, The Hermit Hacker wrote:\n\n> On Tue, 23 May 2000, Peter Eisentraut wrote:\n> \n> > The Hermit Hacker writes:\n> > \n> > > I've created three lists:\n> > > \n> > > \tpgsql-hackers-fmgr: function manager related issues\n> > > \tpgsql-hackers-smgr: storage manager related issues\n> > > \tpgsql-hackers-oo: OO related issues\n> > > \n> > > The split only works if ppl make use of them ... \n> > \n> > The fact that many people don't even make use of the current split in a\n> > recognizable fashion makes me pretty pessimistic about that goal. But one\n> > can always try.\n> \n> Its up to those using it to make it work ...\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 23 May 2000 19:36:52 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New Lists ... "
}
] |
[
{
"msg_contents": "> > I've created three lists:\n> > \n> > \tpgsql-hackers-fmgr: function manager related issues\n> > \tpgsql-hackers-smgr: storage manager related issues\n> > \tpgsql-hackers-oo: OO related issues\n> > \n> > The split only works if ppl make use of them ... \n> \n> The fact that many people don't even make use of the current \n> split in a recognizable fashion makes me pretty pessimistic\n> about that goal. But one can always try.\n\nThe goal is to give more convenient way for everyday discussions\nto groups working on particulare subjects.\nAs Jan noted in SF, big projects require *groups* to succeed,\nin shorter time...\nThese groups will put \"current development statement\" to global\nhackers-list from time to time.\n\n?\n\nVadim\n",
"msg_date": "Tue, 23 May 2000 15:04:02 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: New Lists ... "
}
] |
[
{
"msg_contents": "Seems BSD/OS 4.01 has setproctitle() in libutil.o, even though there is\nno include file nor manual page.\n\nI checked the library source, and it is a very light-weight function. \nIt grabs an argv global from crt0.o, and changes the ps args. Very fast.\n\nI now see that systems that use setproctitle() seem totally broken for\nupdates. The code says:\n\n setproctitle(\"%s %s %s %s %s\", execname, hostname, username, db...\n\n#define PS_SET_STATUS(status) \\\n do { strcpy(Ps_status_buffer, (status)); } while (0)\n\nOf course, there is no linkage between Ps_status_buffer and the\nsetproctitle args here, so it is no-op. The fix is to move\nsetproctitle() down into PS_SET_STATUS().\n\nSeems this is Marc's new code:\n\n\tdate: 2000/05/12 13:58:24; author: scrappy; state: Exp; lines: +2 -1\n\t\n\tAdd two checks ... one for setproctitle and one for -lutil ...\n\t\n\tDon't do anything with them at this time, but am working on that ...\n\nI know we have been talking about using setproctitle() in all cases that\nsupport it, and in fact we now do that. What we don't do is use\nsetproctitle() to update the status for each query.\n\nSo it seems that he has enabled it on my platform. Do people want\nsetproctitle() to update for every query for 7.01? I have seen FreeBSD\nand BSDI implementations, and they are both light-weight.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 May 2000 18:48:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "setproctitle()"
},
{
"msg_contents": "On Tue, 23 May 2000, Bruce Momjian wrote:\n\n> Seems BSD/OS 4.01 has setproctitle() in libutil.o, even though there is\n> no include file nor manual page.\n> \n> I checked the library source, and it is a very light-weight function. \n> It grabs an argv global from crt0.o, and changes the ps args. Very fast.\n> \n> I now see that systems that use setproctitle() seem totally broken for\n> updates. The code says:\n> \n> setproctitle(\"%s %s %s %s %s\", execname, hostname, username, db...\n> \n> #define PS_SET_STATUS(status) \\\n> do { strcpy(Ps_status_buffer, (status)); } while (0)\n> \n> Of course, there is no linkage between Ps_status_buffer and the\n> setproctitle args here, so it is no-op. The fix is to move\n> setproctitle() down into PS_SET_STATUS().\n> \n> Seems this is Marc's new code:\n> \n> \tdate: 2000/05/12 13:58:24; author: scrappy; state: Exp; lines: +2 -1\n> \t\n> \tAdd two checks ... one for setproctitle and one for -lutil ...\n> \t\n> \tDon't do anything with them at this time, but am working on that ...\n> \n> I know we have been talking about using setproctitle() in all cases that\n> support it, and in fact we now do that. What we don't do is use\n> setproctitle() to update the status for each query.\n> \n> So it seems that he has enabled it on my platform. Do people want\n> setproctitle() to update for every query for 7.01? I have seen FreeBSD\n> and BSDI implementations, and they are both light-weight.\n> \n> Comments?\n\nI would like to see it, but not for v7.0.1 ... unless you can figure out a\ncleaner way of doing it, the coding changes would be extensive ...\n\nI looked at it, and unless we go with global variables *yeech*, you would\nhave to pass down the \"fixed\" part of the setproctitle to sub-functions\n(ie. argv[0-4](?)) ... 
I asked on one of the freebsd lists if anyone could\nsuggest a way of getting 'argv[0]', but never did hear anything back ...\n\nIf you want, you could just added, for v7.0.1, a simple addition of 'if\n__FreeBSD__' to the code, so that setproctitle is only used under FreeBSD\n...\n\n\n",
"msg_date": "Tue, 23 May 2000 20:07:00 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "> > Comments?\n> \n> I would like to see it, but not for v7.0.1 ... unless you can figure out a\n> cleaner way of doing it, the coding changes would be extensive ...\n> \n> I looked at it, and unless we go with global variables *yeech*, you would\n> have to pass down the \"fixed\" part of the setproctitle to sub-functions\n> (ie. argv[0-4](?)) ... I asked on one of the freebsd lists if anyone could\n> suggest a way of getting 'argv[0]', but never did hear anything back ...\n> \n> If you want, you could just added, for v7.0.1, a simple addition of 'if\n> __FreeBSD__' to the code, so that setproctitle is only used under FreeBSD\n> ...\n\nNo, it is pretty easy to do it in pg_status.h alone. The trick is to\ndo sprintf(ps_status_buffer, \"val val %s\"), then use that in\nsetproctitle for every command.\n\nI will code it up if no one objects.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 May 2000 19:28:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "I am on it. I will post the code here.\n\n> * The Hermit Hacker <[email protected]> [000523 16:50] wrote:\n> > \n> > I would like to see it, but not for v7.0.1 ... unless you can figure out a\n> > cleaner way of doing it, the coding changes would be extensive ...\n> > \n> > I looked at it, and unless we go with global variables *yeech*, you would\n> > have to pass down the \"fixed\" part of the setproctitle to sub-functions\n> > (ie. argv[0-4](?)) ... I asked on one of the freebsd lists if anyone could\n> > suggest a way of getting 'argv[0]', but never did hear anything back ...\n> \n> Can you clarify the question please?\n> \n> Provide pseudo-code and I should be able to whip something up.\n> \n> --\n> -Alfred Perlstein - [[email protected]|[email protected]]\n> \"I have the heart of a child; I keep it in a jar on my desk.\"\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 May 2000 19:28:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "* The Hermit Hacker <[email protected]> [000523 16:50] wrote:\n> \n> I would like to see it, but not for v7.0.1 ... unless you can figure out a\n> cleaner way of doing it, the coding changes would be extensive ...\n> \n> I looked at it, and unless we go with global variables *yeech*, you would\n> have to pass down the \"fixed\" part of the setproctitle to sub-functions\n> (ie. argv[0-4](?)) ... I asked on one of the freebsd lists if anyone could\n> suggest a way of getting 'argv[0]', but never did hear anything back ...\n\nCan you clarify the question please?\n\nProvide pseudo-code and I should be able to whip something up.\n\n--\n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Tue, 23 May 2000 16:54:44 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> No, it is pretty easy to do it in pg_status.h alone. The trick is to\n> do sprintf(ps_status_buffer, \"val val %s\"), then use that in\n> setproctitle for every command.\n\n> I will code it up if no one objects.\n\nWell, at this point committed changes are going to go out in 7.0.1\nwith essentially zero beta testing. Are you sure you've not introduced\nany portability issues?\n\nAs long as the changes are only enabled for platform(s) you've been able\nto test, I've got no objection. Otherwise I'm a bit worried...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 May 2000 00:24:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setproctitle() "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > No, it is pretty easy to do it in pg_status.h alone. The trick is to\n> > do sprintf(ps_status_buffer, \"val val %s\"), then use that in\n> > setproctitle for every command.\n> \n> > I will code it up if no one objects.\n> \n> Well, at this point committed changes are going to go out in 7.0.1\n> with essentially zero beta testing. Are you sure you've not introduced\n> any portability issues?\n> \n> As long as the changes are only enabled for platform(s) you've been able\n> to test, I've got no objection. Otherwise I'm a bit worried...\n\nI am too, but Marc put the setproctitle() stuff in there. I just made\nit work. Whether it should be in there is a separate issue I will let\nMarc address.\n\nIf we turn it off in configure.in, then it will stay dormant until\nactivated again.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 May 2000 01:09:36 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "On Wed, 24 May 2000, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > > No, it is pretty easy to do it in pg_status.h alone. The trick is to\n> > > do sprintf(ps_status_buffer, \"val val %s\"), then use that in\n> > > setproctitle for every command.\n> > \n> > > I will code it up if no one objects.\n> > \n> > Well, at this point committed changes are going to go out in 7.0.1\n> > with essentially zero beta testing. Are you sure you've not introduced\n> > any portability issues?\n> > \n> > As long as the changes are only enabled for platform(s) you've been able\n> > to test, I've got no objection. Otherwise I'm a bit worried...\n> \n> I am too, but Marc put the setproctitle() stuff in there. I just made\n> it work. Whether it should be in there is a separate issue I will let\n> Marc address.\n> \n> If we turn it off in configure.in, then it will stay dormant until\n> activated again.\n\nI have no probs with making it \"dormant\" for v7.0.1 and then activiating\nit for testing afterwards ...\n\n\n",
"msg_date": "Wed, 24 May 2000 09:03:08 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "> > I am too, but Marc put the setproctitle() stuff in there. I just made\n> > it work. Whether it should be in there is a separate issue I will let\n> > Marc address.\n> > \n> > If we turn it off in configure.in, then it will stay dormant until\n> > activated again.\n> \n> I have no probs with making it \"dormant\" for v7.0.1 and then activiating\n> it for testing afterwards ...\n\nIs that what you want to do? It will need to be done in configure.in. \nI don't know how to do it in there.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 May 2000 10:20:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "\ndone ... :)\n\n\nOn Wed, 24 May 2000, Bruce Momjian wrote:\n\n> > > I am too, but Marc put the setproctitle() stuff in there. I just made\n> > > it work. Whether it should be in there is a separate issue I will let\n> > > Marc address.\n> > > \n> > > If we turn it off in configure.in, then it will stay dormant until\n> > > activated again.\n> > \n> > I have no probs with making it \"dormant\" for v7.0.1 and then activiating\n> > it for testing afterwards ...\n> \n> Is that what you want to do? It will need to be done in configure.in. \n> I don't know how to do it in there.\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Wed, 24 May 2000 11:59:01 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "> \n> done ... :)\n> \n\nGreat. I am adding it to my configure run. Let's re-enable in 7.1.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 May 2000 11:06:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "If y'all just want to be a little patient, I have this all coded up as per\nvarious previous discussions and I'm just waiting for the branch. :)\n\nBruce Momjian writes:\n\n> I know we have been talking about using setproctitle() in all cases that\n> support it, and in fact we now do that. What we don't do is use\n> setproctitle() to update the status for each query.\n> \n> So it seems that he has enabled it on my platform. Do people want\n> setproctitle() to update for every query for 7.01? I have seen FreeBSD\n> and BSDI implementations, and they are both light-weight.\n> \n> Comments?\n> \n> \n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 24 May 2000 23:54:24 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setproctitle()"
}
] |
[
{
"msg_contents": "\n\nhttp://www.mysql.com/download_3.23.html\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Tue, 23 May 2000 19:55:23 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "MySQL now supports transactions ... "
},
{
"msg_contents": "Apparently by way of some Berkeley DB code....\n\nhttp://web.mysql.com/php/manual.php3?section=BDB\n\n\n\nThe Hermit Hacker wrote:\n\n> http://www.mysql.com/download_3.23.html\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n",
"msg_date": "Tue, 23 May 2000 20:02:20 -0400",
"msg_from": "Ned Lilly <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL now supports transactions ..."
},
{
"msg_contents": "At 08:02 PM 5/23/00 -0400, you wrote:\n\n> Apparently by way of some Berkeley DB code....\n> \n> http://web.mysql.com/php/manual.php3?section=BDB\n\nYeah, that's correct. As there was no existing transaction layer in\nplace, it was pretty straightforward to add Berkeley DB. They had\nto abstract the ISAM layer that they've used to date, but there were\nno serious technical issues.\n\nYou can choose which kind of tables you want (myisam or bdb) at\ntable creation time. BDB tables have the standard ACID properties\nthat Berkeley DB provides generally, via the standard mechanisms\n(two-phase locking, write-ahead logging, and so forth).\n\nThe 3.23.16 release is decidedly alpha, but is in good enough\nshape to distribute. My bet is that we'll hammer out a few dumb\nbugs in the next weeks, and they'll cut something more stable\nsoon.\n\nYou need to download the 3.1.5 distribution of Berkeley DB from\nMySQL.com. We're not distributing that version from Sleepycat.\nWe're in the middle of the release cycle for our 3.1 release, and\nexpect to cut a stable one in the next week or so. MySQL relies\non a couple of features we added to 3.1 for them, so they can't\nrun with the 3.0 release that's up on our site now.\n\nIt's been pretty quiet since my message on Sunday, about the\ndifficulties in integrating Berkeley DB with the PostgreSQL backend.\nVadim (and others), what is your opinion? My impression is that\nthe project is too much trouble, but I'd be glad to hear from you\nfolks on the topic.\n\n\t\t\t\tmike\n\n",
"msg_date": "Tue, 23 May 2000 17:25:37 -0700",
"msg_from": "\"Michael A. Olson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL now supports transactions ..."
},
{
"msg_contents": "> Apparently by way of some Berkeley DB code....\n> \n> http://web.mysql.com/php/manual.php3?section=BDB\n> \n> \n> \n> The Hermit Hacker wrote:\n> \n> > http://www.mysql.com/download_3.23.html\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> \n\nYes, I see that too. It makes much more sense for them because they\nhave ordered heaps anyway, with secondary indexes, rather than our\nunordered heap and indexes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 May 2000 20:26:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL now supports transactions ..."
},
{
"msg_contents": "On Tue, 23 May 2000, Bruce Momjian wrote:\n\n> > Apparently by way of some Berkeley DB code....\n> > \n> > http://web.mysql.com/php/manual.php3?section=BDB\n> > \n> > \n> > \n> > The Hermit Hacker wrote:\n> > \n> > > http://www.mysql.com/download_3.23.html\n> > >\n> > > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > > Systems Administrator @ hub.org\n> > > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> > \n> > \n> \n> Yes, I see that too. It makes much more sense for them because they\n> have ordered heaps anyway, with secondary indexes, rather than our\n> unordered heap and indexes.\n\nJust figured a heads up was in order for those that have been using\nACID/transactions in their arguments :)\n\n\n",
"msg_date": "Tue, 23 May 2000 21:39:28 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MySQL now supports transactions ..."
}
] |
[
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> On Tue, 23 May 2000, Chris Bitmead wrote:\n> \n> > As far as I'm concerned, current postgres INHERIT, is exactly the same\n> > semantics as UNDER (apart from multiple inheritance).\n> \n> Agreed, but note that according to the final SQL99 standard the UNDER\n> clause comes before the originally defined column list, which does make\n> sense because that's how the columns end up.\n\nAre you sure? It actually looks to me like you can have the UNDER before\nor after. What sense do you make of that? (Note the <table element\nlist> occuring before and after the <subtable clause>.\n\n <table definition> ::=\n CREATE [ <table scope> ] TABLE <table name>\n <table contents source>\n [ ON COMMIT <table commit action> ROWS ]\n\n <table contents source> ::=\n <table element list>\n | OF <user-defined type>\n [ <subtable clause> ]\n [ <table element list> ]\n <subtable clause> ::=\n UNDER <supertable clause>\n",
"msg_date": "Wed, 24 May 2000 10:00:46 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "\n> Chris Bitmead wrote:\n> >Peter Eisentraut wrote:\n> > Agreed, but note that according to the final SQL99 standard the UNDER\n> > clause comes before the originally defined column list, which does make\n> > sense because that's how the columns end up.\n> Are you sure? It actually looks to me like you can have the UNDER before\n> or after. What sense do you make of that? (Note the <table element\n> list> occuring before and after the <subtable clause>.\n> <table definition> ::=\n> CREATE [ <table scope> ] TABLE <table name>\n> <table contents source>\n> [ ON COMMIT <table commit action> ROWS ]\n>\n> <table contents source> ::=\n> <table element list>\n> | OF <user-defined type>\n> [ <subtable clause> ]\n> [ <table element list> ]\n> <subtable clause> ::=\n> UNDER <supertable clause>\n\nActually, from this I'd say Peter was right unless I'm horribly misreading\nthe\ngrammar piece provided, <table element list> doesn't come both before and\nafter <subtable clause> in the <table contents source>, it is either alone,\nor part of the OF...<table element list> with the | breaking the two\noptions.\n\n\n",
"msg_date": "Tue, 23 May 2000 17:43:05 -0700",
"msg_from": "\"Stephan Szabo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL3 UNDER"
},
{
"msg_contents": "You're right. I'll have to look at making changes.\n\nStephan Szabo wrote:\n> \n> > Chris Bitmead wrote:\n> > >Peter Eisentraut wrote:\n> > > Agreed, but note that according to the final SQL99 standard the UNDER\n> > > clause comes before the originally defined column list, which does make\n> > > sense because that's how the columns end up.\n> > Are you sure? It actually looks to me like you can have the UNDER before\n> > or after. What sense do you make of that? (Note the <table element\n> > list> occuring before and after the <subtable clause>.\n> > <table definition> ::=\n> > CREATE [ <table scope> ] TABLE <table name>\n> > <table contents source>\n> > [ ON COMMIT <table commit action> ROWS ]\n> >\n> > <table contents source> ::=\n> > <table element list>\n> > | OF <user-defined type>\n> > [ <subtable clause> ]\n> > [ <table element list> ]\n> > <subtable clause> ::=\n> > UNDER <supertable clause>\n> \n> Actually, from this I'd say Peter was right unless I'm horribly misreading\n> the\n> grammar piece provided, <table element list> doesn't come both before and\n> after <subtable clause> in the <table contents source>, it is either alone,\n> or part of the OF...<table element list> with the | breaking the two\n> options.\n",
"msg_date": "Wed, 24 May 2000 10:49:51 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL3 UNDER"
}
] |
[
{
"msg_contents": "Here is the new setproctitle() code, already committed for 7.0.1.\n\nWorks under BSDI, and I assume FreeBSD too.\n\n---------------------------------------------------------------------------\n\n#define PS_INIT_STATUS(argc, argv, execname, username, hostname, dbname) \\\n do { \\\n sprintf(Ps_status_buffer, \"%s %s %s %s\", execname, hostname, username, dbname); \\\n } while (0)\n\n#define PS_CLEAR_STATUS() \\\n do { setproctitle(\"%s\", Ps_status_buffer); } while (0)\n\n#define PS_SET_STATUS(status) \\\n do { setproctitle(\"%s %s\", Ps_status_buffer, (status)); } while (0)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 May 2000 20:13:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "setproctitle()"
},
{
"msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> Bruce Momjian writes:\n> \n> > Here is the new setproctitle() code, already committed for 7.0.1.\n> \n> Isn't that 7.1 material?\n> \n> Anyway, as I said before, I have this stuff all written up and it should\n> even work on non-BSD and non-Linux systems. I'll get it tested and then we\n> can put the issue to rest, I hope.\n\nWell, the new setproctitle() code was getting used on my machine, and it\ndidn't work. Seems it is disabled now, so it will wait for your 7.1\nversion.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 May 2000 17:54:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setproctitle()"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Here is the new setproctitle() code, already committed for 7.0.1.\n\nIsn't that 7.1 material?\n\nAnyway, as I said before, I have this stuff all written up and it should\neven work on non-BSD and non-Linux systems. I'll get it tested and then we\ncan put the issue to rest, I hope.\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Wed, 24 May 2000 23:57:25 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setproctitle()"
}
] |
[
{
"msg_contents": "Is there a reason to keep SQL functions now that we have PL/PgSQL,\nexcept for backward compatibility? What do SQL functions do that can\nnot be done in PLpgSQL? Are they faster?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 May 2000 21:53:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Plpsql vs. SQL functions"
},
{
"msg_contents": "On Tue, 23 May 2000, Bruce Momjian wrote:\n\n> Is there a reason to keep SQL functions now that we have PL/PgSQL,\n> except for backward compatibility? What do SQL functions do that can\n> not be done in PLpgSQL? Are they faster?\n\nSQL function can return a new tuple. To my knowledge, PLpgSQL cannot.\nI hope someone can prove me wrong ;)\n\n-alex\n\n",
"msg_date": "Tue, 23 May 2000 22:25:02 -0400 (EDT)",
"msg_from": "Alex Pilosov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plpsql vs. SQL functions"
},
{
"msg_contents": "> On Tue, 23 May 2000, Bruce Momjian wrote:\n> \n> > Is there a reason to keep SQL functions now that we have PL/PgSQL,\n> > except for backward compatibility? What do SQL functions do that can\n> > not be done in PLpgSQL? Are they faster?\n> \n> SQL function can return a new tuple. To my knowledge, PLpgSQL cannot.\n> I hope someone can prove me wrong ;)\n\nMaybe. I know SQL can return multiple tuples.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 May 2000 22:30:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Plpsql vs. SQL functions"
}
] |
[
{
"msg_contents": "---------- Forwarded Message ----------\nSubject: Re: [HACKERS] SQL3 UNDER\nDate: Tue, 23 May 2000 21:53:24 -0400\nFrom: Robert B. Easter <[email protected]>\n\n\n\nOn Tue, 23 May 2000, Chris Bitmead wrote:\n> \"Robert B. Easter\" wrote:\n> > \n> > On Tue, 23 May 2000, Chris Bitmead wrote:\n> > > Maybe it would help if you have two examples. One that only uses UNDER,\n> > > and one that only uses INHERITS, and explain how one or the other can\n> > > work differently.\n> \n> Yes but how does a pure UNDER example actually work different to a pure\n> INHERITS example? You've created various tables below (combining INHERIT\n> and UNDER unfortunately), but how will the INHERITS hierarchies and\n> UNDER hierarchies actually work differently in practice?\n> \n\nI guess I've said most of what I can say about this idea. Attached is another\nGIF picture of the ideas, I suppose. If I come up with a good example, I'll\npost it.\n\nI'm willing to admit my idea could be very flawed! I'm hoping others in here\nwill find it worthy enough to try to find those flaws and examples on their own.\n\nI've started posting this OO stuff to [email protected]. I'll\ntry to not post anymore oo stuff in pgsql-hackers (if there is even anything\nelse say about this).\n\nGood luck,\nRobert B. Easter\n\n\n\n\n\n-------------------------------------------------------\n\n\n\n-- \nRobert B. Easter\[email protected]",
"msg_date": "Tue, 23 May 2000 22:03:19 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Re: SQL3 UNDER"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n> \n> I've started posting this OO stuff to [email protected].\n\nWhere can I subscribe to that list ?\n\nI must have missed or lost the announcement possibly when I lost 3 days worth \nof mail sunday night ;(\n\n> I'll try to not post anymore oo stuff in pgsql-hackers (if there is even \n> anything else say about this).\n\nThe OO-PostgreSQL discussion is not even near being over ...\n\n-------------\nHannu\n",
"msg_date": "Wed, 24 May 2000 17:04:10 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Re: SQL3 UNDER"
},
{
"msg_contents": "On Wed, 24 May 2000, Hannu Krosing wrote:\n> \"Robert B. Easter\" wrote:\n> > \n> > I've started posting this OO stuff to [email protected].\n> \n> Where can I subscribe to that list ?\n\nJust send a message to [email protected]\nwith subject \"subscribe\"\n\n> \n> \n> The OO-PostgreSQL discussion is not even near being over ...\n> \n\nWell, I hope some one sees my point about INHERITS and UNDER\nmaybe being complementary. UNDER is a single inheritance container/tree all\ncontained inside maximal supertable. INHERITS provides multiple inheritance\nand can provide links between tables in different containers/trees, subject to\nsome restrictions. I think it deserves some looking at rather than just doing\naway with INHERIT for just UNDER. (again I can be wrong). I guess its hard to\nexplain. I still need to provide good examples. I can best describe the\ndifference as UNDER creates circles within circles representing tables and\nsubtables. INHERITS provides for circles/tables to overlap (to be cloned in a\nsense) and allows it multiple overlapping/merging. The INHERITS does it as it\nis now that way, by merging same name attributes from two or more parents into a\nsingle child. INHERIT is like cells reproducing using one or n parents. \nUNDER is like a single cell making baby cells inside of itself. :-) hehe\n\n",
"msg_date": "Wed, 24 May 2000 13:19:18 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Re: SQL3 UNDER"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n> Well, I hope some one sees my point about INHERITS and UNDER\n> maybe being complementary. UNDER is a single inheritance container/tree all\n> contained inside maximal supertable. INHERITS provides multiple inheritance\n> and can provide links between tables in different containers/trees, subject to\n> some restrictions. I think it deserves some looking at rather than just doing\n> away with INHERIT for just UNDER. (again I can be wrong). I guess its hard to\n> explain. I still need to provide good examples. I can best describe the\n> difference as UNDER creates circles within circles representing tables and\n> subtables. INHERITS provides for circles/tables to overlap (to be cloned in a\n> sense) and allows it multiple overlapping/merging. The INHERITS does it as it\n> is now that way, by merging same name attributes from two or more parents into a\n> single child. INHERIT is like cells reproducing using one or n parents.\n> UNDER is like a single cell making baby cells inside of itself. :-) hehe\n\nWould you still be having these thoughts if you were looking at the\nolder SQL3 draft that included multiple inheritance UNDER? The newer\nUNDER appears to be a subset, which I presume they adopted to get the\nproposal out the door quicker. Personally I'd like to implement the\nSQL3-1994 extensions as well, because they actually seemed well thought\nout (I'm thinking particularly of the rename stuff).\n",
"msg_date": "Thu, 25 May 2000 09:34:58 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Re: SQL3 UNDER"
},
{
"msg_contents": "On Wed, 24 May 2000, Chris Bitmead wrote:\n> \"Robert B. Easter\" wrote:\n> > Well, I hope some one sees my point about INHERITS and UNDER\n> > maybe being complementary. UNDER is a single inheritance container/tree all\n> > contained inside maximal supertable. INHERITS provides multiple inheritance\n> > and can provide links between tables in different containers/trees, subject to\n> > some restrictions. I think it deserves some looking at rather than just doing\n> > away with INHERIT for just UNDER. (again I can be wrong). I guess its hard to\n> > explain. I still need to provide good examples. I can best describe the\n> > difference as UNDER creates circles within circles representing tables and\n> > subtables. INHERITS provides for circles/tables to overlap (to be cloned in a\n> > sense) and allows it multiple overlapping/merging. The INHERITS does it as it\n> > is now that way, by merging same name attributes from two or more parents into a\n> > single child. INHERIT is like cells reproducing using one or n parents.\n> > UNDER is like a single cell making baby cells inside of itself. :-) hehe\n> \n> Would you still be having these thoughts if you were looking at the\n> older SQL3 draft that included multiple inheritance UNDER? The newer\n> UNDER appears to be a subset, which I presume they adopted to get the\n> proposal out the door quicker. Personally I'd like to implement the\n> SQL3-1994 extensions as well, because they actually seemed well thought\n> out (I'm thinking particularly of the rename stuff).\n\nThere are documents at\n\nftp://jerry.ece.umassd.edu/isowg3/dbl/BASEdocs/sql4hold/\n\nthat maybe we should look at. It *might* represent what is planned for SQL4. \nIt shows UNDER accepting multiple supertables like the 1994 draft. However,\nthese documents are dated 1996 and probably don't really represent SQL4, which\nmight take many years still until its a standard. 
By the time SQL4 comes out,\nthere's no telling what it will look like.\n\nI'm thinking, it might be best just to implement UNDER as it stands in the\nofficial standard for now. Leave INHERIT the way it is (for the most part) and\nimplement UNDER separately. Continue to use inherit if you need multiple\ninheritance. If you implement multiple inherit UNDER, it will create a user\nbase that will depend on that functionality. Later, it will be difficult to\nchange that functionality if the official standard for UNDER does switch to\nmultiple inheritance but in a way that is incompatible with yours. Its not a\ngood idea to second guess the future standard. People already use INHERIT the\nway it is and it can be used in combination with UNDER.\n\n -- \nRobert B. Easter\[email protected]\n",
"msg_date": "Wed, 24 May 2000 19:49:07 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Re: SQL3 UNDER"
}
] |
[
{
"msg_contents": "while doing a pg_dump of a table after postgresql made a mess of itself:\n\ndumpRules(): SELECT failed for table website. Explanation from backend: 'ERROR: cache lookup of attribute 1 in relation 9892634 failed\n'.\n\nGuys, there has to be a simple command to fix a corrupted database.\n\nI'm really killing myself over here trying to mix REINDEX, VACUUM\nalong with creating temp tables and reinserting the data which gives me:\n\ndumpRules(): SELECT failed for table webmaster. Explanation from backend: 'ERROR: cache lookup of attribute 2 in relation 9892495 failed\n'.\n\n:(\n\nYup, we're still willing to pay for support.\n\nThe database isn't even active but seems to be corrupting itself just\nby running these administrative commands.\n\nWould anyone like access to the box? I'm currently recompiling a what\nI hope is 7.0.1 to give it a shot.\n\nthanks,\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 24 May 2000 02:10:58 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "yowch: dumpRules(): SELECT failed for table website."
},
{
"msg_contents": "Alfred Perlstein <[email protected]> writes in [email protected]:\n\n> while doing a pg_dump of a table after postgresql made a mess of itself:\n\n> dumpRules(): SELECT failed for table website. Explanation from\n> backend: 'ERROR: cache lookup of attribute 1 in relation 9892634\n> failed '.\n\nI just got a message like that earlier this afternoon. My problem was\nthat I had created a view and later dropped and recreated one of the\ntables the view referenced. Dropping and recreating the view fixed\nthings.\n\n",
"msg_date": "24 May 2000 18:19:33 +0900",
"msg_from": "SL Baur <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: yowch: dumpRules(): SELECT failed for table website."
},
{
"msg_contents": "* Hiroshi Inoue <[email protected]> [000524 02:40] wrote:\n> > -----Original Message-----\n> > From: [email protected] [mailto:[email protected]]On\n> > Behalf Of Alfred Perlstein\n> > \n> > while doing a pg_dump of a table after postgresql made a mess of itself:\n> > \n> > dumpRules(): SELECT failed for table website. Explanation from \n> > backend: 'ERROR: cache lookup of attribute 1 in relation 9892634 failed\n> > '.\n> > \n> > Guys, there has to be a simple command to fix a corrupted database.\n> > \n> > I'm really killing myself over here trying to mix REINDEX, VACUUM\n> > along with creating temp tables and reinserting the data which gives me:\n> >\n> \n> How did you issue REINDEX command ?\n\npostmaster -p 1080 -o \"-O -P\"\nwas run\nthen:\n\npsql -p 1080 webcounter\nREINDEX DATABASE webcounter force;\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 24 May 2000 02:43:07 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: yowch: dumpRules(): SELECT failed for table website."
},
{
"msg_contents": "* Hiroshi Inoue <[email protected]> [000524 02:58] wrote:\n> > -----Original Message-----\n> > From: Alfred Perlstein [mailto:[email protected]]\n> > \n> > * Hiroshi Inoue <[email protected]> [000524 02:40] wrote:\n> > > > -----Original Message-----\n> > > > From: [email protected] \n> > [mailto:[email protected]]On\n> > > > Behalf Of Alfred Perlstein\n> > > > \n> > > > while doing a pg_dump of a table after postgresql made a mess \n> > of itself:\n> > > > \n> > > > dumpRules(): SELECT failed for table website. Explanation from \n> > > > backend: 'ERROR: cache lookup of attribute 1 in relation \n> > 9892634 failed\n> > > > '.\n> > > > \n> > > > Guys, there has to be a simple command to fix a corrupted database.\n> > > > \n> > > > I'm really killing myself over here trying to mix REINDEX, VACUUM\n> > > > along with creating temp tables and reinserting the data \n> > which gives me:\n> > > >\n> > > \n> > > How did you issue REINDEX command ?\n> > \n> > postmaster -p 1080 -o \"-O -P\"\n> > was run\n> > then:\n> > \n> > psql -p 1080 webcounter\n> > REINDEX DATABASE webcounter force;\n> >\n> \n> Hmm,shutdown postmaster and invoke standalone postgres.\n> \n> postgres -O -P webmaster\n> REINDEX DATABASE webcounter force; \n> ^D\n\ngah!\n\n~/scripts % postgres -O -P webmaster\nDEBUG: Data Base System is starting up at Wed May 24 02:24:49 2000\nDEBUG: Data Base System was shut down at Wed May 24 02:24:46 2000\nDEBUG: Data Base System is in production state at Wed May 24 02:24:49 2000\nFATAL 1: Database \"webmaster\" does not exist in the system catalog.\nFATAL 1: Database \"webmaster\" does not exist in the system catalog.\n\nnot good :(\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 24 May 2000 03:00:00 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: yowch: dumpRules(): SELECT failed for table website."
},
{
"msg_contents": "* SL Baur <[email protected]> [000524 02:59] wrote:\n> Alfred Perlstein <[email protected]> writes in [email protected]:\n> \n> > while doing a pg_dump of a table after postgresql made a mess of itself:\n> \n> > dumpRules(): SELECT failed for table website. Explanation from\n> > backend: 'ERROR: cache lookup of attribute 1 in relation 9892634\n> > failed '.\n> \n> I just got a message like that earlier this afternoon. My problem was\n> that I had created a view and later dropped and recreated one of the\n> tables the view referenced. Dropping and recreating the view fixed\n> things.\n\nI'm not using views afaik.\n\nThere seems to be a serious corruption problem when a transaction\nis aborted, I'll see if I can have a reproduceable bug report\ntomorrow.\n\nAfaik it has to do with a transaction aborting after inserting or\nupdating into a table. Something seems to go seriously wrong.\n\nWe're also getting some problems when we don't \"SET ENABLE_SEQSCAN=OFF;\"\nfor certain queries, Postgresql takes a really unoptimal path and\nwill loop forever.\n\nBtw, is there any way to specify an abort timeout for a query if it's\ntaking too long to just fail?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 24 May 2000 03:10:06 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: yowch: dumpRules(): SELECT failed for table website."
},
{
"msg_contents": "* Hiroshi Inoue <[email protected]> [000524 03:05] wrote:\n> > > Hmm,shutdown postmaster and invoke standalone postgres.\n> > > \n> > > postgres -O -P webmaster\n> > > REINDEX DATABASE webcounter force; \n> > > ^D\n> > \n> > gah!\n> > \n> > ~/scripts % postgres -O -P webmaster\n> \n> Sorry,webcounter instead of webmaster.\n> \n> > DEBUG: Data Base System is starting up at Wed May 24 02:24:49 2000\n> > DEBUG: Data Base System was shut down at Wed May 24 02:24:46 2000\n> > DEBUG: Data Base System is in production state at Wed May 24 \n> > 02:24:49 2000\n> > FATAL 1: Database \"webmaster\" does not exist in the system catalog.\n> > FATAL 1: Database \"webmaster\" does not exist in the system catalog.\n> > \n> > not good :(\n\nugh, it's late for me over here, I should have noticed \"database\"\nrather than \"table\" but i've already fixed it via moving the data\nto another table.\n\nI'm wondering if there's a way to get a unique value into a table?\n\nthis caused some problems:\n\nCREATE TABLE \"data\" (\n \"d\" varchar(256) PRIMARY KEY,\n \"d_id\" serial\n);\n\nbecause after I reloaded the table from:\n\n insert into data select * from data_backup;\n\nthen tried to insert into 'data' using only values for 'd' then it barfed\nbecause it was trying to use values from the serial that were already\nin the table.\n\nis there a way around this? using OID doesn't seem right, but seems to\nbe the only \"safe\" way to get a truly unique key to use as a forien key\nthat I've seen.\n\nany suggestions?\n\n-- \n-Alfred Perlstein - [[email protected]|[email protected]]\n\"I have the heart of a child; I keep it in a jar on my desk.\"\n",
"msg_date": "Wed, 24 May 2000 03:33:39 -0700",
"msg_from": "Alfred Perlstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: yowch: dumpRules(): SELECT failed for table website."
},
{
"msg_contents": "On Wed, May 24, 2000 at 03:33:39AM -0700, Alfred Perlstein wrote:\n> I'm wondering if there's a way to get a unique value into a table?\n> \n> this caused some problems:\n> \n> CREATE TABLE \"data\" (\n> \"d\" varchar(256) PRIMARY KEY,\n> \"d_id\" serial\n> );\n> \n> because after I reloaded the table from:\n> \n> insert into data select * from data_backup;\n> \n> then tried to insert into 'data' using only values for 'd' then it barfed\n> because it was trying to use values from the serial that were already\n> in the table.\n> \n> is there a way around this? using OID doesn't seem right, but seems to\n> be the only \"safe\" way to get a truly unique key to use as a forien key\n> that I've seen.\n> \n> any suggestions?\n> \n\nRight, I assume this is after you recreated the table? That created a new\nsequence behind the serial for d_id, which needs to be updated after you\ninsert explicit values into the id field. here's my standard fix for that\n\nSELECT setval('data_d_id_seq',max(d_id)) from data;\n\nThe name of the sequence is <tablename>_<serial field name>_seq,\ntrimmed to fit in NAMEDATALEN (default 30). If you created the table\nwith a different name, that's how the sequence is named (they're not\nautomatically renamed, or dropped, with their associated table)\n\nI do this whenever I load data into a table manually. Hmm, it might be\npossible to setup a trigger (or rule?) to handle the non-default case\n(i.e., whenever a serial values is actually provided) and do this\nautomatically. It'd only need to fire if the inserted/updated value is\ngreater than currval of the sequence. Hmm...\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n",
"msg_date": "Wed, 24 May 2000 09:59:39 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: yowch: dumpRules(): SELECT failed for table website."
}
] |
[
{
"msg_contents": "Hi, I have encountered a really strange problem with PostgreSQL 7.0 on\nSolaris 2.6/Sparc. The problem is that createdb command or create\ndatabase SQL always fails. Inspecting the output of truss shows that\nsystem() call in createdb() (commands/dbcomand.c) fails because\nwaitid() system call in system() returns error no. 10 (ECHILD).\n\nThis problem was not in 6.5.3, so I checked the source of it. The\nreason why 6.5.3's createdb worked was that it just ignored the return\ncode of system()!\n\nIt seems that we need to ignore an error from system() if the error is\nECHILD on Solaris.\n\nAny idea?\n\nBTW, I have compiled PostgreSQL with egcs 2.95 with/without\noptimization.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 24 May 2000 18:28:25 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Solaris 2.6 problems"
},
{
"msg_contents": "Tatsuo Ishii writes:\n\n> It seems that we need to ignore an error from system() if the error is\n> ECHILD on Solaris.\n\nSeems reasonable. I vaguely recall something about child process\nbogosities on Solaris/SunOS, perhaps that is related to this.\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Thu, 25 May 2000 00:00:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Solaris 2.6 problems"
},
{
"msg_contents": "> > It seems that we need to ignore an error from system() if the error is\n> > ECHILD on Solaris.\n> \n> Seems reasonable. I vaguely recall something about child process\n> bogosities on Solaris/SunOS, perhaps that is related to this.\n\nOk, fix committed.\n--\nTatsuo Ishii\n",
"msg_date": "Thu, 25 May 2000 15:55:38 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Solaris 2.6 problems"
}
] |
[
{
"msg_contents": "\n\nYour name\t\t: Nishad Prakash\nYour email address\t: [email protected]\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) \t: Sun Sparc\n\n Operating System (example: Linux 2.0.26 ELF) \t: Solaris 2.6\n\n PostgreSQL version (example: PostgreSQL-6.5.1): PostgreSQL-7.0\n\n Compiler used (example: gcc 2.8.0)\t\t: gcc 2.95.2\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nIn psql, when connected to template1 as the postgres superuser, the\n\\df function complains about some memory allocation problem. See the\nfollowing four examples for representative errors:\n\ntemplate1=# \\df get\nERROR: AllocSetFree: cannot find block containing chunk\n\ntemplate1=# \\df get\nNOTICE: PortalHeapMemoryFree: 0x31f5b0 not in alloc set!\n List of functions\n Result | Function | Arguments \n--------+---------------------+-------------\n int4 | get_bit | bytea int4 \n int4 | get_byte | bytea int4 \n name | getdatabaseencoding | \n name | getpgusername | \n(4 rows)\n\ntemplate1=# \\df get\nNOTICE: PortalHeapMemoryFree: 0x344350 not in alloc set!\nERROR: AllocSetFree: cannot find block containing chunk\n\ntemplate1=# \\df get\nERROR: SearchSysCache: recursive use of cache 2\n\nNote that this is before creating any of my own databases -- at the\ntime when I got these errors I had just finished the installation. \n\nThere is another problem with the \\d family. I created a new db\n(named can) and its tables. Then, typing \\dS has the following\neffect:\n\ncan=# \\dS\nThe connection to the server was lost. 
Attempting reset: Failed.\n!# \\d\nYou are currently not connected to a database.\n!# \\c can\nNo Postgres username specified in startup packet.\nSegmentation fault\n\nNote that this happens whether or not the tables are actually\npopulated; I ran a vacuum right after both acts (creation and\npopulation) and \\dS caused a crash out of psql each time.\n\nFWIW, my 6.5.3 installation with the same configure and build\nparameters, same data, etc. ran with no problems at all. Has anyone\nhad similar problems with the \\d functions in 7.0?\n\nNishad\n\n\n\n",
"msg_date": "Wed, 24 May 2000 02:49:22 -0700 (PDT)",
"msg_from": "Nishad PRAKASH <[email protected]>",
"msg_from_op": true,
"msg_subject": "\\dS and \\df <pattern> crashing psql"
},
{
"msg_contents": "> System Configuration\n> ---------------------\n> Architecture (example: Intel Pentium) \t: Sun Sparc\n> \n> Operating System (example: Linux 2.0.26 ELF) \t: Solaris 2.6\n> \n> PostgreSQL version (example: PostgreSQL-6.5.1): PostgreSQL-7.0\n> \n> Compiler used (example: gcc 2.8.0)\t\t: gcc 2.95.2\n> \n> \n> Please enter a FULL description of your problem:\n> ------------------------------------------------\n> \n> In psql, when connected to template1 as the postgres superuser, the\n> \\df function complains about some memory allocation problem. See the\n> following four examples for representative errors:\n\nNeither \\df or \\dS problem reproduces here (I have exactly same\nconfiguration as you).\n\nInstead, I have another problem already reported at hackers list:\n\n\tcreatdb/dropdb does not work\n\nSee the posting \"Solaris 2.6 problems\" in the archives.\n--\nTatsuo Ishii\n",
"msg_date": "Wed, 24 May 2000 19:29:39 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \\dS and \\df <pattern> crashing psql"
},
{
"msg_contents": "Nishad PRAKASH writes:\n\n> In psql, when connected to template1 as the postgres superuser, the\n> \\df function complains about some memory allocation problem.\n\nThe \\d series of psql commands are really just shortcuts for various SQL\nqueries to the system catalogs. Start psql with the -E option to see them.\nTherefore it is unlikely that this behaviour is entirely localized at\nthese functions. Have you run the regression tests without problems?\n\n> can=# \\dS\n> The connection to the server was lost. Attempting reset: Failed.\n\nCan you show the server output. There's probably a segmentation fault or\nfailed assertion in the backend involved, which we'd need to see.\n\n> !# \\d\n> You are currently not connected to a database.\n> !# \\c can\n> No Postgres username specified in startup packet.\n> Segmentation fault\n\nThat's certainly a psql problem. Can you show a backtrace from gdb?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 26 May 2000 01:52:38 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \\dS and \\df <pattern> crashing psql"
},
{
"msg_contents": "\n\nOn Fri, 26 May 2000, Peter Eisentraut wrote:\n\n\n> The \\d series of psql commands are really just shortcuts for various SQL\n> queries to the system catalogs. Start psql with the -E option to see them.\n> Therefore it is unlikely that this behaviour is entirely localized at\n> these functions. Have you run the regression tests without problems?\n\nFirst of all, this was not a Postgres bug but a configuration mistake on\nmy part. I had been meaning to write back to the list explaining what\nreally happened:\n\nI compiled 7.0 with locale support, recode, and multibyte options all\nenabled. In the postgres (db superuser) .cshrc, I had set LC_CTYPE to\n\"en_US\". This was the problem. When I would start postmaster and run\nanything that involved a regexp (and the query that \\dS expands to uses\nregexps) on a \"bytea\" type field, psql would crash. \n\nTo fix this, I tried first letting the locale default to \"C\", then setting\nLC_CTYPE to \"iso_8859_1\". Starting postmaster with either of these works\nperfectly. \n\nIf you are still interested in server output or backtraces (perhaps to\nimplement a more graceful exit?), I'd be glad to send them, but I'm sure\nyou can replicate this pretty easily now if required.\n\nI have never needed to mess around with locales before, so I apologize for\nposting this as bug -- I didn't quite know where to look at first.\n\nBy the way, I don't know what you guys have done with the optimizer but my\npreviously slow queries now run VERY FAST. This prevents me from\ntaking cigarette breaks, coffee breaks, etc. under the \"I'm running a\nlarge query\" pretext. Please do what you can to fix this problem.\n\nThanks for the help,\n\nNishad\n\n\n\n\n",
"msg_date": "Thu, 25 May 2000 17:37:12 -0700 (PDT)",
"msg_from": "Nishad PRAKASH <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \\dS and \\df <pattern> crashing psql"
},
{
"msg_contents": "Nishad PRAKASH <[email protected]> writes:\n> I compiled 7.0 with locale support, recode, and multibyte options all\n> enabled. In the postgres (db superuser) .cshrc, I had set LC_CTYPE to\n> \"en_US\". This was the problem. When I would start postmaster and run\n> anything that involved a regexp (and the query that \\dS expands to uses\n> regexps) on a \"bytea\" type field, psql would crash. \n\n> To fix this, I tried first letting the locale default to \"C\", then setting\n> LC_CTYPE to \"iso_8859_1\". Starting postmaster with either of these works\n> perfectly. \n\n> If you are still interested in server output or backtraces (perhaps to\n> implement a more graceful exit?), I'd be glad to send them, but I'm sure\n> you can replicate this pretty easily now if required.\n\nHmm, news to us. It may be a platform-specific problem, so yes please\ndo send a backtrace.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 May 2000 21:00:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \\dS and \\df <pattern> crashing psql "
},
{
"msg_contents": "> First of all, this was not a Postgres bug but a configuration mistake on\n> my part. I had been meaning to write back to the list explaining what\n> really happened:\n> \n> I compiled 7.0 with locale support, recode, and multibyte options all\n> enabled. In the postgres (db superuser) .cshrc, I had set LC_CTYPE to\n> \"en_US\". This was the problem. When I would start postmaster and run\n> anything that involved a regexp (and the query that \\dS expands to uses\n> regexps) on a \"bytea\" type field, psql would crash. \n> \n> To fix this, I tried first letting the locale default to \"C\", then setting\n> LC_CTYPE to \"iso_8859_1\". Starting postmaster with either of these works\n> perfectly. \n> \n> If you are still interested in server output or backtraces (perhaps to\n> implement a more graceful exit?), I'd be glad to send them, but I'm sure\n> you can replicate this pretty easily now if required.\n\nOf course regexp should not crash in this situation above. Thanks for\nthe info. I will dig into the problem.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 26 May 2000 10:34:32 +0900",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \\dS and \\df <pattern> crashing psql"
},
{
"msg_contents": "\n\nOn Thu, 25 May 2000, Tom Lane wrote:\n\n> \n> Hmm, news to us. It may be a platform-specific problem, so yes please\n> do send a backtrace.\n> \n\nCAVEAT: I may just be missing something really obvious.\n\nA high-level description of the problem is: If postmaster is started\nwith LC_COLLATE set to en_US in the db superuser's environment, then\nworking on a db created with createdb -E LATIN1 <foo> causes strange\nbehaviour in regexps. If that sounds like an obviously wrong use\nof locale settings, you probably don't need to read any further, but\njust tell me what's going on.\n\nTo replicate the problem, you need to do the following. All actions\nare performed by postgres, the db superuser account\n\nInstall postgres 7.0 with all three of --enable-locale, --enable-recode,\nand --enable-multibyte specified. Set the user postgres's LC_COLLATE env\nvar to any of the en_* locales available on your machine /except/\nen_US.UTF-8, which doesn't seem to cause problems. The other locale vars\nappear to be irrelevant; LC_COLLATE alone will do for replication. These\nwere my settings:\n\n> locale\nLANG=\nLC_CTYPE=\"C\"\nLC_NUMERIC=\"C\"\nLC_TIME=\"C\"\nLC_COLLATE=en_US\nLC_MONETARY=\"C\"\nLC_MESSAGES=\"C\"\nLC_ALL=\n\nWhat follows are the operations I performed to get psql to crash:\n\n> createdb -E LATIN1 foo\nCREATE DATABASE\n> psql foo\nWelcome to psql, the PostgreSQL interactive terminal.\n<snip>\nfoo=# create table TenChrName ( somelongname varchar (100) unique); \nNOTICE: CREATE TABLE/UNIQUE will create implicit index\n'tenchrname_somelongname_key' for table 'tenchrname'\nCREATE\nfoo=# vacuum analyze;\nVACUUM\nfoo=# \\dS\nThe connection to the server was lost. 
Attempting reset: Failed.\n!# \\q\n> kill `cat postmaster.pid`\n> gdb postgres\n<snip>\n(gdb) run foo\n\n/* note: the following query is the smallest part of \\dS's expansion\n * that is sufficient for a crash \n */\nbackend> select * from pg_class where relname ~ '^n';\nERROR: expression_tree_walker: Unexpected node type 0\nERROR: expression_tree_walker: Unexpected node type 0\nbackend> select * from pg_class where relname ~ '^n'; \nNOTICE: PortalHeapMemoryFree: 0x51c330 not in alloc set!\nNOTICE: PortalHeapMemoryFree: 0x51c330 not in alloc set!\n\nProgram received signal SIGBUS, Bus error.\n0x21ddf4 in AllocSetAlloc (set=0x500ff8, size=12) at aset.c:233\n233 if (chunk->size >= size)\n(gdb) bt\n#0 0x21ddf4 in AllocSetAlloc (set=0x500ff8, size=12) at aset.c:233\n#1 0x21f8a0 in PortalHeapMemoryAlloc (this=0x2bddc0, size=12)\n at portalmem.c:253\n#2 0x21ed20 in MemoryContextAlloc (context=0x2bddc0, size=12) at\nmcxt.c:224\n#3 0x126e84 in newNode (size=12, tag=T_List) at nodes.c:38\n#4 0x127180 in lcons (obj=0x51a240, list=0x0) at list.c:112\n#5 0x127220 in lappend (list=0x0, obj=0x51a240) at list.c:144\n#6 0x14e6f8 in get_actual_clauses (restrictinfo_list=0x51a298)\n at restrictinfo.c:55\n#7 0x144b80 in create_scan_node (root=0x5134f8, best_path=0x51be80, \n tlist=0x51b0b0) at createplan.c:152\n#8 0x144ab0 in create_plan (root=0x5134f8, best_path=0x51be80)\n at createplan.c:103\n#9 0x147698 in subplanner (root=0x5134f8, flat_tlist=0x51a4a0, \n qual=0x51a280, tuple_fraction=0) at planmain.c:288\n#10 0x14740c in query_planner (root=0x5134f8, tlist=0x519b08,\nqual=0x51a280, \n tuple_fraction=0) at planmain.c:128\n#11 0x14817c in union_planner (parse=0x5134f8, tuple_fraction=0)\n at planner.c:530\n#12 0x147b38 in subquery_planner (parse=0x5134f8, tuple_fraction=-1)\n at planner.c:202\n#13 0x147810 in planner (parse=0x5134f8) at planner.c:67\n#14 0x1977c0 in pg_plan_query (querytree=0x5134f8) at postgres.c:512\n#15 0x197a9c in pg_exec_query_dest (\n query_string=0x2ba070 
\"select * from pg_class where relname ~ '^n';\n\\n\", \n dest=Debug, aclOverride=0 '\\000') at postgres.c:646\n#16 0x1978e4 in pg_exec_query (\n query_string=0x2ba070 \"select * from pg_class where relname ~ '^n';\n\\n\")\n at postgres.c:562\n#17 0x1996f4 in PostgresMain (argc=2, argv=0xeffffa64, real_argc=2, \n real_argv=0xeffffa64) at postgres.c:1590\n#18 0x1026d0 in main (argc=2, argv=0xeffffa64) at main.c:103\n\nIf you actually care to go through the steps above, don't leave\nanything out. The vacuum analyze serves no useful purpose, but you\nwon't get a crash if you omit it. The table indentifiers really\ndo need to be around 10 chars long. The regexp needs to match the\nfront of a string, so use '^foo' -- I couldn't get a crash with other\ntypes of regexps but then I didn't try too many. \n\nWith the local settings described above, a query on pg_proc of the type\n\"select * from pg_proc where proname ~ '^n';\" will /always/ produce the\nfollowing kind of error: \"NOTICE: PortalHeapMemoryFree: <addr> not in\nalloc set!\" before printing the result (it never causes a crash, AFAICT,\nand always does produce a correct result). You can get this behaviour\njust by connecting to template1; perhaps other tables with bytea fields\nmay also do this, but pg_proc does it every single time. If you like, I'll\ndo a backtrace from where it produces that error, but this message is\ngetting too long for that.\n\nIf someone can replicate this (or even try and fail), it would help me\nto learn whether the error lies in Postgres, Solaris's locales, or\nyours truly. It seems too quirky to be a genuine bug.\n\nThanks, and let me know if you have any ideas.\n\nNishad\n\n\n",
"msg_date": "Fri, 26 May 2000 07:18:01 -0700 (PDT)",
"msg_from": "Nishad PRAKASH <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \\dS and \\df <pattern> crashing psql "
}
] |
[
{
"msg_contents": "\nDo people interpret this syntax to mean that you can only have an UNDER\nclause when using the OF <user-defined type> clause as well?\n\n\n <table definition> ::=\n CREATE [ <table scope> ] TABLE <table name>\n <table contents source>\n [ ON COMMIT <table commit action> ROWS ]\n\n <table contents source> ::=\n <table element list>\n | OF <user-defined type>\n [ <subtable clause> ]\n [ <table element list> ]\n <subtable clause> ::=\n UNDER <supertable clause>\n",
"msg_date": "Wed, 24 May 2000 20:16:21 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNDER syntax"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've tried to compile interfaces/perl5 with perl-5.6.0.\n\nIt fails because the perl symbols sv-undef and na have been renamed to\nPL_sv_undef and PL_na;\n\nThe obvious trick was to patch Pg.xs.\n\nSurely, Makefile.PL can take care of that and name those two symbols\naccording to the perl version.\n\nNow, not being a perl expert, I just warn you that you may go into a\nproblem (and FAQ's) sooner or later..\n\nRegards,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Wed, 24 May 2000 14:45:02 +0200 (MET DST)",
"msg_from": "Olivier PRENANT <[email protected]>",
"msg_from_op": true,
"msg_subject": "Perl 5.6.0"
},
{
    "msg_contents": "Olivier PRENANT <[email protected]> writes:\n\n> It fails because the perl symbols sv-undef and na have been renamed to\n> PL_sv_undef and PL_na;\n\nFWIW, the postgresql in Red Hat Rawhide is built with perl 5.6.0\nand didn't need any patches to make it build.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "24 May 2000 09:39:49 -0400",
"msg_from": "[email protected] (Trond Eivind=?iso-8859-1?q?_Glomsr=F8d?=)",
"msg_from_op": false,
"msg_subject": "Re: Perl 5.6.0"
},
{
    "msg_contents": "Hi,\n\nTrond Eivind Glomsrød:\n> Olivier PRENANT <[email protected]> writes:\n> \n> > It fails because the perl symbols sv-undef and na have been renamed to\n> > PL_sv_undef and PL_na;\n> \n> FWIW, the postgresql in Red Hat Rawhide is built with perl 5.6.0\n> and didn't need any patches to make it build.\n> \n... which is because the POLLUTE=1 flag is given to Makefile.PL script,\nin src/pl/Makefile.\n\nThis is usually what you do when you encounter this kind of error.\n\n-- \nMatthias Urlichs  |  noris network GmbH   |   [email protected]  |  ICQ: 20193661\nThe quote was selected randomly. Really.    |        http://smurf.noris.de/\n-- \nTo the memory of the man, first in war, first in peace, and first in the\nhearts of his country.\n                -- General Henry Lee\n",
"msg_date": "Wed, 24 May 2000 16:15:54 +0200",
"msg_from": "\"Matthias Urlichs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perl 5.6.0"
},
{
"msg_contents": "[email protected] (Trond Eivind=?iso-8859-1?q?_Glomsr=F8d?=) writes:\n> Olivier PRENANT <[email protected]> writes:\n>> It fails because the perl symbols sv-undef and na have been renamed to\n>> PL_sv_undef and PL_na;\n\n> FWIW, the postgresql in Red Hat Rawhide is built with perl 5.6.0\n> and didn't need any patches to make it build.\n\nThe stopgap solution is to say\n\tperl Makefile.PL POLLUTE=1\n(in the interfaces/perl5 directory) and then build as usual. This\nshould happen automatically if you are using Postgres 7.0, but you\ncould do it manually if you have an older release.\n\nThere is a cleaner long-term solution but it involves more work...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 May 2000 11:18:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perl 5.6.0 "
},
{
"msg_contents": "Thanks to all who replied to my mail.\n\nNow that I switched to 7.0, you were right it works...\n\nAgain, thank you al\n\nRegards\nOn Wed, 24 May 2000, Tom Lane wrote:\n\n> [email protected] (Trond Eivind=?iso-8859-1?q?_Glomsr=F8d?=) writes:\n> > Olivier PRENANT <[email protected]> writes:\n> >> It fails because the perl symbols sv-undef and na have been renamed to\n> >> PL_sv_undef and PL_na;\n> \n> > FWIW, the postgresql in Red Hat Rawhide is built with perl 5.6.0\n> > and didn't need any patches to make it build.\n> \n> The stopgap solution is to say\n> \tperl Makefile.PL POLLUTE=1\n> (in the interfaces/perl5 directory) and then build as usual. This\n> should happen automatically if you are using Postgres 7.0, but you\n> could do it manually if you have an older release.\n> \n> There is a cleaner long-term solution but it involves more work...\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Sun, 28 May 2000 19:01:57 +0200 (MET DST)",
"msg_from": "Olivier PRENANT <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Perl 5.6.0 "
}
] |
[
{
    "msg_contents": "\nCan someone give me a bit of help with gram.y to get this UNDER syntax\nright? I did what I thought was the obvious syntax, but it no longer\naccepts a plain create table after this change...\n\nOptUnder:  UNDER relation_name_list \t{ $$ = $2; }\n                | /*EMPTY*/ { $$ = NIL; } \n\t;\n\n\n\nCreateStmt:  CREATE OptTemp TABLE relation_name OptUnder '('\nOptTableElementList ')' OptInherit\n\t\t\t{\n                              \t/*etc */\n\t\t\t}\n\t\t;\n\nThe full patch is here...\nftp://ftp.tech.com.au/pub/diff.x\n",
"msg_date": "Wed, 24 May 2000 23:10:23 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help with gram.y (UNDER)"
},
{
    "msg_contents": "\nIt seems like bison is confused by having that '(' just after an\noptional syntax (UNION). If I place something after OptUnder (USING just\nto pick a token), then everything works fine (except of course the\nspurious USING becomes part of the syntax).\n\nDoes the '(' have some kind of second-class status as a token that would\ncause this weirdness?\n\nChris Bitmead wrote:\n> \n> Can someone give me a bit of help with gram.y to get this UNDER syntax\n> right? I did what I thought was the obvious syntax, but it no longer\n> accepts a plain create table after this change...\n> \n> OptUnder:  UNDER relation_name_list { $$ = $2; }\n>                 | /*EMPTY*/ { $$ = NIL; }\n>         ;\n> \n> CreateStmt:  CREATE OptTemp TABLE relation_name OptUnder '('\n> OptTableElementList ')' OptInherit\n>                         {\n>                               /*etc */\n>                         }\n>                 ;\n> \n> The full patch is here...\n> ftp://ftp.tech.com.au/pub/diff.x\n",
"msg_date": "Wed, 24 May 2000 23:33:59 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help with gram.y (UNDER)"
},
{
"msg_contents": "Chris Bitmead wrote:\n> \n> It seems like bison is confused by having that '(' just after an\n> optional syntax (UNION). \n\nI mean the optional UNDER\n\n> If I place something after OptUnder (USING just\n\nI mean USING;\n\ni.e. CreateStmt: CREATE OptTemp TABLE relation_name OptUnder USING '('\nOptTableElementList ')' OptInherit\n\nThis will accept \"CREATE TABLE foo USING (aa text);\"\n\nbut this...\n\nCreateStmt: CREATE OptTemp TABLE relation_name OptUnder '('\nOptTableElementList ')' OptInherit\n\nwon't accept \"CREATE TABLE foo (aa text);\"\n\n> to pick a token), then everything works fine (except of course the\n> spurious USING becomes part of the syntax).\n> \n> Does the '(' have some kind of second-class status as a token that would\n> cause this wierdness?\n> \n> Chris Bitmead wrote:\n> >\n> > Can someone give be a bit of help with gram.y to get this UNDER syntax\n> > right? I did what I though was the obvious syntax, but it no longer\n> > accepts a plain create table after this change...\n> >\n> > OptUnder: UNDER relation_name_list { $$ = $2; }\n> > | /*EMPTY*/ { $$ = NIL; }\n> > ;\n> >\n> > CreateStmt: CREATE OptTemp TABLE relation_name OptUnder '('\n> > OptTableElementList ')' OptInherit\n> > {\n> > /*etc */\n> > }\n> > ;\n> >\n> > The full patch is here...\n> > ftp://ftp.tech.com.au/pub/diff.x\n",
"msg_date": "Wed, 24 May 2000 23:49:11 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help with gram.y (UNDER)"
},
{
    "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Does the '(' have some kind of second-class status as a token that would\n> cause this weirdness?\n\nNo ... very bizarre.  I think you must be introducing some kind of\nambiguity into the grammar, but I can't quite see what.  Are you\ngetting any sort of warnings out of bison?\n\nMight be worth turning on the logfile option (forget if it's -v or -l)\nand looking at the interpreted productions just to make sure bison\nis reading things the same way you thought you wrote them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 May 2000 12:52:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with gram.y (UNDER) "
}
] |
[
{
"msg_contents": "\n> One more time for the <general> mailing list...\n> \n> Hands up if you have objections to the patch I recently submitted for\n> postgresql. It fixes the long standing bit-rot / bug that DELETE and\n> UPDATE don't work on inheritance hierarchies, and it adds the ONLY\n> syntax as mentioned in SQL3 and as implemented by Informix. \n> The downside\n> is it breaks compatibility with the old inheritance syntax. \n> But there is\n> a backward compatibility mode. I.e. \"SELECT * FROM foobar*\" becomes\n> \"SELECT * FROM foobar\", and \"SELECT * from foobar\" becomes \"SELECT *\n> FROM ONLY foobar\".\n> \n> Benefits:\n> *) SQL3 says it.\n\nImho this alone more than justifies the patch.\nWe should also change our keyword \"inherits\" to \"under\".\n\nAndreas\n",
"msg_date": "Wed, 24 May 2000 16:21:00 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Postgresql OO Patch"
},
{
    "msg_contents": "On Wed, 24 May 2000, Zeugswetter Andreas SB wrote:\n> > One more time for the <general> mailing list...\n> > \n> > Hands up if you have objections to the patch I recently submitted for\n> > postgresql. It fixes the long standing bit-rot / bug that DELETE and\n> > UPDATE don't work on inheritance hierarchies, and it adds the ONLY\n> > syntax as mentioned in SQL3 and as implemented by Informix. \n> > The downside\n> > is it breaks compatibility with the old inheritance syntax. \n> > But there is\n> > a backward compatibility mode. I.e. \"SELECT * FROM foobar*\" becomes\n> > \"SELECT * FROM foobar\", and \"SELECT * from foobar\" becomes \"SELECT *\n> > FROM ONLY foobar\".\n> > \n> > Benefits:\n> > *) SQL3 says it.\n> \n> Imho this alone more than justifies the patch.\n> We should also change our keyword \"inherits\" to \"under\".\n> \n\nI don't agree.  UNDER only provides for single inheritance according to spec. \nMaking it multiple inherit would break UNDER's basic idea of enabling hierarchy\ntrees that contain subtables under a single maximal supertable.  It's like a\nbody that grows by having organs and cells inside it.  INHERIT is like two or\nmore separate bodies that together yield an independent offspring.  UNDER and\nINHERIT can coexist and be used together.\n\nCREATE TABLE bike (\n);\nCREATE TABLE motorbike UNDER bike (\n) INHERITS (pistonengine);\nCREATE table harley (\n) UNDER motorbike;\n\n\nCREATE TABLE engine (\n);\nCREATE TABLE pistonengine UNDER engine (\n);\nCREATE TABLE jetengine UNDER engine (\n);\n\nStuff like that. \n\nCREATE TABLE motorbike (\n) INHERITS (bike, motor);\nis ok too.  But the meaning is different than above.  It creates an independent\nchild table that is not contained under either parent so that the parents can\nbe dropped.  You use UNDER when the child/subtable is to share the exact same\nphysical PRIMARY KEY of the SUPERTABLE. 
In inherit, the child inherits a\ncomposite key from the parents, but that key is new physically, not the same\nphysically as any parents.\n\nI just think that since UNDER is limited by the spec, (and there is a\ndifference anyway), that INHERITS stands on its own and can be used with UNDER\nto pull attributes into the tree from another tree/table, linking separate\ntrees together in a nondependent way.\n\n> Andreas\n-- \nRobert B. Easter\[email protected]\n",
"msg_date": "Wed, 24 May 2000 13:37:18 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Postgresql OO Patch"
},
{
    "msg_contents": "\"Robert B. Easter\" wrote:\n\n> > Imho this alone more than justifies the patch.\n> > We should also change our keyword \"inherits\" to \"under\".\n> >\n> \n> I don't agree.  UNDER only provides for single inheritance according to spec.\n> Making it multiple inherit would break UNDER's basic idea of enabling hierarchy\n> trees that contain subtables under a single maximal supertable. \n\nI don't see that it's a \"basic idea\". I see it as a crippled subset of\nSQL3-94.\n\n> is ok too.  But the meaning is different than above.  It creates an independent\n> child table that is not contained under either parent so that the parents can\n> be dropped. \n\nI wouldn't like to define an object model in terms of what happens when\nthe meta-data is modified.\n\n> You use UNDER when the child/subtable is to share the exact same\n> physical PRIMARY KEY of the SUPERTABLE. In inherit, the child inherits a\n> composite key from the parents, but that key is new physically, not the same\n> physically as any parents.\n\nIssues like primary keys are the sort of stuff that probably kept the\ncommittee arguing long enough they were too lazy to come to a decision.\nMyself, I'm not too interested in primary keys since they are not a very\nOO idea anyway.\n",
"msg_date": "Thu, 25 May 2000 09:45:59 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Postgresql OO Patch"
},
{
    "msg_contents": "Chris Bitmead wrote:\n> \n> \"Robert B. Easter\" wrote:\n> \n> > > Imho this alone more than justifies the patch.\n> > > We should also change our keyword \"inherits\" to \"under\".\n> > >\n> >\n> > I don't agree.  UNDER only provides for single inheritance according to spec.\n> > Making it multiple inherit would break UNDER's basic idea of enabling hierarchy\n> > trees that contain subtables under a single maximal supertable.\n> \n> I don't see that it's a \"basic idea\". I see it as a crippled subset of\n> SQL3-94.\n\nMe too.\n\nOTOH single inheritance has the advantage that it can be implemented\nwith _all_ \nsubtables stored in a single \"physical\" table, whereas multiple\ninheritance can't,\nwhich makes sharing things like primary keys and other constraints much\neasier \nto implement as well.\n\n> > You use UNDER when the child/subtable is to share the exact same\n> > physical PRIMARY KEY of the SUPERTABLE.  In inherit, the child inherits a\n> > composite key from the parents,\n\nThat composite key must actually still be two unique keys (and thus\ndouble-unique ;)\nwhich does not make much sense.\n\n> > but that key is new physically, not the same physically as any parents.\n\nMaybe what you are trying to accomplish by your definition of INHERITS\nis \nbetter done by aggregation ?\n\ncreate table engine (volume float);\ncreate table wheel(circumference float);\ncreate table car(\n        car_engine engine,\n        car_wheels wheel[4]\n);\n\nAt least this fits better with my feeling that a car is not a \"kind of\"\nengine.\n\nAnd this should be possible with PostgreSQL now (except that type _wheel \n(for array of wheels) is not generated automatically and so only the \nfollowing is\n\ncreate table car(\n        car_engine engine,\n        car_wheel1 wheel,\n        car_wheel2 wheel,\n        car_wheel3 wheel,\n        car_wheel4 wheel\n);\n\nwhich probably is a bug ;(\n\n)\n\n> Issues like primary keys are the sort of stuff that probably kept the\n> committee arguing long enough they were too lazy to 
come to a decision.\n\nIt sure is an issue for multiple inheritance, at least when you disallow \nmultiple primary keys on things that \"are\" both A and B.\n\n> Myself, I'm not too interested in primary keys since they are not a very\n> OO idea anyway.\n\nMaybe not OO but useful in an RDBMS anyway. One could argue that table.pk ==\noid. \nAnd when implemented that way it would make finding an \"object\" in an\nRDBMS\nvery easy ;)\n\n------\nHannu\n",
"msg_date": "Thu, 25 May 2000 07:11:39 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Postgresql OO Patch"
}
] |
[
{
"msg_contents": "\n> How about:\n> \n> 1. alter_rename_table = no\n> \n> The syntax in PostgreSQL is ALTER TABLE x RENAME TO y;\n\nOther db's seem to use \"rename {table|index|view|database} a to b\"\n\n> \n> 2. atomic_updates = no\n> \n> Huh? Besides being paranoid about fsync()'ing transactions how is\n> a transaction based MVCC not atomic with respect to updates?\n> \n> 3. automatic_rowid = no\n> \n> The description simply says Automatic rowid. Does this apply to\n> query result sets or to the underlying relation? If the latter,\n> PostgreSQL has, of course, an OID for every tuple in the\n> database.\n\nI think they mean our ctid. When hiroshi implemented it I suggested \nusing the keyword \"rowid\" for ctid access. Imho it is what people are \nlooking for when using rowid. There was no comment.\n\n> I'm starting to get very tired of this. I don't see why\n> PostgreSQL users are obligated to get MySQL tests correct. And\n> I'm only 15% through the list...\n> \n> Bottom line...either the test writers are ignorant or deceptive.\n> Either way I won't trust my data with them...\n\nIs this necessary? imho we are talking with someone who tries to \ncorrect things for us.\n\nAndreas\n",
"msg_date": "Wed, 24 May 2000 16:42:34 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Performance (was: The New Slashdot Setup (includes MySql server))"
}
] |
[
{
"msg_contents": "\n> The whole thing works perfectly after a VACUUM ANALYZE on the\n> table.\n> \n> IMHO this is somewhat non-optimal. In the absence of information\n> to the contrary, PostgreSQL should default to using an index if\n> it might be appropriate, not ignore it.\n\nThere was lots of discussion about this issue, and I was one of those\nwho are 100% with you. The result was that Tom Lane tried to provide\ndefaults, that do use the indexes when stats are missing. This does work\nquite well, only it seems to fail in this particular case.\nImho we should look at exactly this query and try to find why it ignores \nthe index, because it should not.\n\nAndreas\n",
"msg_date": "Wed, 24 May 2000 17:07:23 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: More Performance"
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> There was lots of discussion about this issue, and I was one of those\n> who are 100% with you. The result was that Tom Lane tried to provide\n> defaults, that do use the indexes when stats are missing. This does work\n> quite well, only it seems to fail in this particular case.\n> Imho we should look at exactly this query and try to find why it ignores \n> the index, because it should not.\n\nIIRC, the issue was that he had done VACUUM but not VACUUM ANALYZE,\nwith the result that the planner was working with default selectivity\nestimates but non-default knowledge of the table size.\n\nThe default selectivity estimate for '=' is 0.01 (ie, 1% of the table\nis expected to match). The MySQL benchmark creates a table that\nhas 300,000 rows and occupies about 3000 disk pages (24Mb).\n\nWith the current cost model for indexscans, the planner does not believe\nthat an indexscan is a good way to select 1% of the data in this table\n--- and I think it's probably right, if you assume that the matching\nrows are randomly distributed. Since there are about 100 rows per disk\nblock, there is going to be about one matching row per block, implying\nthat the indexscan is going to have to visit most of the table's pages\nanyway. It's faster to do a seqscan because sequential reads are way\nfaster than random reads in a typical Unix environment.\n\nWe could ensure that the planner still picks an indexscan as the known\ntable size grows by reducing the default selectivity estimate a little\nbit (I experimented and determined that 0.005 would do the trick, for\nthe current cost model parameters). That's pretty ad-hoc, but then\nthe 0.01 number is pretty ad-hoc too. It's probably better to be able\nto say \"if you haven't done VACUUM ANALYZE, you will get an indexscan\nfrom WHERE col = const\" than to have to say \"it depends\". 
Comments?\n\nOf course the real issue here is that the true selectivity of '=' is\nmuch smaller in this table, because the column being looked at is\nunique. But the planner doesn't know that without VACUUM stats.\n\nA hack I have been thinking about adding is that CREATE UNIQUE INDEX\nfor a single-column index should immediately force the attdisbursion\nvalue for that column to \"unique\", so that the planner would know the\ncolumn is unique even without VACUUM ANALYZE. That would help not\nat all for the MySQL benchmark (it has a two-column unique index,\nbut you can't conclude anything about the properties of either column\nindividually from that :-(). But it'd help in a lot of real-world\nscenarios.\n\nComments anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 May 2000 11:44:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: More Performance "
}
] |
[
{
"msg_contents": "\n> > > 3. automatic_rowid = no\n> > > \n> > > The description simply says Automatic rowid. Does this apply to\n> > > query result sets or to the underlying relation? If the latter,\n> > > PostgreSQL has, of course, an OID for every tuple in the\n> > > database.\n> > > \n> > I'll have them fix that. MySQL calls them \"_rowid\" and \n> apparently tests\n> > only for these.\n> \n> Well, I don't see _rowid in the SQL spec either, so we are both\n> non-standard here, though I believe our OID is SQL3.\n\nWhich is imho not what the test is for. I think they mean ctid,\nwhich again I think we should have a rowid alias for (as in Informix,\nOracle).\n\nAndreas\n",
"msg_date": "Wed, 24 May 2000 17:11:41 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Performance (was: The New Slashdot Setup (includes MySqlserver))"
},
{
    "msg_contents": "Zeugswetter Andreas SB writes:\n\n> Which is imho not what the test is for. I think they mean ctid,\n> which again I think we should have a rowid alias for (as in Informix,\n> Oracle).\n\nLet's step back and ask: How is the behaviour of rowid (or whatever)\ndefined in various existing DBMS. Then we can see if we have anything that\nmatches.\n\n-- \nPeter Eisentraut                  Sernanders väg 10:115\[email protected]                   75262 Uppsala\nhttp://yi.org/peter-e/            Sweden\n\n",
"msg_date": "Fri, 26 May 2000 01:51:33 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Performance (was: The New Slashdot Setup (includes\n\tMySqlserver))"
},
{
    "msg_contents": "On Fri, 26 May 2000, Peter Eisentraut wrote:\n> Zeugswetter Andreas SB writes:\n> \n> > Which is imho not what the test is for. I think they mean ctid,\n> > which again I think we should have a rowid alias for (as in Informix,\n> > Oracle).\n> \n> Let's step back and ask: How is the behaviour of rowid (or whatever)\n> defined in various existing DBMS. Then we can see if we have anything that\n> matches.\n\nThis has been discussed. The outcome is that you are only safe using rowid\nif nobody else changes the row in between you reading it and accessing it by rowid.\n\nThis is essentially the same in all db's; only the risk of rowid changing is lower\nin other db's since they do in-place update, but the risk is there nevertheless.\n\nAndreas\n",
"msg_date": "Fri, 26 May 2000 08:30:41 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: Performance (was: The New Slashdot Setup (includes\n\tMySqlserver))"
}
] |
[
{
    "msg_contents": "That is quite interesting.  Let me CC this over to the hackers/docs\nlists to see if we want to do this for our main docs.\n\nAs for my book, I want those messages sent to me so I can fix them or\nadd information to the book.  I think maybe we want the same thing for\nthe manuals.  We want to get suggestions so we can compile them into the\nmanuals and add to them, rather than having comments, but I am\ninterested to hear what others say.\n\n> Hi\n> \n> First: I'm looking forward to buying the PostgreSQL book when it's\n> published.\n> \n> Have you thought about making the book interactive, so other people can\n> contribute/discuss to the book.\n> It can be made with PHP4 and PostgreSQL 7 - of course :-)\n> \n> You could possibly remove irrelevant/annoying postings.\n> \n> The book would be better, because you'd get more feedback.\n> \n> Look at this as an example:\n> http://www.php.net/manual/ref.pgsql.php\n> \n> What do you think?\n> \n> I'm not a PHP programmer, but I've just managed to make a query to\n> PostgreSQL using PHP :-)\n> On the other hand, I'm an experienced Cold Fusion programmer, so it will\n> be easy to learn PHP.\n> \n> I wouldn't mind using some of my spare time to implement this neat\n> function.\n> \n> Poul L. Christiansen\n> \n> \n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 24 May 2000 11:17:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: interactive pgsql book"
},
{
"msg_contents": "\n\nBenjamin Adida wrote:\n\n> on 5/24/00 12:02 PM, Vince Vielhaber at [email protected] wrote:\n>\n> > On Wed, 24 May 2000, Bruce Momjian wrote:\n> >\n> >> That is quite interesting. Let me CC this over to the hackers/docs\n> >> lists to see if we want to do this for our main docs.\n> >>\n> >> As for my book, I want those messages sent to me so I can fix them or\n> >> add information to the book. I think maybe we want the same thing for\n> >> the manuals. We want to get suggestions so we can compile them into the\n> >> manuals and add to them, rather than having comments, but I am\n> >> interested to hear what others say.\n> >\n> > By \"interactive\" I ass/u/me Poul is referring to PHP's annotated manual.\n> > I've found that to save me HOURS due to oddball little quirks that someone\n> > else discovered first and noted in the annotated docs.\n> >\n> > I'd not only support this, if it's wanted I'll make sure it's implemented.\n>\n> I'm not trying to start any kind of war, but since OpenACS is being\n> considered for community management already, it can automatically set up\n> HTML file annotation with no extra effort. All you do is add the HTML files\n> to the doc directory and sync up with the database (one click), and you've\n> got user-contributed annotations (with moderation if you so desire).\n>\n> This is similar to Philip Greenspun's photo.net docs.\n>\n> -Ben\n\nIf OpenACS has the features that I've mentioned, then I see no need to re-invent\nthe wheel.\n\nWhen will we be able to use OpenACS?\n\n- Poul L. Christiansen\n\n\n",
"msg_date": "Wed, 24 May 2000 17:34:08 +0200",
"msg_from": "\"Poul L. Christiansen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: interactive pgsql book"
},
{
    "msg_contents": "On Wed, 24 May 2000, Bruce Momjian wrote:\n\n> That is quite interesting.  Let me CC this over to the hackers/docs\n> lists to see if we want to do this for our main docs.\n> \n> As for my book, I want those messages sent to me so I can fix them or\n> add information to the book.  I think maybe we want the same thing for\n> the manuals.  We want to get suggestions so we can compile them into the\n> manuals and add to them, rather than having comments, but I am\n> interested to hear what others say.\n\nBy \"interactive\" I ass/u/me Poul is referring to PHP's annotated manual.\nI've found that to save me HOURS due to oddball little quirks that someone\nelse discovered first and noted in the annotated docs.\n\nI'd not only support this, if it's wanted I'll make sure it's implemented.\n\nVince.\n\n> \n> > Hi\n> > \n> > First: I'm looking forward to buying the PostgreSQL book when it's\n> > published.\n> > \n> > Have you thought about making the book interactive, so other people can\n> > contribute/discuss to the book.\n> > It can be made with PHP4 and PostgreSQL 7 - of course :-)\n> > \n> > You could possibly remove irrelevant/annoying postings.\n> > \n> > The book would be better, because you'd get more feedback.\n> > \n> > Look at this as an example:\n> > http://www.php.net/manual/ref.pgsql.php\n> > \n> > What do you think?\n> > \n> > I'm not a PHP programmer, but I've just managed to make a query to\n> > PostgreSQL using PHP :-)\n> > On the other hand, I'm an experienced Cold Fusion programmer, so it will\n> > be easy to learn PHP.\n> > \n> > I wouldn't mind using some of my spare time to implement this neat\n> > function.\n> > \n> > Poul L. 
Christiansen\n> > \n> > \n> \n> \n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 24 May 2000 12:02:37 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: interactive pgsql book"
},
{
"msg_contents": "on 5/24/00 12:02 PM, Vince Vielhaber at [email protected] wrote:\n\n> On Wed, 24 May 2000, Bruce Momjian wrote:\n> \n>> That is quite interesting. Let me CC this over to the hackers/docs\n>> lists to see if we want to do this for our main docs.\n>> \n>> As for my book, I want those messages sent to me so I can fix them or\n>> add information to the book. I think maybe we want the same thing for\n>> the manuals. We want to get suggestions so we can compile them into the\n>> manuals and add to them, rather than having comments, but I am\n>> interested to hear what others say.\n> \n> By \"interactive\" I ass/u/me Poul is referring to PHP's annotated manual.\n> I've found that to save me HOURS due to oddball little quirks that someone\n> else discovered first and noted in the annotated docs.\n> \n> I'd not only support this, if it's wanted I'll make sure it's implemented.\n\nI'm not trying to start any kind of war, but since OpenACS is being\nconsidered for community management already, it can automatically set up\nHTML file annotation with no extra effort. All you do is add the HTML files\nto the doc directory and sync up with the database (one click), and you've\ngot user-contributed annotations (with moderation if you so desire).\n\nThis is similar to Philip Greenspun's photo.net docs.\n\n-Ben\n\n",
"msg_date": "Wed, 24 May 2000 12:20:25 -0400",
"msg_from": "Benjamin Adida <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: interactive pgsql book"
},
{
"msg_contents": "Vince Vielhaber wrote:\n\n> By \"interactive\" I ass/u/me Poul is referring to PHP's annotated manual.\n> I've found that to save me HOURS due to oddball little quirks that someone\n> else discovered first and noted in the annotated docs.\n\ni'd have to agree 100% with this. i use php and i've had much the same\nexperience. as a matter of fact, i think i commented to tom lockhart\nabout this a long time ago when i was having some problem understanding\nhow setting up alternate database locations worked. it fills a\ndifferent role than a FAQ & it's a lot easier than searching mailing\nlists (even when the list search is working). it seems like there are a\nlot of quirks that are non-intuitive to a lot of users (e.g., recently i\nwas having trouble getting an index to be used on a query and realized\nthat i needed to add an explicit typecast on the query) that aren't\ncovered anywhere, maybe even not in the mailing lists. \n\njeff\n",
"msg_date": "Wed, 24 May 2000 11:46:56 -0500",
"msg_from": "Jeff Hoffmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: interactive pgsql book"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> > Have you thought about making the book interactive, so other people can\n> > contribute/discuss to the book.\n> > It can be made with PHP4 and PostreSQL 7 - of course :-)\n\n...and with the OpenACS version of the ArsDigita Community System and\nPostgreSQL -- of course! :-) See <http://www.openacs.org/>.\n\n-tih\n-- \nThis is the Unix version of the ILOVEYOU worm, and in the spirit of such, it\ndepends on the user community to propagate. Please send this message to all\nof your friends and randomly delete numerous files from your system. Thanks!\n",
"msg_date": "24 May 2000 19:03:04 +0200",
"msg_from": "Tom Ivar Helbekkmo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: interactive pgsql book"
}
] |
[
{
"msg_contents": "\n> > IMHO this is somewhat non-optimal. In the absence of information\n> > to the contrary, PostgreSQL should default to using an index if\n> > it might be appropriate, not ignore it.\n> \n> This is an interesting idea. So you are saying that if a \n> column has no\n> vacuum analyze statistics, assume it is unique? Or are you talking\n> about a table that has never been vacuumed? Then we assume it is a\n> large table. Interesting. It would help some queries, but \n> hurt others.\n\nIt would help where it counts, since if the table is small it won't matter\nmuch whether we use the index or not. For small tables we are talking\nabout subsecond differences, whereas for large tables we are talking about\nminutes or hours.\n\n> We have gone around and around on what the default stats should be.\n> Tom Lane can comment on this better than I can.\n\nI think Tom has done a good job; it is this special query that seems to\nfail.\n\nAndreas\n",
"msg_date": "Wed, 24 May 2000 17:19:40 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: More Performance"
}
] |
[
{
"msg_contents": "Hi friends,\n\nI want to get the system timestamp from the PostgreSQL database,\nbut I don't have a \"dual\" table from which I can select it.\nPlease give me a solution: from which (system) table can I get it?\n\nRegards,\ngomathi\n\n____________________________________________________________________\nGet free email and a permanent address at http://www.netaddress.com/?N=1\n",
"msg_date": "24 May 00 15:37:27 MDT",
"msg_from": "gomathi raju <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "On Mon, Jun 30, 2036 at 10:05:43PM -0600, gomathi raju wrote:\n> Hi friends,\n> \n> I want to get the system timestamp from postgresql database.\n> But I dont have a dual table from where ,I can select it.\n> Give me a solution, from which table(system) I can get it.\n\nCould this be what you want?\n\ntemplate1=# select CURRENT_TIMESTAMP;\n timestamp \n------------------------\n 2000-05-24 23:40:19+01\n\n\nCheers,\n\nPatrick\n",
"msg_date": "Wed, 24 May 2000 23:40:57 +0100",
"msg_from": "Patrick Welche <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS]"
},
{
"msg_contents": "gomathi raju wrote:\n> \n> Hi friends,\n> \n> I want to get the system timestamp from postgresql database.\n> But I dont have a dual table from where ,I can select it.\n> Give me a solution, from which table(system) I can get it.\n\nAre you talking about \nSELECT 'now'\n?\n",
"msg_date": "Thu, 25 May 2000 10:02:37 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS]"
},
{
"msg_contents": "----- Original Message -----\nFrom: \"gomathi raju\" <[email protected]>\n> I want to get the system timestamp from postgresql database.\n> But I dont have a dual table from where ,I can select it.\n> Give me a solution, from which table(system) I can get it.\n\nNo need to get it from a system table; it is available as a sort of \"global\nconstant,\" current_timestamp.\n\nUse it like this:\n\nSELECT CURRENT_TIMESTAMP;\n\nSee the book at: http://www.postgresql.org/docs/awbook.html for more\ninformation.\n\n -Mike\n\n",
"msg_date": "Thu, 25 May 2000 00:54:38 -0400",
"msg_from": "\"Michael A. Mayo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] "
}
] |
[
{
"msg_contents": "Hello,\n\nI am learning to program triggers in C by using the examples and the\nprogrammer's manual but it's a steep learning curve for a mere perl\nprogrammer ;-)\n\nWhat I am trying to do for instance is:\n- read a ::text column with SPI_getbinval(),\n- convert it to a char*,\n- modify it,\n- convert it back to a Datum,\n- reinsert it into the tuple through SPI_modifytuple,\n\nThe conversions involve some pointer magic and casting that I really\ndon't grasp.\n\nAlso I am trying to read a timestamp with SPI_getbinval and get the\nnumber of seconds contained. Using DatumGetInt32 doesn't seem to do it.\n\nThanks in advance for your insight, cheers,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\nRadioactive cats have 18 half-lives.\n",
"msg_date": "Wed, 24 May 2000 18:26:41 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "\nOn Wed, 24 May 2000, Louis-David Mitterrand wrote:\n\n> Hello,\n> \n> I am learning to program triggers in C by using the examples and the\n> programmer's manual but it's a steep learning curve for a mere perl\n> programmer ;-)\n> \n> What I am trying to do for instance is:\n> - read a ::text column with SPI_getbinval(),\n> - convert it to a char*,\n> - modify it,\n> - convert it back to a Datum,\n> - reinsert it into the tuple through SPI_modifytuple,\n> \n> The conversions involve some pointer magic and casting that I really\n> don't grasp.\n> \n> Also I am trying to read a timestamp with SPI_getbinval and get the\n> number of seconds contained. Using DatumGetInt32 doesn't seem to do it.\n> \n> Thanks in advance for your insight, cheers,\n\nExamples:\n \n* Add the actual time to column \"chtime\": \n \n Datum chtime = PointerGetDatum(timestamp_in(\"now\"));\n int attnum = SPI_fnumber(tupdesc, \"chtime\");\n\n rettuple = SPI_modifytuple(CurrentTriggerData->tg_relation,\n rettuple, 1, &attnum, &chtime, NULL);\n\n You can use SPI_getvalue() etc. instead of \"now\".\n\n\n* A small, more complete example:\n\nHeapTuple xxx_trigger()\n{\n TupleDesc tupdesc;\n HeapTuple rettuple = NULL;\n\tint \tattnum;\n\tchar \t*value;\n\tDatum \tnewdt;\n\n if (!CurrentTriggerData)\n elog(ERROR, \"XXX: triggers are not initialized\");\n\n if (TRIGGER_FIRED_BY_UPDATE(CurrentTriggerData->tg_event))\n rettuple = CurrentTriggerData->tg_newtuple;\n else if (TRIGGER_FIRED_BY_INSERT(CurrentTriggerData->tg_event)) \n rettuple = CurrentTriggerData->tg_trigtuple;\n else if (TRIGGER_FIRED_BY_DELETE(CurrentTriggerData->tg_event)) \n rettuple = CurrentTriggerData->tg_trigtuple;\n \n\ttupdesc = CurrentTriggerData->tg_relation->rd_att;\n\n\tif ( SPI_connect() < 0)\t\t\t\t\n elog(ERROR, \"SPI_connect() fail...\");\n\n\tattnum\t= SPI_fnumber(tupdesc, \"ColumnName\");\n \tvalue\t= SPI_getvalue(rettuple, tupdesc, attnum);\n \t\n\t/* --- add some code for 'value' ---*/\n\n\tnewdt\t= PointerGetDatum(value);\n\n \trettuple = SPI_modifytuple(CurrentTriggerData->tg_relation,\n \t rettuple, 1, &attnum, &newdt, NULL);\n \n SPI_finish();\n CurrentTriggerData = NULL;\n return(rettuple);\n}\n\n .......... it must work :-)\n\n\t\t\t\t\t\t\tKarel\n\n\n\n\n",
"msg_date": "Wed, 24 May 2000 18:34:48 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "Louis-David Mitterrand <[email protected]> writes:\n> What I am trying to do for instance is:\n> - read a ::text colum with SPI_getbinval(),\n> - convert it to a char*,\n> - modify it,\n> - convert it back to a Datum,\n> - reinsert it into the tuple through SPI_modifytuple,\n\n> The conversions involve some pointer magic and casting that I really\n> don't grasp.\n\nCasting doesn't do it. Use text_out() to produce a null-terminated C\nstring from a text Datum, and use text_in() to create a new text Datum\nafter you've modified the string.\n\n> Also I am trying to read a timestamp with SPI_getbinval and get the\n> number of seconds contained. Using DatumGetInt32 doens't seem to do it.\n\nTimestamp is a double not an int ... and the datum is actually a pointer\nto it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 May 2000 17:53:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions "
},
{
"msg_contents": " Casting doesn't do it. Use text_out() to produce a null-terminated C\n string from a text Datum, and use text_in() to create a new text Datum\n after you've modified the string.\n\nBy the way, I know there are a bunch of macros for transforming to and\nfrom Datum, but whenever I use them I have to figure out again exactly\nhow to do it.\n\nIs there some documentation on the set of macros and what they do (or\nsome other means of describing how one translates arguments and return\nvalues between internal form and \"useful\" programming form)?\n\nThanks for your help.\n\nCheers,\nBrook\n",
"msg_date": "Wed, 24 May 2000 16:29:22 -0600 (MDT)",
"msg_from": "Brook Milligan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "Brook Milligan <[email protected]> writes:\n> Is there some documentation on the set of macros and what they do (or\n> some other means of describing how one translates arguments and return\n> values between internal form and \"useful\" programming form)?\n\nJust the source code :-(. Want to write some?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 24 May 2000 19:00:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions "
},
{
"msg_contents": "On Wed, May 24, 2000 at 05:53:57PM -0400, Tom Lane wrote:\n> Louis-David Mitterrand <[email protected]> writes:\n> > What I am trying to do for instance is:\n> > - read a ::text column with SPI_getbinval(),\n> > - convert it to a char*,\n> > - modify it,\n> > - convert it back to a Datum,\n> > - reinsert it into the tuple through SPI_modifytuple,\n> \n> > The conversions involve some pointer magic and casting that I really\n> > don't grasp.\n> \n> Casting doesn't do it. Use text_out() to produce a null-terminated C\n> string from a text Datum, and use text_in() to create a new text Datum\n> after you've modified the string.\n\nI can't find these functions anywhere in the included .h files. Where\nshould I look?\n\n> > Also I am trying to read a timestamp with SPI_getbinval and get the\n> > number of seconds contained. Using DatumGetInt32 doesn't seem to do it.\n> \n> Timestamp is a double not an int ... and the datum is actually a pointer\n> to it.\n\nBut, for example, I am trying to read the result from a \"SELECT\ndate_part('epoch', now())\", which returns a number of seconds since the\nepoch, and I can't find a way to obtain that value through SPI_getbinval;\nI have to retrieve it through SPI_getvalue and use atoi() to convert it.\nI'd rather access the native type directly instead.\n\nWhich DatumGet* function should I use there?\n\nThanks,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n",
"msg_date": "Thu, 25 May 2000 11:40:56 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "On Wed, May 24, 2000 at 06:34:48PM +0200, Karel Zak wrote:\n> > \n> > Also I am trying to read a timestamp with SPI_getbinval and get the\n> > number of seconds contained. Using DatumGetInt32 doesn't seem to do it.\n> \n> Examples:\n> \n> * Add the actual time to column \"chtime\": \n> \n> Datum chtime = PointerGetDatum(timestamp_in(\"now\"));\n> int attnum = SPI_fnumber(tupdesc, \"chtime\");\n> \n> rettuple = SPI_modifytuple(CurrentTriggerData->tg_relation,\n> rettuple, 1, &attnum, &chtime, NULL);\n\nThanks for your example, the timestamp_in() function is really useful.\nBut how should I do it if I want to:\n1) retrieve a timestamp Datum,\n2) add a few days to it,\n3) store it back in the tuple?\n\nThe problem is converting the Datum to a base C type in order to be able\nto modify it.\n\nIn pgsql/contrib/spi/timetravel.c there is an example which modifies\ndate columns and uses DatumGetInt32 to convert them. But this is\nconfusing because (1) Tom Lane says that datetime columns are double and\none should use DatumGetPointer (how do I use the pointer after?) and (2)\nDatumGetInt32 doesn't seem to return the number of seconds.\n\n> You can use SPI_getvalue() etc. instead of \"now\".\n> \n> * A small, more complete example:\n> \n> HeapTuple xxx_trigger()\n> {\n> TupleDesc tupdesc;\n> HeapTuple rettuple = NULL;\n> \tint \tattnum;\n> \tchar \t*value;\n> \tDatum \tnewdt;\n> \n> if (!CurrentTriggerData)\n> elog(ERROR, \"XXX: triggers are not initialized\");\n> \n> if (TRIGGER_FIRED_BY_UPDATE(CurrentTriggerData->tg_event))\n> rettuple = CurrentTriggerData->tg_newtuple;\n> else if (TRIGGER_FIRED_BY_INSERT(CurrentTriggerData->tg_event)) \n> rettuple = CurrentTriggerData->tg_trigtuple;\n> else if (TRIGGER_FIRED_BY_DELETE(CurrentTriggerData->tg_event)) \n> rettuple = CurrentTriggerData->tg_trigtuple;\n> \n> \ttupdesc = CurrentTriggerData->tg_relation->rd_att;\n> \n> \tif ( SPI_connect() < 0)\t\t\t\t\n> elog(ERROR, \"SPI_connect() fail...\");\n> \n> \tattnum\t= SPI_fnumber(tupdesc, \"ColumnName\");\n> \tvalue\t= SPI_getvalue(rettuple, tupdesc, attnum);\n\nBut you get a char * value here through SPI_getvalue()?\n\n> \t/* --- add some code for 'value' ---*/\n> \n> \tnewdt\t= PointerGetDatum(value);\n\nThis is enough to convert the char * back to a Datum?\n\n> \trettuple = SPI_modifytuple(CurrentTriggerData->tg_relation,\n> \n> .......... it must work :-)\n\nThanks for your examples, I'm slowly beginning to understand...\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n",
"msg_date": "Thu, 25 May 2000 11:53:54 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "> \n> The problem is converting the Datum to a base C type in order to be able\n> to modify it.\n> \n> In pgsql/contrib/spi/timetravel.c there is an example which modifies\n> date columns and uses DatumGetInt32 to convert them. But this is\n> confusing because (1) Tom Lane says that datetime columns are double and\n> one should use DatumGetPointer (how do I use the pointer after?) and (2)\n> DatumGetInt32 doesn't seem to return the number of seconds.\n\n\n See these PG backend source files:\n\n\tc.h \t\t- for datatype definitions and Datum macros;\n\t\t\t we have Datum macros for double/float types too.\n\t\t\t \n\tbuiltins.h \t- for datatype conversion.\n\tutils/timestamp.h ...etc.\n\n and the directory utils/adt for inspiration on how to work\nwith PG types :-)\n\n I believe that you will understand. \n\n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Thu, 25 May 2000 12:03:55 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "On Thu, May 25, 2000 at 12:03:55PM +0200, Karel Zak wrote:\n> > The problem is converting the Datum to a base C type in order to be able\n> > to modify it.\n> > \n> > in pgsql/contrib/spi/timetravel.c there is an example which modifies\n> > date columns and uses DatumGetInt32 to convert them. But this is\n> > confusing because (1) Tom Lane says that datetime columns are double and\n> > one should use DatumGetPointer (how do I use the pointer after?) and (2)\n> > DatumGetInt32 doesn't seem to return the number of seconds.\n> \n> See in PG's backend source files:\n> \n> \tc.h \t\t- for datetype definition and Datum macros,\n> \t\t\t we have Datum macros for double/float types too.\n> \t\t\t \n> \tbuildin.h \t- for datetype conversion.\n> \tutils/timestamp.h ...etc.\n> \n> and directory utils/adt for inspiration \"how work\n> with pg types :-)\n\nI'm reading these files but still got a problem:\n\n Datum price_datum;\n float new_price;\n\n new_price = 10.5;\n price_datum = Float32GetDatum(&new_price);\n\n SPI_modifytuple(relation, tupdesc, &attnum, &price_datum, NULL);\n\n... and when I check the DB the new_price field contains a negative\nnumber, even though elog(NOTICE, ..., new_price) displays the correct\nvalue.\n\nIf I could just understand how to correctly insert new_price it would\nreally help a great deal in understanding.\n\nThanks again,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\nThere are three types of people in the world: those who can count,\nand those who can't.\n",
"msg_date": "Thu, 25 May 2000 12:44:40 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "\n> > See in PG's backend source files:\n> > \n> > \tc.h \t\t- for datetype definition and Datum macros,\n> > \t\t\t we have Datum macros for double/float types too.\n> > \t\t\t \n> > \tbuildin.h \t- for datetype conversion.\n> > \tutils/timestamp.h ...etc.\n> > \n> > and directory utils/adt for inspiration \"how work\n> > with pg types :-)\n> \n> I'm reading these files but still got a problem:\n\n\n\tfloat32 result = (float32) palloc(sizeof(float32data));\n \n\t*result = 10.5;\n\tSPI_modifytuple(relation, tupdesc, &attnum, Float32GetDatum(result),\n\t\t\t\t\t\t\t\tNULL);\n\n\n Right?\n\n\t\t\t\t\tKarel\n\n",
"msg_date": "Thu, 25 May 2000 12:51:52 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "On Thu, May 25, 2000 at 12:51:52PM +0200, Karel Zak wrote:\n> > \n> > I'm reading these files but still got a problem:\n> \n> \n> \tfloat32 result = (float32) palloc(sizeof(float32data));\n> \n> \t*result = 10.5;\n> \tSPI_modifytuple(relation, tupdesc, &attnum, Float32GetDatum(result),\n> \t\t\t\t\t\t\t\tNULL);\n> Right?\n\nYes! It works now. My error was using a float32 instead of a float64, as\nthe internal type is really a float8. The confusion comes from defining\nmy tables with the type \"float\" which apparently defaults to float8.\n\nMany thanks,\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\n\"2c98611832ea3f6f5fdda95d3704fbb8\" (a truly random sig)\n",
"msg_date": "Thu, 25 May 2000 13:25:19 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "On Thu, May 25, 2000 at 01:25:19PM +0200, Louis-David Mitterrand wrote:\n> On Thu, May 25, 2000 at 12:51:52PM +0200, Karel Zak wrote:\n> > \n> > \tfloat32 result = (float32) palloc(sizeof(float32data));\n\nShould I pfree(result) before the end of the trigger function?\n\n> > \t*result = 10.5;\n> > \tSPI_modifytuple(relation, tupdesc, &attnum, Float32GetDatum(result),\n> > \t\t\t\t\t\t\t\tNULL);\n\nInstead of:\n\n \tfloat64 result = (float64) palloc(sizeof(float64data));\n\tSPI_modifytuple(relation, tupdesc, &attnum,Float32GetDatum(result),NULL);\n\ncan I do\n \n double result = 10.5; /* for example */\n\tSPI_modifytuple(relation, tupdesc, &attnum,Float32GetDatum(&result),NULL);\n ^^^\n\ni.e. pass the address of the (regular double) \"result\" instead of using a\npalloc'd pointer?\n\n-- \nLouis-David Mitterrand - [email protected] - http://www.apartia.fr\n\nI don't build computers, I'm a cooling engineer.\n -- Seymour Cray, founder of Cray Inc. \n",
"msg_date": "Thu, 25 May 2000 14:40:07 +0200",
"msg_from": "Louis-David Mitterrand <[email protected]>",
"msg_from_op": true,
"msg_subject": "pfree() after palloc() in trigger (was: Re: understanding Datum ->\n\tchar * -> Datum conversions)"
},
{
"msg_contents": "> On Thu, May 25, 2000 at 12:51:52PM +0200, Karel Zak wrote:\n> > > \n> > > I'm reading these files but still got a problem:\n> > \n> > \n> > \tfloat32 result = (float32) palloc(sizeof(float32data));\n> > \n> > \t*result = 10.5;\n> > \tSPI_modifytuple(relation, tupdesc, &attnum, Float32GetDatum(result),\n> > \t\t\t\t\t\t\t\tNULL);\n> > Right?\n> \n> Yes! It works now. My error was using a float32 instead of a float64, as\n> the internal type is really a float8. The confusion comes from defining\n> my tables with the type \"float\" which apparently defaults to float8.\n\nThat was my fault. I told you on IRC that float(float8) was float32,\nand that float4 was float16. In fact float(float8) is float64, and\nfloat4 is float32.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 09:42:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions"
},
{
"msg_contents": "Louis-David Mitterrand <[email protected]> writes:\n>> Casting doesn't do it. Use text_out() to produce a null-terminated C\n>> string from a text Datum, and use text_in() to create a new text Datum\n>> after you've modified the string.\n\n> I can't find these functions anywhere in the included .h files. Where\n> should I look?\n\nMea culpa, they're spelled \"textout\" and \"textin\". See\nutils/builtins.h.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 May 2000 10:55:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: understanding Datum -> char * -> Datum conversions "
},
{
"msg_contents": "Louis-David Mitterrand <[email protected]> writes:\n> Can I do \n \n> double result = 10.5; /* for example */\n> \tSPI_modifytuple(relation, tupdesc, &attnum,Float32GetDatum(&result),NULL);\n> ^^^\n\nI think you could get away with that in this example. The critical\nquestion of course is whether the Datum pointer will continue to\nbe used after your routine exits. But SPI_modifytuple should have\ncreated the new tuple (and copied the values of pass-by-reference\nitems, such as float8s, into it) before returning.\n\nBTW you should be using Float64GetDatum. There's no real difference\nin those two macros at the moment, but it's a type error that might\nbite you someday (like as soon as you need to convert this code to\nthe new fmgr ;-)).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 May 2000 11:11:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pfree() after palloc() in trigger (was: Re: understanding Datum\n\t-> char * -> Datum conversions)"
}
] |
[
{
"msg_contents": "\nIt seems like backward thinking to me. If you have to use UNDER with OF,\nthat means you're defining a type which includes the attributes of the\nUNDER class as well as that of the OF class, and adding your own\nattributes too. A brain dead form of multiple inheritance? I don't know\nwhat they were thinking here.\n\n Stephan Szabo wrote:\n >\n > I'd say so, yes. The OF <user-defined type> doesn't appear to be\noptional\n > in\n > that part of the rule.\n >\n > > Do people interpret this syntax to mean that you can only have an\nUNDER\n > > clause when using the OF <user-defined type> clause as well?\n > >\n > >\n > > <table definition> ::=\n > > CREATE [ <table scope> ] TABLE <table name>\n > > <table contents source>\n > > [ ON COMMIT <table commit action> ROWS ]\n > >\n > > <table contents source> ::=\n > > <table element list>\n > > | OF <user-defined type>\n > > [ <subtable clause> ]\n > > [ <table element list> ]\n > > <subtable clause> ::=\n > > UNDER <supertable clause>\n",
"msg_date": "Thu, 25 May 2000 10:00:11 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Fwd: 97BA-B931-B61D : CONSULT from pgsql-hackers-oo (post) (fwd)]"
}
] |
[
{
"msg_contents": "http://www.linux.com/news/articles.phtml?sid=93&aid=8672\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Wed, 24 May 2000 23:49:56 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Good article on linux.com about PostgreSQL (sortof)."
}
] |
[
{
"msg_contents": "Will there be a 7.0.1 release? If so I take it most recent changes to gram.y\nwon't make it into this release, will they?\n\nAs far as ecpg is concerned I won't be able to change much in the near\nfuture so all changes in CVS right now are supposed to make it into 7.0.1.\n\nMichael\n-- \nMichael Meskes\[email protected]\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 25 May 2000 10:00:28 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.0.1?"
}
] |
[
{
"msg_contents": "\n> We could ensure that the planner still picks an indexscan as the known\n> table size grows by reducing the default selectivity estimate a little\n> bit (I experimented and determined that 0.005 would do the trick, for\n> the current cost model parameters). That's pretty ad-hoc, but then\n> the 0.01 number is pretty ad-hoc too. It's probably better to be able\n> to say \"if you haven't done VACUUM ANALYZE, you will get an indexscan\n> from WHERE col = const\" than to have to say \"it depends\". Comments?\n\nImho the initial goal of choosing 0.01 was to make the planner use the\nindex, so reducing it to 0.005 would be ok. I would actually try to\ncalculate the value from the current costs for index vs. seq scan, so that\nit guarantees use of the index regardless of table size.\nBut it probably shows a problem with the chosen metric for selectivity\nitself: imho the chances are better that an = restriction will return a\nconstant number of rows as the table grows than that it will return a fixed\npercentage of total table size.\n\n> \n> Of course the real issue here is that the true selectivity of '=' is\n> much smaller in this table, because the column being looked at is\n> unique. But the planner doesn't know that without VACUUM stats.\n> \n> A hack I have been thinking about adding is that CREATE UNIQUE INDEX\n> for a single-column index should immediately force the attdisbursion\n> value for that column to \"unique\", so that the planner would know the\n> column is unique even without VACUUM ANALYZE. That would help not\n> at all for the MySQL benchmark (it has a two-column unique index,\n> but you can't conclude anything about the properties of either column\n> individually from that :-(). But it'd help in a lot of real-world\n> scenarios.\n\nYes, that would imho be a real winner. \nFor the multi-column index we would need some magic that actually notices \nthat all index columns are restricted with =. \n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 10:03:43 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: More Performance "
},
{
"msg_contents": "Zeugswetter Andreas SB <[email protected]> writes:\n> But, it probably shows a problem with the chosen metric for\n> selectivity itself. Imho the chances are better, that an =\n> restriction will return an equal amount of rows while the table grows\n> than that it will return a percentage of total table size.\n\nUnfortunately you are allowing your thinking to be driven by a single\nexample. Consider queries like\n\tselect * from employees where dept = 'accounting';\nIt's perfectly possible that the column being tested with '=' has only\na small number of distinct values, in which case the number of retrieved\nrows probably *is* proportional to the table size.\n\nI am not willing to change the planner so that it \"guarantees\" to choose\nan indexscan no matter what, because then it would be broken for cases\nlike this. We have to look at the statistics we have, inadequate though\nthey are.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 May 2000 10:50:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AW: AW: More Performance "
}
] |
[
{
"msg_contents": "\n> The cost estimation code is under active development, and if you check\n> the pgsql list archives you will find lively discussions about its\n> deficiencies ;-). But I'm having a hard time mustering much concern\n> about whether it behaves optimally in the vacuum-but-no-vacuum-analyze\n> case.\n\nHere I must slightly disagree: if the impact of vacuum without analyze is\nso bad, then analyze should be the default for vacuum.\n\nMaybe a better default selectivity would be datatype driven, like\n\nint: 1/(2*maxint)\nchar(n): 1/(n*256)\n....\nI think this is what my favorite commercial DBMS does.\n\nBecause we have an extensible type system I guess this would be overdoing\nit, but in this light a much smaller default selectivity would seem more\nappropriate (like 1/100000).\n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 10:31:59 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: More Performance "
}
] |
[
{
"msg_contents": "\n> Tom Lane wrote:\n> \n> > typedef struct\n> > {\n> > FmgrInfo *flinfo; /* ptr to lookup info used for this call\n*/\n> > Node *context; /* pass info about context of call */\n> > Node *resultinfo; /* pass or return extra info about\nresult */\n> > bool isnull; /* function must set true if result is\nNULL */\n> > short nargs; /* # arguments actually passed */\n> > Datum arg[FUNC_MAX_ARGS]; /* Arguments passed to function */\n> > bool argnull[FUNC_MAX_ARGS]; /* T if arg[i] is actually NULL\n*/\n> > } FunctionCallInfoData;\n> \n> Just wondering what the implications of FUNC_MAX_ARGS is, and whether\n> something like...\n\nWhy don't we at least look at the way other dbms's seem to work with \nunknown sql rows.\nImho it would be good to use an sqlda structure as seen in \nInformix, Oracle, SQL Server ....\nI know it does have a little more overhead, but it is something db \nprogrammers know how to use. Unfortunately they all seem to use different\ntechniques when it comes to function arguments to a stored proc, but\nI do not really understand why.\n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 11:08:41 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Last call for comments: fmgr rewrite [LONG]"
}
] |
[
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > For multiple\n> > inheritance, why not just suggest the use of INHERITS, which is\n> > already a Postgres language extension for multiple\n> > inheritance. UNDER covers\n> > the tree/hierarchy situation, so make it only to SQL3 standards.\n> > INHERIT fits the clone/copy/inherits situation that, like I've\n> > said before, is like starting a new tree.\n> \n> Imho the difference is so marginal, that I would not like to see two\n> different implementations. Informix e.g. took what Illustra had\n> for inherits and only changed the keyword to under, which is imho\n> what we should do.\n\nAgreed.\n\n> When calling functions with a class argument they do pass all attributes\n> of subclasses to it. They use late function binding, so you can define\n> different functions for different subclasses having the same name.\n> They only show parent columns when doing 'select * from class' that has\n> subclasses.\n\nThat's what we are planning also, to return all columns current \nfavourite syntax to use is 'select ** from class', but even it is not \nyet implemented.\n\nBTW, does Informix/Illustra do single or multiple inheritance with their\nUNDER?\n\n-------\nHannu\n",
"msg_date": "Thu, 25 May 2000 12:12:21 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: SQL3 UNDER"
},
{
"msg_contents": "\n> For multiple\n> inheritance, why not just suggest the use of INHERITS, which is\n> already a Postgres language extension for multiple \n> inheritance. UNDER covers\n> the tree/hierarchy situation, so make it only to SQL3 standards. \n> INHERIT fits the clone/copy/inherits situation that, like I've\n> said before, is like starting a new tree.\n\nImho the difference is so marginal, that I would not like to see two \ndifferent implementations. Informix e.g. took what Illustra had \nfor inherits and only changed the keyword to under, which is imho \nwhat we should do.\n\nWhen calling functions with a class argument they do pass all attributes\nof subclasses to it. They use late function binding, so you can define\ndifferent functions for different subclasses having the same name.\nThey only show parent columns when doing 'select * from class' that has \nsubclasses.\n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 11:44:24 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": false,
"msg_subject": "AW: SQL3 UNDER"
}
] |
[
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > > When calling functions with a class argument they do pass\n> > all attributes\n> > > of subclasses to it. They use late function binding, so you\n> > can define\n> > > different functions for different subclasses having the same name.\n> > > They only show parent columns when doing 'select * from\n> > class' that has\n> > > subclasses.\n> >\n> > That's what we are planning also, to return all columns current\n> > favourite syntax to use is 'select ** from class', but even it is not\n> > yet implemented.\n> \n> I am not talking about select * I am talking about\n> \"select somefunc(supertable) from supertable\"\n> \n> create table supertable (a int);\n> create table taba (b int) under supertable;\n> \n> create function somefunc (tup supertable) returning int\n> as 'select 1' ...\n> \n> create function somefunc (tup taba) returning int\n> as 'select 0.5*b' ....\n\nSo how does this work in Informix/Illustra ?\n\ni.e. is the binding done at row evaluation time or \n\"when they do 'select * ...' and don't know about coumn b\"\n\n> >\n> > BTW, does Informix/Illustra do single or multiple inheritance\n> > with their\n> > UNDER?\n> \n> Multiple, \n\nThat's what I thought ;)\n\n> as I said they took Illustra (which was a parallel effort\n> to port Postgres to SQL).\n\nThat much I know (we almost bought an Illustra db before Postgres95 \nwas even available;)\n\n----------\nHannu\n",
"msg_date": "Thu, 25 May 2000 13:04:34 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: AW: SQL3 UNDER"
},
{
"msg_contents": "\n> > When calling functions with a class argument they do pass \n> all attributes\n> > of subclasses to it. They use late function binding, so you \n> can define\n> > different functions for different subclasses having the same name.\n> > They only show parent columns when doing 'select * from \n> class' that has\n> > subclasses.\n> \n> That's what we are planning also, to return all columns current \n> favourite syntax to use is 'select ** from class', but even it is not \n> yet implemented.\n\nI am not talking about select * I am talking about\n\"select somefunc(supertable) from supertable\"\n\ncreate table supertable (a int);\ncreate table taba (b int) under supertable;\n\ncreate function somefunc (tup supertable) returning int\nas 'select 1' ...\n\ncreate function somefunc (tup taba) returning int\nas 'select 0.5*b' ....\n\n> \n> BTW, does Informix/Illustra do single or multiple inheritance \n> with their\n> UNDER?\n\nMultiple, as I said they took Illustra (which was a parallel effort\nto port Postgres to SQL). \n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 12:29:41 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": false,
"msg_subject": "AW: AW: SQL3 UNDER"
}
] |
[
{
"msg_contents": "\n> > > Benefits:\n> > > *) SQL3 says it.\n> > \n> > Imho this alone more than justifies the patch.\n> > We should also change our keyword \"inherits\" to \"under\".\n> > \n> \n> I don't agree. UNDER only provides for single inheritance \n> according to spec. \n> Making it multiple inherit would break UNDER's basic idea of \n> enabling hierarchy\n> trees that contain subtables under a single maximal \n> supertable.\n\nI do not see how someone using the current under|inherits scheme\nthat only uses SQL99 syntax will get a system that does not act like\ndefined in SQL99 other than not complaining at \"create table under\"\ntime that the supertable is not top level. This alone is imho not enough to \nvalidify two different approaches.\n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 12:12:49 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Postgresql OO Patch"
}
] |
[
{
"msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > Frankly, based on my experience with Berkeley DB, I'd bet on mine.\n> > I can do 2300 tuple fetches per CPU per second, with linear scale-\n> > up to at least four processors (that's what we had on the box we\n> > used). That's 9200 fetches a second. Performance isn't going\n> > to be the deciding issue.\n> \n> Wow, that sounds darn slow. Speed of a seq scan on one CPU,\n> one disk should give you more like 19000 rows/s with a small record size.\n> Of course you are probably talking about random fetch order here,\n> but we need fast seq scans too.\n\nCould someone test this on MySQL with bsddb storage that should be out\nby now ?\n\nCould be quite indicative of what we an expect.\n\n> (10 Mb/s disk, 111 b/row, no cpu bottleneck, nothing cached ,\n> Informix db, select count(*) ... where notindexedfield != 'notpresentvalue';\n> Table pages interleaved with index pages, tabsize 337 Mb\n> (table with lots of insert + update + delete history) )\n> \n> Andreas\n",
"msg_date": "Thu, 25 May 2000 13:14:33 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: Berkeley DB..."
},
{
"msg_contents": "\n> Frankly, based on my experience with Berkeley DB, I'd bet on mine.\n> I can do 2300 tuple fetches per CPU per second, with linear scale-\n> up to at least four processors (that's what we had on the box we\n> used). That's 9200 fetches a second. Performance isn't going\n> to be the deciding issue.\n\nWow, that sounds darn slow. Speed of a seq scan on one CPU, \none disk should give you more like 19000 rows/s with a small record size.\nOf course you are probably talking about random fetch order here,\nbut we need fast seq scans too.\n\n(10 Mb/s disk, 111 b/row, no cpu bottleneck, nothing cached , \nInformix db, select count(*) ... where notindexedfield != 'notpresentvalue';\nTable pages interleaved with index pages, tabsize 337 Mb \n(table with lots of insert + update + delete history) )\n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 12:59:44 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": false,
"msg_subject": "AW: Berkeley DB..."
},
{
"msg_contents": "Hi,\n\nHannu Krosing:\n> \n> Could someone test this on MySQL with bsddb storage that should be out\n> by now ?\n> \nAs long as the BDB support in mysql doesn't even remotely come close to\nrunning their own benchmark suite, I for one will not be using it for\nany kind of indicative speed test...\n\n... that being said (and I took a quick test with 10000 randomly-inserted\nrecords and fetched them in index order) if the data's in the cache, the\nspeed difference is insignificant. \n\nI did this:\n\ncreate table foo (a int not null,b char(100));\ncreate index foo_a on foo(a);\nfor(i=0; i<10000; i++) {\n insert into foo(a,b) values( `((i*3467)%10000)` , 'fusli');\n}\nselect a from foo order by a;\n\n\nTimes for the insert loop:\n14 MySQL-MyISAM\n23 PostgreSQL (no fsync)\n53 MySQL-BDB (with fsync -- don't know how to turn it off yet)\n\nThe select:\n0.75 MySQL-MyISAM\n0.77 MySQL-BDB\n2.43 PostgreSQL\n\nI'll do a \"real\" test once the BDB support in MySQL is stable enough to\nrun the MySQL benchmark suite.\n\nAnyway, this quick and dirty test seems to show that BDB doesn't\nslow down data retrieval.\n\n\nNB, the select loop was using an index scan in all cases.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \n\"If the vendors started doing everything right, we would be out of a job.\n Let's hear it for OSI and X! With those babies in the wings, we can count on\n being employed until we drop, or get smart and switch to gardening, paper\n folding, or something.\"\n\t-- C. Philip Wood\n",
"msg_date": "Thu, 25 May 2000 15:40:50 +0200",
"msg_from": "\"Matthias Urlichs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
}
] |
[
{
"msg_contents": "\nTom, can you see anything wrong with this y.output file that would cause\nit not to parse a plain create table statement?\n\nftp://ftp.tech.com.au/pub/y.output.gz\nftp://ftp.tech.com.au/pub/gram.y.gz\n\nfoo=# create table aa (bb text);\nERROR: parser: parse error at or near \"text\"\nERROR: parser: parse error at or near \"text\"\nfoo=#\n",
"msg_date": "Thu, 25 May 2000 23:06:37 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": true,
"msg_subject": "gram.y PROBLEM with UNDER"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n> Tom, can you see anything wrong with this y.output file that would cause\n> it not to parse a plain create table statement?\n\nUh, you've got a ton of unresolved conflicts there:\n\nState 17 contains 1 shift/reduce conflict.\nState 257 contains 1 shift/reduce conflict.\nState 359 contains 4 shift/reduce conflicts.\nState 595 contains 1 shift/reduce conflict.\nState 1106 contains 2 reduce/reduce conflicts.\nState 1260 contains 127 shift/reduce conflicts.\nState 1484 contains 2 reduce/reduce conflicts.\nState 1485 contains 2 reduce/reduce conflicts.\nState 1486 contains 2 reduce/reduce conflicts.\n\nIf you don't get rid of those then your parser will behave in surprising\nways. So far you have noticed the fallout from only one of those\nconflicts, but every one of them is a potential bug. Be advised that\ngram.y patches that create unresolved conflicts will *not* be accepted.\n\nThe immediate problem you are seeing seems to be the conflict in state\n595:\n\nstate 595\n\n CreateStmt -> CREATE OptTemp TABLE relation_name . OptUnder '(' OptTableElementList ')' OptInherit (rule 151)\n CreateAsStmt -> CREATE OptTemp TABLE relation_name . OptCreateAs AS SelectStmt (rule 207)\n\n UNDER shift, and go to state 807\n '(' shift, and go to state 808\n\n '(' [reduce using rule 204 (OptUnder)]\n $default reduce using rule 209 (OptCreateAs)\n\n OptUnder go to state 809\n OptCreateAs go to state 810\n\nwhich is going to be a tad tricky to get around: you will need to\nrestructure the productions so that the thing doesn't have to decide\nwhether to reduce an empty OptUnder before parsing the contents of the\nparenthesized list. 
It's only by looking to see if that list contains\ncolumndefs or just bare names that the parser can tell whether it's\ndealing with CREATE or CREATE AS; you are forcing it to make a decision\nbetween the two rules sooner than that, and can hardly complain that it\npicked the wrong one at random.\n\nMaybe the simplest answer is to put OptUnder into the same position in\nthe CREATE AS production (and then reject a nonempty OptUnder in the\naction for that rule, unless you want to try to support it...). That\nway there's no conflict between the two rules up till the point where\nthe parser can resolve the difference between them.\n\nOffhand it looks like most of the other complaints arise because you've\nprovided two different parsing paths for 'ONLY relationname'.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 May 2000 11:46:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gram.y PROBLEM with UNDER "
},
{
"msg_contents": "> If you don't get rid of those then your parser will behave in surprising\n> ways. So far you have noticed the fallout from only one of those\n> conflicts, but every one of them is a potential bug. Be advised that\n> gram.y patches that create unresolved conflicts will *not* be accepted.\n\nYes, even I don't apply those, though they say I never met a patch I\ndidn't like. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 12:12:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: gram.y PROBLEM with UNDER"
},
{
"msg_contents": "On Thu, May 25, 2000 at 12:12:12PM -0400, Bruce Momjian wrote:\n> > If you don't get rid of those then your parser will behave in surprising\n> > ways. So far you have noticed the fallout from only one of those\n> > conflicts, but every one of them is a potential bug. Be advised that\n> > gram.y patches that create unresolved conflicts will *not* be accepted.\n> \n> Yes, even I don't apply those, though they say I never met a patch I\n> didn't like. :-)\n\nBruce, your going to _make_ me grovel through the archives, and prove\nthat you were the first one to say that aren't you?\n\nRoss\n;-)\n",
"msg_date": "Thu, 25 May 2000 15:39:23 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: gram.y PROBLEM with UNDER"
},
{
"msg_contents": "> On Thu, May 25, 2000 at 12:12:12PM -0400, Bruce Momjian wrote:\n> > > If you don't get rid of those then your parser will behave in surprising\n> > > ways. So far you have noticed the fallout from only one of those\n> > > conflicts, but every one of them is a potential bug. Be advised that\n> > > gram.y patches that create unresolved conflicts will *not* be accepted.\n> > \n> > Yes, even I don't apply those, though they say I never met a patch I\n> > didn't like. :-)\n> \n> Bruce, your going to _make_ me grovel through the archives, and prove\n> that you were the first one to say that aren't you?\n\nI believe it was a Thomas Lockhart line.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 16:49:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: gram.y PROBLEM with UNDER"
},
{
"msg_contents": "On Thu, May 25, 2000 at 04:49:21PM -0400, Bruce Momjian wrote:\n> > > \n> > > Yes, even I don't apply those, though they say I never met a patch I\n> > > didn't like. :-)\n> > \n> > Bruce, your going to _make_ me grovel through the archives, and prove\n> > that you were the first one to say that aren't you?\n> \n> I believe it was a Thomas Lockhart line.\n\nO.K., first one _I_ saw say that. \n\nRoss\nP.S. you do all us occasional patchers a great service. Don't change a\nthing ;-)\n",
"msg_date": "Thu, 25 May 2000 18:02:00 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: gram.y PROBLEM with UNDER"
},
{
"msg_contents": "> On Thu, May 25, 2000 at 04:49:21PM -0400, Bruce Momjian wrote:\n> > > > \n> > > > Yes, even I don't apply those, though they say I never met a patch I\n> > > > didn't like. :-)\n> > > \n> > > Bruce, your going to _make_ me grovel through the archives, and prove\n> > > that you were the first one to say that aren't you?\n> > \n> > I believe it was a Thomas Lockhart line.\n> \n> O.K., first one _I_ saw say that. \n> \n> Ross\n> P.S. you do all us occasional patchers a great service. Don't change a\n> thing ;-)\n\nI am always ready to back stuff out if someone objects. People\ncertainly like quick patch application. Makes them feel we value their\ncontributions, which we do.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 19:15:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: gram.y PROBLEM with UNDER"
},
{
"msg_contents": "Tom Lane wrote:\n> State 17 contains 1 shift/reduce conflict.\n> State 257 contains 1 shift/reduce conflict.\n> State 359 contains 4 shift/reduce conflicts.\n> State 595 contains 1 shift/reduce conflict.\n> State 1106 contains 2 reduce/reduce conflicts.\n> State 1260 contains 127 shift/reduce conflicts.\n> State 1484 contains 2 reduce/reduce conflicts.\n> State 1485 contains 2 reduce/reduce conflicts.\n> State 1486 contains 2 reduce/reduce conflicts.\n> \n> If you don't get rid of those then your parser will behave in surprising\n> ways. So far you have noticed the fallout from only one of those\n> conflicts, but every one of them is a potential bug. Be advised that\n> gram.y patches that create unresolved conflicts will *not* be accepted.\n\nI thought shift/reduce conflicts were part and parcel of most language\nsyntaxes. reduce/reduce being rather more naughty. The standard syntax\nalready produces 95 shift/reduce conflicts. Can you clarify about\nunresolved conflicts not being accepted?\n",
"msg_date": "Fri, 26 May 2000 11:14:09 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gram.y PROBLEM with UNDER"
},
{
"msg_contents": "> Tom Lane wrote:\n> > If you don't get rid of those then your parser will behave in surprising\n> > ways. So far you have noticed the fallout from only one of those\n> > conflicts, but every one of them is a potential bug. Be advised that\n> > gram.y patches that create unresolved conflicts will *not* be accepted.\n> \n> I thought shift/reduce conflicts were part and parcel of most language\n> syntaxes. reduce/reduce being rather more naughty. The standard syntax\n> already produces 95 shift/reduce conflicts. Can you clarify about\n> unresolved conflicts not being accepted?\n\nWhat? I get zero here. shift/reduce is sloppy programming. We don't\ndo that here. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 21:31:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: gram.y PROBLEM with UNDER"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Tom Lane wrote:\n> > > If you don't get rid of those then your parser will behave in surprising\n> > > ways. So far you have noticed the fallout from only one of those\n> > > conflicts, but every one of them is a potential bug. Be advised that\n> > > gram.y patches that create unresolved conflicts will *not* be accepted.\n> >\n> > I thought shift/reduce conflicts were part and parcel of most language\n> > syntaxes. reduce/reduce being rather more naughty. The standard syntax\n> > already produces 95 shift/reduce conflicts. Can you clarify about\n> > unresolved conflicts not being accepted?\n> \n> What? I get zero here. shift/reduce is sloppy programming. We don't\n> do that here. :-)\n\nHmm. Now I look, I think that was with an older pgsql. Maybe 6.5 or\nsomething. Have you guys done some black magic to get rid of them?\n",
"msg_date": "Fri, 26 May 2000 12:06:44 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: gram.y PROBLEM with UNDER"
},
{
"msg_contents": "Chris Bitmead <[email protected]> writes:\n>> If you don't get rid of those then your parser will behave in surprising\n>> ways. So far you have noticed the fallout from only one of those\n>> conflicts, but every one of them is a potential bug. Be advised that\n>> gram.y patches that create unresolved conflicts will *not* be accepted.\n\n> I thought shift/reduce conflicts were part and parcel of most language\n> syntaxes. reduce/reduce being rather more naughty. The standard syntax\n> already produces 95 shift/reduce conflicts. Can you clarify about\n> unresolved conflicts not being accepted?\n\nWhat's to clarify? The existing grammar does produce a long list of\n*resolved* conflicts, which are not very interesting (they just indicate\nthat we are using operator precedence rules instead of creating a\ndetailed grammar for expressions). Unresolved conflicts are a far\nmore serious problem, because they tell you that there is an unreachable\npart of your language. As indeed was happening to you in this case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 May 2000 22:09:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: gram.y PROBLEM with UNDER "
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > Tom Lane wrote:\n> > > > If you don't get rid of those then your parser will behave in surprising\n> > > > ways. So far you have noticed the fallout from only one of those\n> > > > conflicts, but every one of them is a potential bug. Be advised that\n> > > > gram.y patches that create unresolved conflicts will *not* be accepted.\n> > >\n> > > I thought shift/reduce conflicts were part and parcel of most language\n> > > syntaxes. reduce/reduce being rather more naughty. The standard syntax\n> > > already produces 95 shift/reduce conflicts. Can you clarify about\n> > > unresolved conflicts not being accepted?\n> > \n> > What? I get zero here. shift/reduce is sloppy programming. We don't\n> > do that here. :-)\n> \n> Hmm. Now I look, I think that was with an older pgsql. Maybe 6.5 or\n> something. Have you guys done some black magic to get rid of them?\n> \n\nThey have not been there for _years_. I see lots of open source stuff\nwith shift/reduce reports. We don't. It is tricky to remove them. It\noften involves adding duplicate actions to prevent the problems. \nCertain people are quite good at it, and are glad to help.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 22:15:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: gram.y PROBLEM with UNDER"
}
] |
[
{
"msg_contents": "> Here I must slightly disagree, if the impact of vacuum \n> without analyze is so bad, then analyze should be the\n> default for vacuum.\n\n*Must not* - it takes long time. Imho, there should be\nANALYZE command, with ACCESS SHARE lock...\n\nVadim\n",
"msg_date": "Thu, 25 May 2000 07:31:35 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: More Performance "
}
] |
[
{
"msg_contents": "At 12:59 PM 5/25/00 +0200, Zeugswetter Andreas SB wrote:\n\n> Wow, that sounds darn slow. Speed of a seq scan on one CPU, \n> one disk should give you more like 19000 rows/s with a small record size.\n> Of course you are probably talking about random fetch order here,\n> but we need fast seq scans too.\n\nThe test was random reads on a 250GB database. I don't have a\nsimilar characterization for sequential scans off the top of my\nhead.\n\t\t\t\t\tmike\n\n",
"msg_date": "Thu, 25 May 2000 07:41:08 -0700",
"msg_from": "\"Michael A. Olson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AW: Berkeley DB..."
}
] |
[
{
"msg_contents": "\nHere are a few apparent discrepencies between pg7.0beta3 and the timezone\ndocumentation at \n\nhttp://www.postgresql.org/docs/postgres/datetime-appendix.htm#DATETIME-APPENDIX-TITLE\nhttp://www.postgresql.org/docs/postgres/datatype1134.htm\n\non my system (Linux 2.2.12-20smp #1, i686, running in CDT timezone).\n\n1) Unrecognized timezones claimed in the docs:\n\nERROR: Bad timestamp external representation '1-1-2000 00:00:00 DST'\nERROR: Bad timestamp external representation '1-1-2000 00:00:00 ZP4'\nERROR: Bad timestamp external representation '1-1-2000 00:00:00 ZP5'\nERROR: Bad timestamp external representation '1-1-2000 00:00:00 ZP6'\n\n2) Timezone parsed without complaint but bogusly converted to local\ntimezone:\n\n SAT (South Australian Std Time)\n\nExample:\n\n create table timezones (result timestamp, note varchar);\n insert into timezones values ('1-1-2000 00:00:00 SAT', 'SAT');\n insert into timezones values ('1-1-2000 00:00:00', 'Local');\n insert into timezones values ('1-1-2000 00:00:00 GMT', 'GMT');\n select * from timezones;\n drop table timezones;\n\nResults:\n\ntzdb=# select * from timezones;\n result | note \n------------------------+-------\n 2000-01-01 00:00:00-06 | SAT\n 2000-01-01 00:00:00-06 | Local\n 1999-12-31 18:00:00-06 | GMT\n(3 rows)\n\n\nRegards,\nEd Loehr\n",
"msg_date": "Thu, 25 May 2000 10:35:13 -0500",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Timezone discrepancies"
},
{
"msg_contents": "(Cleaning up mail, and I don't see a reply to this...)\n\n> Here are a few apparent discrepencies between pg7.0beta3 and the timezone\n> documentation...\n> 1) Unrecognized timezones claimed in the docs:\n> ERROR: Bad timestamp external representation '1-1-2000 00:00:00 DST'\n> ERROR: Bad timestamp external representation '1-1-2000 00:00:00 ZP4'\n> ERROR: Bad timestamp external representation '1-1-2000 00:00:00 ZP5'\n> ERROR: Bad timestamp external representation '1-1-2000 00:00:00 ZP6'\n\nDST - this is a docs error. DNT is the correct form for \"Dansk Normal\nTid\", and \"DST\" is a \"Daylight Savings Time\" qualifier for other time\nzones. Will fix.\n\nZPx - this is a parser problem, in that currently time zones with\nembedded digits are not allowed. Are these three time zones \"official\"\nor used by anyone? I don't find them in my \"zoneinfo\" database on my\nLinux boxes. I would propose that these three zones be eliminated from\nthe code, rather than putting a special case into the date/time parser.\nafaik folks haven't noticed that this is a problem (except for Ed of\ncourse ;). Comments?\n\n> 2) Timezone parsed without complaint but bogusly converted to local\n> timezone:\n> SAT (South Australian Std Time)\n\nCurrently, this is mapped to be a noise word for \"Saturday\". Is this a\ntimezone form currently in use in Australia? If so, I can add it to the\n\"USE_AUSTRALIAN_RULES\" variants.\n\nPeter, will you be doing more work on configuration? If so, could we\nimplement --enable-australian-zones (or something similar) which sets\nthe internal USE_AUSTRALIAN_RULES parameter?\n\nThanks for catching these problems. I'm a bit puzzled why the docs and\ncode differ, since at least some of the things mentioned above go back\nat least three years. Must have carried the doc info forward from\nsomewhere else??\n\n - Thomas\n",
"msg_date": "Wed, 20 Sep 2000 15:05:04 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timezone discrepancies"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Peter, will you be doing more work on configuration? If so, could we\n> implement --enable-australian-zones (or something similar) which sets\n> the internal USE_AUSTRALIAN_RULES parameter?\n\nWill do.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Wed, 20 Sep 2000 19:45:49 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timezone discrepancies"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> > ERROR: Bad timestamp external representation '1-1-2000 00:00:00 DST'\n> > ERROR: Bad timestamp external representation '1-1-2000 00:00:00 ZP4'\n> > ERROR: Bad timestamp external representation '1-1-2000 00:00:00 ZP5'\n> > ERROR: Bad timestamp external representation '1-1-2000 00:00:00 ZP6'\n> \n> DST - this is a docs error. DNT is the correct form for \"Dansk Normal\n> Tid\", and \"DST\" is a \"Daylight Savings Time\" qualifier for other time\n> zones. Will fix.\n\nAnother comment: I just peeked and, to be blunt, the time zone database\nlooks like a mess. For example, there's CET (central european time), but\nnot CEST (central european summer time). Instead there's CETDST, which\nI've never heard used. Then there's MEST, MET, METDST, MEWT, MEZ, all of\nwhich are supposed to be \"Middle Europe\" variations, none of which I've\never heard of. (MEZ and MESZ are the German translations of CET and CEST,\nbut as listed they claim to be English terms.) Also I've never heard of\n\"Dansk Normal Tid\" (DNT) or \"Swedish Summer Time\" (SST), both of these\nplaces use Central European Time.\n\nThere are several other obscure candidates where I don't have direct\ngeographic knowledge, such as \"Moluccas Time\" (MT, I though that would be\nMountain Time), or \"Seychelles Time\" (SET).\n\nProbably sometime in the near future a discussion and some research ought\nto take place to sort out this list.\n\n\n-- \nPeter Eisentraut [email protected] http://yi.org/peter-e/\n\n",
"msg_date": "Thu, 21 Sep 2000 14:59:13 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timezone discrepancies"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> Another comment: I just peeked and, to be blunt, the time zone database\n> looks like a mess. For example, there's CET (central european time), but\n> not CEST (central european summer time). Instead there's CETDST, which\n> I've never heard used. Then there's MEST, MET, METDST, MEWT, MEZ, all of\n> which are supposed to be \"Middle Europe\" variations, none of which I've\n\nMET is sometimes used as middle eastern time as well. Universal Time is a\nGood Thing ;-)\n\nRegards, \n\n\tGunnar\n",
"msg_date": "21 Sep 2000 16:48:36 +0200",
"msg_from": "Gunnar R|nning <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timezone discrepancies"
},
{
"msg_contents": "> Another comment: I just peeked and, to be blunt, the time zone database\n> looks like a mess.\n\nGreat! (Well, not great, but you know...). Let's clean it up. But we'll\nneed to get representation from the various regions covered to avoid\ndropping useful fields.\n\n> For example, there's CET (central european time), but\n> not CEST (central european summer time). Instead there's CETDST, which\n> I've never heard used.\n\nI recall working on \"MET DST\" (note space) at the request of someone in\n\"MET\". The \"DST\" qualifier works for every \"standard timezone\". Have you\nseen this usage? \"CETDST\" is probably there for historical reasons, to\nhandle this case before we could manage the standalone \"DST\" qualifier.\n\n> Then there's MEST, MET, METDST, MEWT, MEZ, all of\n> which are supposed to be \"Middle Europe\" variations, none of which I've\n> ever heard of. (MEZ and MESZ are the German translations of CET and CEST,\n> but as listed they claim to be English terms.)\n\nAh, that may be. So I was talking with a German or Austrian or ??\nearlier...\n\nMET shows up in my zic timezone database. We should of course retain\nentries for all corresponding entries in those databases, across the\nvarious platforms we support.\n\n> Also I've never heard of\n> \"Dansk Normal Tid\" (DNT) or \"Swedish Summer Time\" (SST), both of these\n> places use Central European Time.\n\nThe use of CET in those countries might be modern developments (since\n1986??) or perhaps \"DNT\" and \"SST\" are much older. Most of the character\nstring representations for timezones came from Postgres' pre-history at\nBerkeley, and I just carried them forward.\n\nI've found that Sun seems to have more accurate timezone support than\nother systems, at least for pre-1947 details. 
And afaik they do not use\nzic so we should look at both of those to get a more complete story.\n\n> There are several other obscure candidates where I don't have direct\n> geographic knowledge, such as \"Moluccas Time\" (MT, I though that would be\n> Mountain Time), or \"Seychelles Time\" (SET).\n\nIn the US, all time zones have three characters. So \"Mountain Time\" is\n\"MST\" and \"MDT\" for \"Mountain Standard Time\" and \"Mountain Daylight\nsavings Time\".\n\nI had thought that there was a \"Seychelles time\", but perhaps someone\nwho has been there can speak up?\n\n - Thomas\n",
"msg_date": "Thu, 21 Sep 2000 15:20:59 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timezone discrepancies"
}
] |
[
{
"msg_contents": "Here's what I thought to be an odd result from the 7.0beta3 parser...\n\ncreate table foo (id serial, h_count integer);\ninsert into foo (h_count) values (10);\ncreate table temp_foo as select * from foo; \ndrop table foo;\ndrop sequence foo_id_seq;\ncreate table foo (id serial, h_count integer);\ninsert into foo (id, h_count) select t.id, t.count from temp_foo t;\n\nERROR: Attribute t.id must be GROUPed or used in an aggregate function\n\nI mislabeled the 't.h_count' column in my INSERT statement as 't.count',\nand what I found strange was that the parser evidently thinks t.count is\nan aggregate. Is 't.count' valid use/syntax for an aggregate?\n\nRegards,\nEd Loehr\n",
"msg_date": "Thu, 25 May 2000 10:40:19 -0500",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "parser oddity (t.count)"
},
{
"msg_contents": "Ed Loehr <[email protected]> writes:\n> insert into foo (id, h_count) select t.id, t.count from temp_foo t;\n\n> ERROR: Attribute t.id must be GROUPed or used in an aggregate function\n\n> I mislabeled the 't.h_count' column in my INSERT statement as 't.count',\n> and what I found strange was that the parser evidently thinks t.count is\n> an aggregate. Is 't.count' valid use/syntax for an aggregate?\n\nHmm. Due to some ancient Postquel features that you probably don't want\nto hear about, foo.bar and bar(foo) are considered near-equivalent\nnotations by the parser. It looks like when it couldn't find 'count' as\na field name, it tried and succeeded to interpret it as a function call\ninstead.\n\n(A contributing problem here is that the parser is absolutely lax about\nwhat it will take as the argument of count(). IMHO you should have\ngotten something like \"Unable to select an aggregate function\ncount(unknown)\", which might have been a little less confusing.)\n\nIt works in the other direction too: field(foo) will be interpreted as\nfoo.field if foo has a column named field.\n\nThis equivalence can be pretty confusing if you don't know about it, but\nI'm hesitant to suggest ripping it out because of the risk of breaking\nold applications. Anyone have strong opinions one way or the other?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 May 2000 12:28:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: parser oddity (t.count) "
},
{
"msg_contents": "\n> to hear about, foo.bar and bar(foo) are considered near-equivalent\n> notations by the parser. It looks like when it couldn't find 'count' as\n> a field name, it tried and succeeded to interpret it as a function call\n> instead.\n> \n\n> It works in the other direction too: field(foo) will be interpreted as\n> foo.field if foo has a column named field.\n> \n> This equivalence can be pretty confusing if you don't know about it, but\n> I'm hesitant to suggest ripping it out because of the risk of breaking\n> old applications. Anyone have strong opinions one way or the other?\n\nThis feature is sacrosanct for me, if you ripp it, you take away the\nfeature to add calculated columns to tables.\n\nThe important part for me, is that foo.calcit calls the function calcit(foo).\n\nAndreas\n",
"msg_date": "Fri, 26 May 2000 08:27:40 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: parser oddity (t.count)"
}
] |
[
{
"msg_contents": "\n> The select:\n> 0.75 MySQL-MyISAM\n> 0.77 MySQL-BDB\n> 2.43 PostgreSQL\n> \n> I'll do a \"real\" test once the BDB support in MySQL is stable \n> enough to run the MySQL benchmark suite.\n\nIt is the sequential scan timings that we would be very interested in.\n\ncreate table foo (a int not null,b char(100));\ncreate index foo_a on foo(a);\nfor(i=0; i<10000; i++) {\n insert into foo(a,b) values( `((i*3467)%10000)` , 'fusli');\n}\n\ntime this:\nselect count(*) from foo where b<>'not there';\n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 17:42:50 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Berkeley DB..."
}
] |
[
{
"msg_contents": "\n> ... that being said (and I took a quick test with 10000 \n> randomly-inserted\n> records and fetched them in index order) if the data's in the \n> cache, the\n> speed difference is insignificant. \n\nAs long as everything fits into the system cache and is \nalready in there, this test is moot.\n\n> I did this:\n> \n> create table foo (a int not null,b char(100));\n> create index foo_a on foo(a);\n> for(i=0; i<10000; i++) {\n> insert into foo(a,b) values( `((i*3467)%10000)` , 'fusli');\n> }\n\nhere you need to reboot the machine or make sure nothing is cached. \nthen time the following and make sure it uses the index afterwards.\n\n> select a from foo order by a; \n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 17:47:52 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Berkeley DB..."
}
] |
[
{
"msg_contents": " \n> > Here I must slightly disagree, if the impact of vacuum \n> > without analyze is so bad, then analyze should be the\n> > default for vacuum.\n> \n> *Must not* - it takes long time. Imho, there should be\n> ANALYZE command, with ACCESS SHARE lock...\n\nYes, I agree that reducing the default selectivity estimator\nis the much better solution.\n\nAndreas \n",
"msg_date": "Thu, 25 May 2000 17:51:13 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: More Performance "
}
] |
[
{
"msg_contents": "I know this topic has been rehashed a million times, but I just wanted to\nadd one datapoint. I have a database (150 tables, less than 20K tuples \nin any one table) which I 'vacuum analyze'\u001b*HOURLY*, blocking all access,\nand I still see frequent situations where my query times bloat by roughly\n300% (4 times slower) in the intervening time between vacuums. All this \nis to say that I think a more strategic implementation of the \nfunctionality of vacuum analyze (specifically, non-batched, automated,\non-the-fly vacuuming/analyzing) would be a major \"value add\". I haven't \neducated myself as to the history of it, but I do wonder why the \nperformance focus is not on this. I'd imagine it would be a performance \nhit (which argues for making it optional), but I'd gladly take a 10% \nperformance hit over the current highly undesireable degradation. You \ncould do a whole lotta optimization on the planner/parser/executor and\nnot get close to the end-user-perceptible gains from fixing this\nproblem...\n\nRegards,\nEd Loehr\n",
"msg_date": "Thu, 25 May 2000 10:51:23 -0500",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum analyze feedback"
},
{
"msg_contents": "> I know this topic has been rehashed a million times, but I just wanted to\n> add one datapoint. I have a database (150 tables, less than 20K tuples \n> in any one table) which I 'vacuum analyze'\u001b*HOURLY*, blocking all access,\n> and I still see frequent situations where my query times bloat by roughly\n> 300% (4 times slower) in the intervening time between vacuums. All this \n> is to say that I think a more strategic implementation of the \n> functionality of vacuum analyze (specifically, non-batched, automated,\n> on-the-fly vacuuming/analyzing) would be a major \"value add\". I haven't \n> educated myself as to the history of it, but I do wonder why the \n> performance focus is not on this. I'd imagine it would be a performance \n> hit (which argues for making it optional), but I'd gladly take a 10% \n> performance hit over the current highly undesireable degradation. You \n> could do a whole lotta optimization on the planner/parser/executor and\n> not get close to the end-user-perceptible gains from fixing this\n> problem...\n> \n\nVadim is planning over-write storage manager in 7.2 which will allow\nexpired tuples to be reunsed without vacuum.\n\nOr is the ANALYZE the issue for you? You need hourly statistics?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 12:11:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum analyze feedback"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > I know this topic has been rehashed a million times, but I just wanted to\n> > add one datapoint. I have a database (150 tables, less than 20K tuples\n> > in any one table) which I 'vacuum analyze'\u001b*HOURLY*, blocking all access,\n> > and I still see frequent situations where my query times bloat by roughly\n> > 300% (4 times slower) in the intervening time between vacuums. All this\n> > is to say that I think a more strategic implementation of the\n> > functionality of vacuum analyze (specifically, non-batched, automated,\n> > on-the-fly vacuuming/analyzing) would be a major \"value add\". I haven't\n> > educated myself as to the history of it, but I do wonder why the\n> > performance focus is not on this. I'd imagine it would be a performance\n> > hit (which argues for making it optional), but I'd gladly take a 10%\n> > performance hit over the current highly undesireable degradation. You\n> > could do a whole lotta optimization on the planner/parser/executor and\n> > not get close to the end-user-perceptible gains from fixing this\n> > problem...\n> \n> Vadim is planning over-write storage manager in 7.2 which will allow\n> expired tuples to be reunsed without vacuum.\n\nSorry, I missed that in prior threads...that would be good.\n\n> Or is the ANALYZE the issue for you? \n\nBoth, actually. More specifically, blocking end-user access during\nvacuum, and degraded end-user performance as pg_statistics diverge from\nreality. Both are losses of service from the system.\n\n> You need hourly statistics?\n\nMy unstated point was that hourly stats have turned out *not* to be\nnearly good enough in my case. Better would be if the system was smart\nenough to recognize when the outcome of a query/plan was sufficiently\ndivergent from statistics to warrant a system-initiated analyze (or\nwhatever form it would take). 
I'll probably end up doing this detection\nfrom the app/client side, but that's not the right place for it, IMO.\n\nRegards,\nEd Loehr\n",
"msg_date": "Thu, 25 May 2000 14:24:01 -0500",
"msg_from": "Ed Loehr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum analyze feedback"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > > I know this topic has been rehashed a million times, but I just wanted to\n> > > add one datapoint. I have a database (150 tables, less than 20K tuples\n> > > in any one table) which I 'vacuum analyze'\u001b*HOURLY*, blocking all access,\n> > > and I still see frequent situations where my query times bloat by roughly\n> > > 300% (4 times slower) in the intervening time between vacuums. All this\n> > > is to say that I think a more strategic implementation of the\n> > > functionality of vacuum analyze (specifically, non-batched, automated,\n> > > on-the-fly vacuuming/analyzing) would be a major \"value add\". I haven't\n> > > educated myself as to the history of it, but I do wonder why the\n> > > performance focus is not on this. I'd imagine it would be a performance\n> > > hit (which argues for making it optional), but I'd gladly take a 10%\n> > > performance hit over the current highly undesireable degradation. You\n> > > could do a whole lotta optimization on the planner/parser/executor and\n> > > not get close to the end-user-perceptible gains from fixing this\n> > > problem...\n> > \n> > Vadim is planning over-write storage manager in 7.2 which will allow\n> > expired tuples to be reunsed without vacuum.\n> \n> Sorry, I missed that in prior threads...that would be good.\n> \n> > Or is the ANALYZE the issue for you? \n> \n> Both, actually. More specifically, blocking end-user access during\n> vacuum, and degraded end-user performance as pg_statistics diverge from\n> reality. Both are losses of service from the system.\n> \n> > You need hourly statistics?\n> \n> My unstated point was that hourly stats have turned out *not* to be\n> nearly good enough in my case. Better would be if the system was smart\n> enough to recognize when the outcome of a query/plan was sufficiently\n> divergent from statistics to warrant a system-initiated analyze (or\n> whatever form it would take). 
I'll probably end up doing this detection\n> from the app/client side, but that's not the right place for it, IMO.\n\nYes, I think eventually, we need to feed information about actual query\nresults back into the optimizer for use in later queries.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 15:54:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum analyze feedback"
},
{
"msg_contents": "> At 15:54 25/05/00 -0400, Bruce Momjian wrote:\n> >\n> >Yes, I think eventually, we need to feed information about actual query\n> >results back into the optimizer for use in later queries.\n> >\n> \n> You could be a little more ambituous and do what Dec/Rdb does - use the\n> results of current query execution to (possibly) cause a change in the\n> current strategy.\n> \n\nyes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 23:31:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum analyze feedback"
},
{
"msg_contents": "At 15:54 25/05/00 -0400, Bruce Momjian wrote:\n>\n>Yes, I think eventually, we need to feed information about actual query\n>results back into the optimizer for use in later queries.\n>\n\nYou could be a little more ambituous and do what Dec/Rdb does - use the\nresults of current query execution to (possibly) cause a change in the\ncurrent strategy.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 26 May 2000 13:31:33 +1000",
"msg_from": "Philip Warner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum analyze feedback"
}
] |
[
{
"msg_contents": "\n> > Wow, that sounds darn slow. Speed of a seq scan on one CPU, \n> > one disk should give you more like 19000 rows/s with a \n> small record size.\n> > Of course you are probably talking about random fetch order here,\n> > but we need fast seq scans too.\n> \n> The test was random reads on a 250GB database. I don't have a\n> similar characterization for sequential scans off the top of my\n> head.\n\nYes, for random access this timing sounds better. Was that timing taken with\n\naccess through a secondary index or through the recnum ?\nDid you make sure that nothing was cached, not even the recnum index ?\n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 17:56:15 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: Berkeley DB..."
}
] |
[
{
"msg_contents": "\n> > I am not talking about select * I am talking about\n> > \"select somefunc(supertable) from supertable\"\n> > \n> > create table supertable (a int);\n> > create table taba (b int) under supertable;\n> > \n> > create function somefunc (tup supertable) returning int\n> > as 'select 1' ...\n> > \n> > create function somefunc (tup taba) returning int\n> > as 'select 0.5*b' ....\n> \n> So how does this work in Informix/Illustra ?\n> \n> i.e. is the binding done at row evaluation time or \n> \"when they do 'select * ...' and don't know about coumn b\"\n\nwhen you do \"select * from supertable\" you only get column a,\nbut rows from both tables.\nwhen you do select somefunc(supertable) ... the function \ncorresponding to the rowtype is called thus the taba rows \ndo get 0.5*b as result.\n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 18:02:06 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: SQL3 UNDER"
}
] |
[
{
"msg_contents": "> Times for the insert loop:\n> 14 MySQL-MyISAM\n> 23 PostgreSQL (no fsync)\n> 53 MySQL-BDB (with fsync -- don't know how to turn it off yet)\n\nPostgreSQL 6.5.3; -B 16384; SUN Ultra10 with some IDE disk.\n\n1. with fsync, all inserts in single transaction: 73 sec\n2. with fsync, use COPY: 3 sec\n3. without fsync, use COPY: 3 sec\n4. without fsync, all inserts in single transaction: 71 sec\n5. without fsync, each insert in own transaction: 150 sec\n\nDo you see difference for INSERT/COPY in 1./2.? Shouldn't we try\nto speed up our PARSER/PLANNER, keeping in mind that WAL will speed\nup our storage sub-system?!\n\nAlso, 4. & 5. show that transaction begin/commit take too long time.\nCould you run your test for all inserts in single transaction?\n\n(If we want to test storage sub-system, let's do our test un-affected\nby other ones...)\n\n> The select:\n> 0.75 MySQL-MyISAM\n> 0.77 MySQL-BDB\n> 2.43 PostgreSQL\n\nselect a from foo order by a\n\ndidn't use index in my case, so I've run\n\nselect a from foo where a >= 0 also.\n\n1. ORDER: 0.74\n2. A >= 0 with index: 0.73\n3. A >= 0 without index: 0.56\n\nNote that I used -B 16384 (very big pool) and run queries *twice* to get\nall data into pool. What size of pool did you use? 64 (default) * 8192 =\n512Kb,\nbut size of foo is 1.5Mb...\n\n2. & 3. show that index slows data retrieval... as it should -:)\nAlso, does MySQL read table itself if it can get all required\ncolumns from index?! I mean - did your query really read *both*\nindex and *table*? \nPostgreSQL has to read table anyway...\n\nVadim\n",
"msg_date": "Thu, 25 May 2000 09:09:26 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Berkeley DB..."
},
{
"msg_contents": "Hi,\n\nMikheev, Vadim:\n> Also, does MySQL read table itself if it can get all required\n> columns from index?! I mean - did your query really read *both*\n> index and *table*? \n\nYes, and yes.\n\nNote that this \"benchmark\" was much too quick-and-dirty and didn't\nreally say anything conclusive... we'll have to wait a bit for that.\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nThe best way to preserve a right is to exercise it,\nand the right to smoke is a right worth dying for.\n",
"msg_date": "Fri, 26 May 2000 04:09:06 +0200",
"msg_from": "\"Matthias Urlichs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "Matthias Urlichs wrote:\n> \n> Hi,\n> \n> Mikheev, Vadim:\n> > Also, does MySQL read table itself if it can get all required\n> > columns from index?! I mean - did your query really read *both*\n> > index and *table*?\n> \n> Yes, and yes.\n> \n> Note that this \"benchmark\" was much too quick-and-dirty and didn't\n> really say anything conclusive... we'll have to wait a bit for that.\n> \n> --\n> Matthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\n> The quote was selected randomly. Really. | http://smurf.noris.de/\n> --\n\nAlthough I am a PostgreSQL zealot, I have to admit that many\nPostgreSQL users have hidden behind the use of transactions in\njustifying the sometimes 2 - 3 times slower execution speeds in\nDML statements vs. MySQL. As Vadim points out in his comparison\nof COPY vs. INSERT, something is *wrong* with the time it takes\nfor PostgreSQL to parse, plan, rewrite, and optimize. Now that\nMySQL has transactions through Berkley DB, I think its going to\nbe harder to justify the pre-executor execution times. \n\nJust my two cents, \n\nMike Mascari\n",
"msg_date": "Fri, 26 May 2000 02:12:32 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "Hi,\n\nMike Mascari:\n> DML statements vs. MySQL. As Vadim points out in his comparison\n> of COPY vs. INSERT, something is *wrong* with the time it takes\n> for PostgreSQL to parse, plan, rewrite, and optimize.\n\nIn my throughly unscientific opinion, the problem may well be the fact\nthat PostgreSQL recurses the whole process, i.e. it is looking up\nattributes of one table in a bunch of other tables.\n\nMySQL, by contrast, has three files per table -- one with the data,\none with _all_ the indices, and one .frm file with all the other\nmetadata you would ever want to know about a table.\n\nThat metadata file is mapped into shared memory space by the first task\nthat opens a table, and it stays there. The data and index files also\nstay open until they're flushed.\n\nSince MySQL is multithreaded, opening a new connection is extremely\ncheap. By contrast, PostgreSQL does more than 30 open() calls when I\nconnect to it.(*) It's still lots faster than some other databases I might\nmention, though...\n\nAccess control is done by a bunch of tables in the \"mysql\" database,\nbut these are 100% cached.\n\nOne nice side effect of this is that it's very easy to access tables\nfrom another database. 
Just say \"select * from foo.bar\".\n\n\n(*) The list:\n/data//pg_options\n/etc/passwd\n/etc/group\n/data//PG_VERSION\n/data//pg_database\n/data//base/test/PG_VERSION\n/data//base/test/pg_internal.init\n/data//pg_log\n/data//pg_variable\n/data//base/test/pg_class\n/data//base/test/pg_class_relname_index\n/data//base/test/pg_attribute\n/data//base/test/pg_attribute_relid_attnum_index\n/data//base/test/pg_trigger\n/data//base/test/pg_am\n/data//base/test/pg_index\n/data//base/test/pg_amproc\n/data//base/test/pg_amop\n/data//base/test/pg_operator\n/data//base/test/pg_index_indexrelid_index\n/data//base/test/pg_operator_oid_index\n/data//base/test/pg_index_indexrelid_index\n/data//base/test/pg_trigger_tgrelid_index\n/data//pg_shadow\n/data//pg_database\n/data//base/test/pg_proc\n/data//base/test/pg_proc_proname_narg_type_index\n/data//base/test/pg_type\n/data//base/test/pg_type_oid_index\n/data//base/test/pg_proc_oid_index\n/data//base/test/pg_rewrite\n/data//base/test/pg_user\n/data//base/test/pg_attribute_relid_attnam_index\n/data//base/test/pg_operator_oprname_l_r_k_index\n/data//base/test/pg_class_oid_index\n/data//base/test/pg_statistic\n/data//base/test/pg_statistic_relid_att_index\n\n-- \nMatthias Urlichs | noris network GmbH | [email protected] | ICQ: 20193661\nThe quote was selected randomly. Really. | http://smurf.noris.de/\n-- \nMan with hand in pocket feel cocky all day.\n\t\t-- Confucius\n",
"msg_date": "Fri, 26 May 2000 10:16:31 +0200",
"msg_from": "\"Matthias Urlichs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "Mike Mascari wrote:\n> \n> \n> Although I am a PostgreSQL zealot, I have to admit that many\n> PostgreSQL users have hidden behind the use of transactions in\n> justifying the sometimes 2 - 3 times slower execution speeds in\n> DML statements vs. MySQL. As Vadim points out in his comparison\n> of COPY vs. INSERT, something is *wrong* with the time it takes\n> for PostgreSQL to parse, plan, rewrite, and optimize. Now that\n> MySQL has transactions through Berkley DB, I think its going to\n> be harder to justify the pre-executor execution times.\n\nWe can always justify it by referring to extensibility of postgres,\nwhich is surely part of the story\n\nSure we will be able to do cacheing to improve speed of \nserial inserts.\n \n> Just my two cents,\n> \n> Mike Mascari\n",
"msg_date": "Fri, 26 May 2000 14:09:00 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "Hannu Krosing <[email protected]> writes:\n>> As Vadim points out in his comparison\n>> of COPY vs. INSERT, something is *wrong* with the time it takes\n>> for PostgreSQL to parse, plan, rewrite, and optimize.\n\nWe might have part of the story in the recently noticed fact that\neach insert/update query begins by doing a seqscan of pg_index.\n\nI have done profiles of INSERT in the past and not found any really\nspectacular bottlenecks (but I was looking at a test table with no\nindexes, so I failed to see the pg_index problem :-(). Last time\nI did it, I had these top profile entries for inserting 100,000 rows\nof 30 columns apiece:\n\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 30.08 290.79 290.79 _mcount\n 6.48 353.46 62.67 30702766 0.00 0.00 AllocSetAlloc\n 5.27 404.36 50.90 205660 0.25 0.25 write\n 3.06 433.97 29.61 30702765 0.00 0.00 MemoryContextAlloc\n 2.74 460.45 26.48 100001 0.26 0.74 yyparse\n 2.63 485.86 25.41 24300077 0.00 0.00 newNode\n 2.22 507.33 21.47 3900054 0.01 0.01 yylex\n 1.63 523.04 15.71 30500751 0.00 0.00 PortalHeapMemoryAlloc\n 1.31 535.68 12.64 5419526 0.00 0.00 hash_search\n 1.18 547.11 11.43 9900000 0.00 0.00 expression_tree_walker\n 1.01 556.90 9.79 3526752 0.00 0.00 SpinRelease\n\nWhile the time spent in memory allocation is annoying, that's only about\nten mallocs per parsed data expression, so it's unlikely that we will be\nable to improve on it very much. (We could maybe avoid having *three*\nlevels of subroutine call to do an alloc, though ;-).) Unless you are\nsmarter than the flex and bison guys you are not going to be able to\nimprove on the lex/parse times either. The planner isn't even showing\nup for a simple INSERT. 
Not much left, unless you can figure out how\nto write and commit a tuple with less than two disk writes.\n\nBut, as I said, this was a case with no indexes to update.\n\nI intend to do something about caching pg_index info ASAP in the 7.1\ncycle, and then we can see how much of a difference that makes...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 May 2000 11:39:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB... "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n> >> As Vadim points out in his comparison\n> >> of COPY vs. INSERT, something is *wrong* with the time it takes\n> >> for PostgreSQL to parse, plan, rewrite, and optimize.\n> \n> We might have part of the story in the recently noticed fact that\n> each insert/update query begins by doing a seqscan of pg_index.\n> \n> I have done profiles of INSERT in the past and not found any really\n> spectacular bottlenecks (but I was looking at a test table with no\n> indexes, so I failed to see the pg_index problem :-(). Last time\n> I did it, I had these top profile entries for inserting 100,000 rows\n> of 30 columns apiece:\n> \n> % cumulative self self total\n> time seconds seconds calls ms/call ms/call name\n> 30.08 290.79 290.79 _mcount\n> 6.48 353.46 62.67 30702766 0.00 0.00 AllocSetAlloc\n> 5.27 404.36 50.90 205660 0.25 0.25 write\n> 3.06 433.97 29.61 30702765 0.00 0.00 MemoryContextAlloc\n> 2.74 460.45 26.48 100001 0.26 0.74 yyparse\n> 2.63 485.86 25.41 24300077 0.00 0.00 newNode\n> 2.22 507.33 21.47 3900054 0.01 0.01 yylex\n> 1.63 523.04 15.71 30500751 0.00 0.00 PortalHeapMemoryAlloc\n> 1.31 535.68 12.64 5419526 0.00 0.00 hash_search\n> 1.18 547.11 11.43 9900000 0.00 0.00 expression_tree_walker\n> 1.01 556.90 9.79 3526752 0.00 0.00 SpinRelease\n> \n> While the time spent in memory allocation is annoying, that's only about\n> ten mallocs per parsed data expression, so it's unlikely that we will be\n> able to improve on it very much. (We could maybe avoid having *three*\n> levels of subroutine call to do an alloc, though ;-).) Unless you are\n> smarter than the flex and bison guys you are not going to be able to\n> improve on the lex/parse times either. The planner isn't even showing\n> up for a simple INSERT. 
Not much left, unless you can figure out how\n> to write and commit a tuple with less than two disk writes.\n> \n> But, as I said, this was a case with no indexes to update.\n> \n> I intend to do something about caching pg_index info ASAP in the 7.1\n> cycle, and then we can see how much of a difference that makes...\n> \n> regards, tom lane\n\nIt will be interesting to see the speed differences between the\n100,000 inserts above and those which have been PREPARE'd using\nKarel Zak's PREPARE patch. Perhaps a generic query cache could be\nused to skip the parsing/planning/optimizing stage when multiple\nexact queries are submitted to the database? I suppose the cached\nplans could then be discarded whenever a DDL statement or a\nVACUUM ANALYZE is executed? The old Berkeley Postgres docs spoke\nabout cached query plans *and* results (as well as 64-bit oids,\namongst other things). I'm looking forward to when the 7.1 branch\noccurs... :-)\n\nMike Mascari\n",
"msg_date": "Fri, 26 May 2000 14:48:22 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "> > % cumulative self self total\n> > time seconds seconds calls ms/call ms/call name\n> > 30.08 290.79 290.79 _mcount\n> > 6.48 353.46 62.67 30702766 0.00 0.00 AllocSetAlloc\n> > 5.27 404.36 50.90 205660 0.25 0.25 write\n> > 3.06 433.97 29.61 30702765 0.00 0.00 MemoryContextAlloc\n> > 2.74 460.45 26.48 100001 0.26 0.74 yyparse\n> > 2.63 485.86 25.41 24300077 0.00 0.00 newNode\n> > 2.22 507.33 21.47 3900054 0.01 0.01 yylex\n> > 1.63 523.04 15.71 30500751 0.00 0.00 PortalHeapMemoryAlloc\n> > 1.31 535.68 12.64 5419526 0.00 0.00 hash_search\n> > 1.18 547.11 11.43 9900000 0.00 0.00 expression_tree_walker\n> > 1.01 556.90 9.79 3526752 0.00 0.00 SpinRelease\n> > \n> > While the time spent in memory allocation is annoying, that's only about\n> > ten mallocs per parsed data expression, so it's unlikely that we will be\n> > able to improve on it very much. (We could maybe avoid having *three*\n> > levels of subroutine call to do an alloc, though ;-).) Unless you are\n> > smarter than the flex and bison guys you are not going to be able to\n> > improve on the lex/parse times either. The planner isn't even showing\n> > up for a simple INSERT. Not much left, unless you can figure out how\n> > to write and commit a tuple with less than two disk writes.\n\n> It will be interesting to see the speed differences between the\n> 100,000 inserts above and those which have been PREPARE'd using\n> Karel Zak's PREPARE patch\n\nIf we believe the above output, the win won't be very noticeable. It is the \nwrites (and the Allocs) we have to get rid of.\nThe above is much faster if you do:\nbegin work;\n100000 inserts ....;\ncommit work;\n\nAndreas\n\n",
"msg_date": "Fri, 26 May 2000 21:43:20 +0200",
"msg_from": "\"Zeugswetter Andreas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
},
{
"msg_contents": "Tom Lane writes:\n\n> I have done profiles of INSERT in the past and not found any really\n> spectacular bottlenecks\n\nI am still at a loss on how to make profiles. The latest thing that\nhappened to me is that the postmaster gave me a `Profiling timer expired'\nmessage and never started up. Any idea?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Mon, 29 May 2000 00:18:01 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB... "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I am still at a loss on how to make profiles. The latest thing that\n> happened to me is that the postmaster gave me a `Profiling timer expired'\n> message and never started up. Any idea?\n\nDunno ... PROFILE=-pg works for me ...\n\nNormally there's a special startup file that the compiler is supposed to\nknow to link instead of the usual crt0.o, when you link with -pg.\nPossibly there's something wrong with yours?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 May 2000 22:15:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB... "
},
{
"msg_contents": "\n> It will be interesting to see the speed differences between the\n> 100,000 inserts above and those which have been PREPARE'd using\n> Karel Zak's PREPARE patch. Perhaps a generic query cache could be\n\nMy test:\n\n\tpostmaster:\t-F -B 2000\t\n\trows:\t\t100,000 \n\ttable:\t\tcreate table (data text);\n\tdata:\t\t37B for eache line\n\t\n\t--- all is in one transaction\n\n\tnative insert:\t\t66.522s\n\tprepared insert:\t59.431s\t - 11% faster\t\n\n \nIMHO parsing/optimizing is relative easy for a simple INSERT.\nThe query (plan) cache will probably save time for complicated SELECTs \nwith functions ...etc. (like query that for parsing need look at to system\ntables). For example:\n\n\tinsert into tab values ('some data' || 'somedata' || 'some data');\n\n\tnative insert:\t\t91.787s\n\tprepared insert:\t45.077s - 50% faster\n\n\t(Note: This second test was faster, because I stop X-server and\n\tpostgres had more memory :-)\n\n\n The best way for large and simple data inserting is (forever) COPY, not\nexist faster way. \n\n pg's path(s) of query:\n \n native insert:\t\tparser -> planner -> executor -> storage\n prepared insert:\tparser (for execute stmt) -> executor -> storage\n copy:\t\t\tutils (copy) -> storage\n\n> amongst other things). I'm looking forward to when the 7.1 branch\n> occurs... :-)\n\n I too.\n\n\t\t\t\t\t\t\tKarel\n\n",
"msg_date": "Mon, 29 May 2000 16:57:04 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Berkeley DB..."
}
] |
[
{
"msg_contents": "\nHi,\n\nI want to connect MS Access to Postgresql on Linux. I have downloaded an\nODBC driver for Windows. How do I get MS Access to talk to postgres ? Is\nthere any HOWTOs on this ? I Know Linux very well, MS Access a little, SQL\nnot really but learning...hopyfully :-)\nThanks\n\n",
"msg_date": "Thu, 25 May 2000 17:18:14 +0100",
"msg_from": "\"Mark R\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "None"
},
{
"msg_contents": "Mark, if you install the ODBC driver, you should be able to set up Postgres as\na data source in ODBC. Then create a new Access database, and either link\nto/import from the Postgres data source. The former gives you a dynamic\ndatabase, not updatable (unless you uncheck \"read only\" in the advanced ODBC\nsettings) - the latter gives you a snapshot that you can edit without fear of\nmessing up data.\n\nRegards,\nNed\n\n\nMark R wrote:\n\n> Hi,\n>\n> I want to connect MS Access to Postgresql on Linux. I have downloaded an\n> ODBC driver for Windows. How do I get MS Access to talk to postgres ? Is\n> there any HOWTOs on this ? I Know Linux very well, MS Access a little, SQL\n> not really but learning...hopyfully :-)\n> Thanks\n\n",
"msg_date": "Thu, 25 May 2000 13:36:47 -0400",
"msg_from": "Ned Lilly <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "What happens if people issue create user or create database outside\ntemplate1. Do we need to prevent it? Seems they work, but am not sure\nit is OK. Do we need to add a check?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 12:18:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Create user/create database outside template1"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> What happens if people issue create user or create database outside\n> template1. Do we need to prevent it? Seems they work, but am not sure\n> it is OK. Do we need to add a check?\n\nWorks fine, no, and no. The reason the scripts like to connect to\ntemplate1 is that it's the only database they can be sure is there.\nBut the commands themselves don't care where you're connected.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 May 2000 12:40:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create user/create database outside template1 "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > What happens if people issue create user or create database outside\n> > template1. Do we need to prevent it? Seems they work, but am not sure\n> > it is OK. Do we need to add a check?\n> \n> Works fine, no, and no. The reason the scripts like to connect to\n> template1 is that it's the only database they can be sure is there.\n> But the commands themselves don't care where you're connected.\n\nWell, thanks. I was going over that for my book. If I do createdb from\nanother database, I guess I get a copy of that database, right? \nInteresting feature that I am sure few people realize.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 12:42:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Create user/create database outside template1"
},
{
"msg_contents": "On Thu, 25 May 2000, Bruce Momjian wrote:\n\n> > Bruce Momjian <[email protected]> writes:\n> > > What happens if people issue create user or create database outside\n> > > template1. Do we need to prevent it? Seems they work, but am not sure\n> > > it is OK. Do we need to add a check?\n> > \n> > Works fine, no, and no. The reason the scripts like to connect to\n> > template1 is that it's the only database they can be sure is there.\n> > But the commands themselves don't care where you're connected.\n> \n> Well, thanks. I was going over that for my book. If I do createdb from\n> another database, I guess I get a copy of that database, right? \n> Interesting feature that I am sure few people realize.\n\nSay what? That might explain a few things.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 25 May 2000 13:05:03 -0400 (EDT)",
"msg_from": "Vince Vielhaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create user/create database outside template1"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Well, thanks. I was going over that for my book. If I do createdb from\n> another database, I guess I get a copy of that database, right? \n\nNo, it's always a copy of template1. CREATE DATABASE really doesn't\ncare which database you're connected to (nor does CREATE USER).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 May 2000 13:05:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create user/create database outside template1 "
},
{
"msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Well, thanks. I was going over that for my book. If I do createdb from\n> > another database, I guess I get a copy of that database, right? \n> \n> No, it's always a copy of template1. CREATE DATABASE really doesn't\n> care which database you're connected to (nor does CREATE USER).\n\nYes, I see that. Interesting.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 13:05:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Create user/create database outside template1"
}
] |
[
{
"msg_contents": "\n> Zeugswetter Andreas SB <[email protected]> writes:\n> > But, it probably shows a problem with the chosen metric for\n> > selectivity itself. Imho the chances are better, that an =\n> > restriction will return an equal amount of rows while the \n> table grows\n> > than that it will return a percentage of total table size.\n> \n> Unfortunately you are allowing your thinking to be driven by a single\n> example. Consider queries like\n> \tselect * from employees where dept = 'accounting'; \n> It's perfectly possible that the column being tested with '=' has only\n> a small number of distinct values, in which case the number \n> of retrieved\n> rows probably *is* proportional to the table size.\n> \n> I am not willing to change the planner so that it \n> \"guarantees\" to choose\n> an indexscan no matter what, because then it would be broken for cases\n> like this. We have to look at the statistics we have, \n> inadequate though\n> they are.\n\nYes, this would not be good. But imho it would be good to force the index\nif we lack disbursion information (no analyze), but have tabsize and index\nsize\ninfo, and index size is small, since as vadim said analyze is very time\nconsuming.\n\nActually could index size compared with colsize*rowcount be an indicator\nfor disbursion ? At least for fixed length columns ?\nbig index --> very unique\nsmall index --> many duplicates\n\nAndreas\n",
"msg_date": "Thu, 25 May 2000 18:24:57 +0200",
"msg_from": "Zeugswetter Andreas SB <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: AW: AW: More Performance "
}
] |
[
{
"msg_contents": "> > ... that being said (and I took a quick test with 10000 \n> > randomly-inserted\n> > records and fetched them in index order) if the data's in the \n> > cache, the\n> > speed difference is insignificant. \n> \n> As long as everything fits into the system cache and is \n> already in there, this test is moot.\n\nOh, if we want to test how cache is affected by index-nature of\nMySQL-BDB then 10000 rows table is toooo small. It adds just\n2 levels of internal pages (including root) and ~25 8k pages\nto ~ 190 pages of true heap table: additional pages will be\ncached very fast, just while fetching first 25 rows from table -:) \nNow create 10000000 rows table (~ 25000 additional pages, 3 internal\nlevels) and fetch random 10000 rows...\n\n...And again - does MySQL-BDB really read *table* for query like\nselect a from foo order by a? I remember that MySQL is smart to\nread only index when possible...\n\nVadim\n",
"msg_date": "Thu, 25 May 2000 10:14:41 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Berkeley DB..."
}
] |
[
{
"msg_contents": "Seems a typical file system backup is fine on an idle database, right?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 18:28:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Seems a typical file system backup is fine on an idle database, right?\n\nA file system backup will only work on a set of databases, won't it?\n\nCheers,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: [email protected]\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n",
"msg_date": "Fri, 26 May 2000 10:36:05 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "On Thu, 25 May 2000, Bruce Momjian wrote:\n\n> Seems a typical file system backup is fine on an idle database, right?\n\nhow do you know its idle, and/or will remain so?\n\n\n",
"msg_date": "Thu, 25 May 2000 19:40:55 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "> On Thu, 25 May 2000, Bruce Momjian wrote:\n> \n> > Seems a typical file system backup is fine on an idle database, right?\n> \n> how do you know its idle, and/or will remain so?\n\npg_ctl stop of modification of pg_hba.conf.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 19:14:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "On Thu, 25 May 2000, Bruce Momjian wrote:\n\n> > On Thu, 25 May 2000, Bruce Momjian wrote:\n> > \n> > > Seems a typical file system backup is fine on an idle database, right?\n> > \n> > how do you know its idle, and/or will remain so?\n> \n> pg_ctl stop of modification of pg_hba.conf.\n\nack ... why would you want to? *raised eyebrow*\n\n\n",
"msg_date": "Thu, 25 May 2000 20:49:50 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "> On Thu, 25 May 2000, Bruce Momjian wrote:\n> \n> > > On Thu, 25 May 2000, Bruce Momjian wrote:\n> > > \n> > > > Seems a typical file system backup is fine on an idle database, right?\n> > > \n> > > how do you know its idle, and/or will remain so?\n> > \n> > pg_ctl stop of modification of pg_hba.conf.\n> \n> ack ... why would you want to? *raised eyebrow*\n\nWell, I am not sure. In the book, I say you can use a normal file\nsystem backup if the database is idle, or use pg_dumpall and backup the\nfile it creates. In fact, once you run pg_dumpall, there is no need to\nbackup the /data directory except for the few configuration files like\npg_hba.conf. Does this make sense?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 20:04:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "On Thu, 25 May 2000, Bruce Momjian wrote:\n\n> > On Thu, 25 May 2000, Bruce Momjian wrote:\n> > \n> > > > On Thu, 25 May 2000, Bruce Momjian wrote:\n> > > > \n> > > > > Seems a typical file system backup is fine on an idle database, right?\n> > > > \n> > > > how do you know its idle, and/or will remain so?\n> > > \n> > > pg_ctl stop of modification of pg_hba.conf.\n> > \n> > ack ... why would you want to? *raised eyebrow*\n> \n> Well, I am not sure. In the book, I say you can use a normal file\n> system backup if the database is idle, or use pg_dumpall and backup the\n> file it creates. In fact, once you run pg_dumpall, there is no need to\n> backup the /data directory except for the few configuration files like\n> pg_hba.conf. Does this make sense?\n\nwhen you mean 'idle', do you mean 'read-only'? else the files in\n/data/base/* would be changing, no? \n\n\n",
"msg_date": "Thu, 25 May 2000 21:15:27 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "> > Well, I am not sure. In the book, I say you can use a normal file\n> > system backup if the database is idle, or use pg_dumpall and backup the\n> > file it creates. In fact, once you run pg_dumpall, there is no need to\n> > backup the /data directory except for the few configuration files like\n> > pg_hba.conf. Does this make sense?\n> \n> when you mean 'idle', do you mean 'read-only'? else the files in\n> /data/base/* would be changing, no? \n\nNo, like everyone has gone home and nothing is happening.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 20:23:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "On Thu, 25 May 2000, Bruce Momjian wrote:\n\n> > > Well, I am not sure. In the book, I say you can use a normal file\n> > > system backup if the database is idle, or use pg_dumpall and backup the\n> > > file it creates. In fact, once you run pg_dumpall, there is no need to\n> > > backup the /data directory except for the few configuration files like\n> > > pg_hba.conf. Does this make sense?\n> > \n> > when you mean 'idle', do you mean 'read-only'? else the files in\n> > /data/base/* would be changing, no? \n> \n> No, like everyone has gone home and nothing is happening.\n\nokay, so this is used on an IntraNet where 'schedualed downtime' is an\noption ... then doing a shutdown, tar, and startup makes sense ...\n\n\n",
"msg_date": "Thu, 25 May 2000 21:35:10 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "> On Thu, 25 May 2000, Bruce Momjian wrote:\n> \n> > > > Well, I am not sure. In the book, I say you can use a normal file\n> > > > system backup if the database is idle, or use pg_dumpall and backup the\n> > > > file it creates. In fact, once you run pg_dumpall, there is no need to\n> > > > backup the /data directory except for the few configuration files like\n> > > > pg_hba.conf. Does this make sense?\n> > > \n> > > when you mean 'idle', do you mean 'read-only'? else the files in\n> > > /data/base/* would be changing, no? \n> > \n> > No, like everyone has gone home and nothing is happening.\n> \n> okay, so this is used on an IntraNet where 'schedualed downtime' is an\n> option ... then doing a shutdown, tar, and startup makes sense ...\n\nYes, I was just trying to make it clear to them how a more simple backup\ncan happen.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 25 May 2000 20:49:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "On Fri, 26 May 2000, Bruce Momjian wrote:\n> Seems a typical file system backup is fine on an idle database, right?\n\nI think it is a good idea to backup pg_log first, then the rest.\nThen you should imho be safe even if load is heavy.\nNo vacuum until finished of course.\n\nAndreas\n",
"msg_date": "Fri, 26 May 2000 08:20:50 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "> On Fri, 26 May 2000, Bruce Momjian wrote:\n> > Seems a typical file system backup is fine on an idle database, right?\n> \n> I think it is a good idea to backup pg_log first, then the rest.\n> Then you should imho be safe even if load is heavy.\n> No vacuum until finished of course.\n\nYou know, that was always my assumption too, that doing pg_log first\nmade things safer. I am not sure if it is 100% safe, though.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 May 2000 10:23:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
},
{
"msg_contents": "> > I think it is a good idea to backup pg_log first, then the rest.\n> > Then you should imho be safe even if load is heavy.\n> > No vacuum until finished of course.\n> \n> You know, that was always my assumption too, that doing pg_log first\n> made things safer. I am not sure if it is 100% safe, though.\n\nI think there is a problem with our \"big\" pagesize of 8k.\nIf we used the system page size (usually 2 or 4k) a write with a \nconcurrent read should imho not be possible. But since we need to write\n2-4 system pages I am not so sure that that is atomic, thus we risc \nbacking up an incompletely written pg page.\n\nsounds like a nogo :-(\nAndreas\n\n",
"msg_date": "Fri, 26 May 2000 22:01:55 +0200",
"msg_from": "\"Zeugswetter Andreas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any reason to use pg_dumpall on an idle database"
}
] |
[
{
"msg_contents": "Hi,\n\nI finished making the PostgreSQL binary for WinNT yesterday.\nWould I upload it at the PostgreSQL ftp site ? If not, I will\nput it at my site.\n\n- Kevin\n\n",
"msg_date": "Fri, 26 May 2000 09:21:40 +0800",
"msg_from": "Kevin Lo <[email protected]>",
"msg_from_op": true,
"msg_subject": "[DONE] PostgreSQL-7.0 binary for WinNT"
},
{
"msg_contents": "Excellent I wouldn't mind having a look at this.\n\nRegards,\nJoe\n\nKevin Lo wrote:\n> \n> Hi,\n> \n> I finished making the PostgreSQL binary for WinNT yesterday.\n> Would I upload it at the PostgreSQL ftp site ? If not, I will\n> put it at my site.\n> \n> - Kevin\n\n-- \nJoe Shevland\nPrincipal Consultant\nKPI Logistics Pty Ltd\nhttp://www.kpi.com.au\nmailto:[email protected]\n",
"msg_date": "Fri, 26 May 2000 11:49:48 +1000",
"msg_from": "Joe Shevland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [DONE] PostgreSQL-7.0 binary for WinNT"
},
{
"msg_contents": "\nlet me know where to grab it from and I'll add it to the ftp site ...\n\n\n\nOn Fri, 26 May 2000, Kevin Lo wrote:\n\n> Hi,\n> \n> I finished making the PostgreSQL binary for WinNT yesterday.\n> Would I upload it at the PostgreSQL ftp site ? If not, I will\n> put it at my site.\n> \n> - Kevin\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Thu, 25 May 2000 22:53:55 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [DONE] PostgreSQL-7.0 binary for WinNT"
},
{
"msg_contents": "The Hermit Hacker wrote:\n\n> let me know where to grab it from and I'll add it to the ftp site ...\n\nhttp://members.tripod.com/~kevlo/binaries/postgresql-7.0-nt-binaries.tar.gz\n\n> On Fri, 26 May 2000, Kevin Lo wrote:\n>\n> > Hi,\n> >\n> > I finished making the PostgreSQL binary for WinNT yesterday.\n> > Would I upload it at the PostgreSQL ftp site ? If not, I will\n> > put it at my site.\n> >\n> > - Kevin\n> >\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n\n- Kevin\n\n",
"msg_date": "Sun, 28 May 2000 00:25:41 +0800",
"msg_from": "Kevin Lo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] [DONE] PostgreSQL-7.0 binary for WinNT"
},
{
"msg_contents": "\n\ndownloading right now ...\n\nOn Sun, 28 May 2000, Kevin Lo wrote:\n\n> The Hermit Hacker wrote:\n> \n> > let me know where to grab it from and I'll add it to the ftp site ...\n> \n> http://members.tripod.com/~kevlo/binaries/postgresql-7.0-nt-binaries.tar.gz\n> \n> > On Fri, 26 May 2000, Kevin Lo wrote:\n> >\n> > > Hi,\n> > >\n> > > I finished making the PostgreSQL binary for WinNT yesterday.\n> > > Would I upload it at the PostgreSQL ftp site ? If not, I will\n> > > put it at my site.\n> > >\n> > > - Kevin\n> > >\n> >\n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org\n> \n> - Kevin\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n",
"msg_date": "Sat, 27 May 2000 16:47:49 -0300 (ADT)",
"msg_from": "The Hermit Hacker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [DONE] PostgreSQL-7.0 binary for WinNT"
}
] |
[
{
"msg_contents": "These two queries are exactly alike. The first one uses aliases except\nfor the order by. The second uses aliases also for the order by. The\nthird uses whole names. The third has the behavior I want.\n\nSomeone please tell me what I am doing wrong. I don't want to have to\nuse whole names for my query.\n\nThe data for the tables are at the end.\n\n\nplaypen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\nta.a) from tablea ta, tableb tb order by tablea.a;\na|b|c|?column?\n-+-+-+--------\n1|2| | 0\n2|3|4| 1\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 1\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 0\n3|4|5| 1\n4|5|4| 0\n1|2| | 0\n2|3|4| 0\n3|4|5| 0\n4|5|4| 1\n1|2| | 0\n2|3|4| 0\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 1\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 1\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 0\n3|4|5| 1\n4|5|4| 0\n1|2| | 0\n2|3|4| 0\n3|4|5| 0\n4|5|4| 1\n1|2| | 0\n2|3|4| 0\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 1\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 1\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 0\n3|4|5| 1\n4|5|4| 0\n1|2| | 0\n2|3|4| 0\n3|4|5| 0\n4|5|4| 1\n1|2| | 0\n2|3|4| 0\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 1\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 1\n3|4|5| 0\n4|5|4| 0\n1|2| | 0\n2|3|4| 0\n3|4|5| 1\n4|5|4| 0\n1|2| | 0\n2|3|4| 0\n3|4|5| 0\n4|5|4| 1\n1|2| | 0\n2|3|4| 0\n3|4|5| 0\n4|5|4| 0\n(80 rows)\n\nplaypen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\nta.a) from tablea ta, tableb tb order by ta.a;\na|b|c|?column?\n-+-+-+--------\n1|2| | 0\n1|2| | 0\n1|2| | 0\n1|2| | 0\n1|2| | 0\n2|3|4| 1\n2|3|4| 1\n2|3|4| 0\n2|3|4| 0\n2|3|4| 0\n3|4|5| 0\n3|4|5| 0\n3|4|5| 1\n3|4|5| 0\n3|4|5| 0\n4|5|4| 0\n4|5|4| 0\n4|5|4| 0\n4|5|4| 1\n4|5|4| 0\n(20 rows)\n\nplaypen=> select tablea.a,tablea.b,tablea.c, (select count (tableb.zz)\nwhere tableb.yy = tablea.a) order by tablea.a;\na|b|c|?column?\n-+-+-+--------\n1|2| | 0\n2|3|4| 2\n3|4|5| 1\n4|5|4| 1\n(4 rows)\n\nplaypen=> \nplaypen=> select * from tablea;\na|b|c\n-+-+-\n1|2| \n2|3|4\n3|4|5\n4|5|4\n(4 
rows)\n\nplaypen=> select * from tableb;\nyy|zz\n--+--\n 2| 4\n 2| 5\n 3| 9\n 4|14\n 5|15\n(5 rows)\n",
"msg_date": "Thu, 25 May 2000 23:11:44 -0400",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": true,
"msg_subject": "aliases break my query"
},
{
"msg_contents": "Joseph Shraibman <[email protected]> writes:\n> These two queries are exactly alike. The first one uses aliases except\n> for the order by. The second uses aliases also for the order by. The\n> third uses whole names. The third has the behavior I want.\n\nI think you are confusing yourself by leaving out FROM clauses.\nIn particular, with no FROM for the inner SELECT it's not real clear\nwhat should happen there. I can tell you what *is* happening, but\nwho's to say if it's right or wrong?\n\n> playpen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\n> ta.a) from tablea ta, tableb tb order by tablea.a;\n[ produces 80 rows ]\n\n> playpen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\n> ta.a) from tablea ta, tableb tb order by ta.a;\n[ produces 20 rows ]\n\nThe difference between these two is that by explicitly specifying\n\"tablea\" in the order-by clause, you've created a three-way join,\nas if you had written \"from tablea ta, tableb tb, tablea tablea\".\nOnce you write an alias in a from-clause entry, you must refer to\nthat from-clause entry by its alias, not by its true table name.\n\nMeanwhile, what of the inner select? It has no FROM clause *and*\nno valid table names. The only way to interpret the names in it\nis as references to the outer select. So, on any given iteration\nof the outer select, the inner select collapses to constants.\nIt looks like \"SELECT count(constant1) WHERE constant2 = constant3\"\nand so you get either 0 or 1 depending on whether tb.yy and ta.a\nfrom the outer scan are different or equal.\n\n> playpen=> select tablea.a,tablea.b,tablea.c, (select count (tableb.zz)\n> where tableb.yy = tablea.a) order by tablea.a;\n[ produces 4 rows ]\n\nHere the outer select is not a join at all --- it mentions only tablea,\nso you are going to get one output for each tablea row. 
The inner\nselect looks like \"select count (zz) FROM tableb WHERE yy = <constant>\",\nso you get an actual scan of tableb for each iteration of the outer\nscan.\n\nIt's not very clear from these examples what you actually wanted to have\nhappen, but I suggest that you will have better luck if you specify\nexplicit FROM lists in both the inner and outer selects, and be careful\nthat each variable you use clearly refers to exactly one of the\nFROM-list entries.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 May 2000 00:35:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: aliases break my query "
},
{
"msg_contents": "> > playpen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\n> > ta.a) from tablea ta, tableb tb order by tablea.a;\n> [ produces 80 rows ]\n\n> > playpen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\n> > ta.a) from tablea ta, tableb tb order by ta.a;\n> [ produces 20 rows ]\n\n> > playpen=> select tablea.a,tablea.b,tablea.c, (select count (tableb.zz)\n> > where tableb.yy = tablea.a) order by tablea.a;\n> [ produces 4 rows ]\n\nOnce again, I think that we *really* need to discuss whether implicit\nrange table entries in SELECT are a good idea. We invariably get a\nquestion like this every week and invariably the answer is \"if you give a\ntable an alias you *must* refer to it by that alias\". (I'm sure Tom has\nthis reply automated by now.) I claim the only thing that buys is\nconfusion for very little convenience at the other end.\n\nStop the madness! :)\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Fri, 26 May 2000 15:02:53 +0200 (MET DST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: aliases break my query "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Once again, I think that we *really* need to discuss whether implicit\n> range table entries in SELECT are a good idea. We invariably get a\n> question like this every week and invariably the answer is \"if you give a\n> table an alias you *must* refer to it by that alias\". (I'm sure Tom has\n> this reply automated by now.)\n\nNo, this one was actually a pretty original way of shooting oneself in\nthe foot ;-). I thought the interesting point was the confusion between\nwhether variables in the inner select were supposed to be local to the\ninner select or references to the outer select. I'm not sure getting\nrid of implicit rangetable entries would've helped prevent that.\n\n> I claim the only thing that buys is\n> confusion for very little convenience at the other end.\n>\n> Stop the madness! :)\n\nI doubt that it's worth breaking a lot of existing applications for.\n\nAt one time Bruce had made some patches to emit informative notice\nmessages about implicit FROM entries, but that got turned off again\nfor reasons that I forget...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 May 2000 12:26:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: aliases break my query "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> > > playpen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\n> > > ta.a) from tablea ta, tableb tb order by tablea.a;\n> > [ produces 80 rows ]\n> \n> > > playpen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\n> > > ta.a) from tablea ta, tableb tb order by ta.a;\n> > [ produces 20 rows ]\n> \n> > > playpen=> select tablea.a,tablea.b,tablea.c, (select count (tableb.zz)\n> > > where tableb.yy = tablea.a) order by tablea.a;\n> > [ produces 4 rows ]\n> \n> Once again, I think that we *really* need to discuss whether implicit\n> range table entries in SELECT are a good idea.\n\nWhat is an \"implicit range table entry\"?\n\n We invariably get a\n> question like this every week and invariably the answer is \"if you give a\n> table an alias you *must* refer to it by that alias\".\n\nHey, I *did* do that in the second query, and that still produced extra\nresults. I tried putting the aliases in the inner select too but that\ndidn't help. In fact the inner select always is 4 in that case. Unless I\nonly alias tableb in the inner query, and let it get the definition of\ntablea from the outer query.\n\n\n (I'm sure Tom has\n> this reply automated by now.) I claim the only thing that buys is\n> confusion for very little convenience at the other end.\n> \n> Stop the madness! :)\n> \n> --\n> Peter Eisentraut Sernanders v�g 10:115\n> [email protected] 75262 Uppsala\n> http://yi.org/peter-e/ Sweden\n",
"msg_date": "Fri, 26 May 2000 13:47:03 -0400",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: aliases break my query"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Joseph Shraibman <[email protected]> writes:\n> > These two queries are exactly alike. The first one uses aliases except\n> > for the order by. The second uses aliases also for the order by. The\n> > third uses whole names. The third has the behavior I want.\n> \n> I think you are confusing yourself by leaving out FROM clauses.\n> In particular, with no FROM for the inner SELECT it's not real clear\n> what should happen there. I can tell you what *is* happening, but\n> who's to say if it's right or wrong?\n> \nWell I assumed that the aliases would be inerited from the outer query.\n\n> > playpen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\n> > ta.a) from tablea ta, tableb tb order by tablea.a;\n> [ produces 80 rows ]\n> \n> > playpen=> select ta.a,ta.b,ta.c, (select count (tb.zz) where tb.yy =\n> > ta.a) from tablea ta, tableb tb order by ta.a;\n> [ produces 20 rows ]\n> \n> The difference between these two is that by explicitly specifying\n> \"tablea\" in the order-by clause, you've created a three-way join,\n> as if you had written \"from tablea ta, tableb tb, tablea tablea\".\n> Once you write an alias in a from-clause entry, you must refer to\n> that from-clause entry by its alias, not by its true table name.\n\nI guess I made the mistake of assuming that SQL is logical. I don't know\nwhat I was thinking. ;)\n\n> \n> Meanwhile, what of the inner select? It has no FROM clause *and*\n> no valid table names. The only way to interpret the names in it\n> is as references to the outer select. So, on any given iteration\n> of the outer select, the inner select collapses to constants.\n> It looks like \"SELECT count(constant1) WHERE constant2 = constant3\"\n> and so you get either 0 or 1 depending on whether tb.yy and ta.a\n> from the outer scan are different or equal.\n\nOK that sorta makes sense to be. What I want is the behavior I got with\nthe third query (below). 
I want the values in table a, and then a count\nof how many entries in tableb have the yy field of tableb that matches\nthat entry in tablea's a field.\n\nplaypen=> select ta.a,ta.b,ta.c, (select count (tb.zz) from tableb tb\nwhere tb.yy = ta.a) from tablea ta, tableb tb group by ta.a, ta.b, ta.c\norder by ta.a;\na|b|c|?column?\n-+-+-+--------\n1|2| | 0\n2|3|4| 2\n3|4|5| 1\n4|5|4| 1\n(4 rows)\n\n... which is what I want. Thanks.\n\n> \n> > playpen=> select tablea.a,tablea.b,tablea.c, (select count (tableb.zz)\n> > where tableb.yy = tablea.a) order by tablea.a;\n> [ produces 4 rows ]\n> \n> Here the outer select is not a join at all --- it mentions only tablea,\n> so you are going to get one output for each tablea row. The inner\n> select looks like \"select count (zz) FROM tableb WHERE yy = <constant>\",\n> so you get an actual scan of tableb for each iteration of the outer\n> scan.\n> \n> It's not very clear from these examples what you actually wanted to have\n> happen, but I suggest that you will have better luck if you specify\n> explicit FROM lists in both the inner and outer selects, and be careful\n> that each variable you use clearly refers to exactly one of the\n> FROM-list entries.\n> \n> regards, tom lane\n",
"msg_date": "Fri, 26 May 2000 14:13:35 -0400",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: aliases break my query"
},
{
"msg_contents": "\n> > I claim the only thing that buys is\n> > confusion for very little convenience at the other end.\n> >\n> > Stop the madness! :)\n> \n> I doubt that it's worth breaking a lot of existing applications for.\n> \n> At one time Bruce had made some patches to emit informative notice\n> messages about implicit FROM entries, but that got turned off again\n> for reasons that I forget...\n\nI think we could get agreement to not allow implicit from entries \nif there is a from clause in the statement, but allow them if a from clause\nis missing altogether. The patch did not distinguish the two cases.\n\nAndreas\n\n",
"msg_date": "Fri, 26 May 2000 21:47:38 +0200",
"msg_from": "\"Zeugswetter Andreas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] aliases break my query "
},
{
"msg_contents": "\"Zeugswetter Andreas\" <[email protected]> writes:\n> I think we could get agreement to not allow implicit from entries \n> if there is a from clause in the statement, but allow them if a from clause\n> is missing altogether. The patch did not distinguish the two cases.\n\nHmm, that's a thought. Taking it a little further, how about this:\n\n\"Emit a notice [or error if you insist] when an implicit FROM item is\nadded that refers to the same underlying table as any existing FROM\nitem.\"\n\n95% of the complaints I can remember seeing were from people who got\nconfused by the behavior of \"FROM table alias\" combined with a reference\nlike \"table.column\". Seems to me the above rule would catch this case\nwithout being obtrusive in the useful cases. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 May 2000 17:34:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] aliases break my query "
},
{
"msg_contents": "Tom Lane writes:\n\n> \"Zeugswetter Andreas\" <[email protected]> writes:\n> > I think we could get agreement to not allow implicit from entries \n> > if there is a from clause in the statement, but allow them if a from clause\n> > is missing altogether.\n\nThat's what I had in mind.\n\n> \"Emit a notice [or error if you insist] when an implicit FROM item is\n> added that refers to the same underlying table as any existing FROM\n> item.\"\n\nThat's a step in the right direction, but I'd still like to catch\n\nSELECT a.a1, b.b1 FROM a;\n\nSELECT a.a1 FROM a WHERE a.a2 = b.b1;\n\nboth of which are more or less obviously incorrect and easily fixed.\n\n> 95% of the complaints I can remember seeing were from people who got\n> confused by the behavior of \"FROM table alias\" combined with a reference\n> like \"table.column\". Seems to me the above rule would catch this case\n> without being obtrusive in the useful cases. Comments?\n\n\n-- \nPeter Eisentraut Sernanders v�g 10:115\[email protected] 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n",
"msg_date": "Sat, 27 May 2000 00:30:14 +0200 (CEST)",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] aliases break my query "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> \"Emit a notice [or error if you insist] when an implicit FROM item is\n>> added that refers to the same underlying table as any existing FROM\n>> item.\"\n\n> That's a step in the right direction, but I'd still like to catch\n> SELECT a.a1, b.b1 FROM a;\n> SELECT a.a1 FROM a WHERE a.a2 = b.b1;\n> both of which are more or less obviously incorrect and easily fixed.\n\nMore or less obviously nonstandard, you mean. It's unlikely that\neither of those examples are incorrect in the sense of not doing what\nthe user expected them to.\n\nIf we were working in a green field then I'd agree that we ought to be\n100% SQL-spec-compliant on this point. But as is, we are talking about\nrejecting an extension that Postgres has always had and a lot of people\nfind useful. I'm not eager to do that; I think it'd be putting pedantry\nahead of usefulness and backwards-compatibility. What I want to see is\nthe minimum restriction that will catch likely errors, not an \"I'll\nannoy you until you change your queries to meet the letter of the spec\"\nkind of message.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 May 2000 18:42:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] aliases break my query "
},
{
"msg_contents": "> \"Zeugswetter Andreas\" <[email protected]> writes:\n> > I think we could get agreement to not allow implicit from entries \n> > if there is a from clause in the statement, but allow them if a from clause\n> > is missing altogether. The patch did not distinguish the two cases.\n> \n> Hmm, that's a thought. Taking it a little further, how about this:\n> \n> \"Emit a notice [or error if you insist] when an implicit FROM item is\n> added that refers to the same underlying table as any existing FROM\n> item.\"\n> \n> 95% of the complaints I can remember seeing were from people who got\n> confused by the behavior of \"FROM table alias\" combined with a reference\n> like \"table.column\". Seems to me the above rule would catch this case\n> without being obtrusive in the useful cases. Comments?\n\nYes, I even added a define called FROM_WARN. It was disabled, and never\nenabled. When can we enable it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 May 2000 19:43:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] aliases break my query"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > \"Zeugswetter Andreas\" <[email protected]> writes:\n> > > I think we could get agreement to not allow implicit from entries\n> > > if there is a from clause in the statement, but allow them if a from clause\n> > > is missing altogether. The patch did not distinguish the two cases.\n> >\n> > Hmm, that's a thought. Taking it a little further, how about this:\n> >\n> > \"Emit a notice [or error if you insist] when an implicit FROM item is\n> > added that refers to the same underlying table as any existing FROM\n> > item.\"\n> >\n> > 95% of the complaints I can remember seeing were from people who got\n> > confused by the behavior of \"FROM table alias\" combined with a reference\n> > like \"table.column\". Seems to me the above rule would catch this case\n> > without being obtrusive in the useful cases. Comments?\n> \n> Yes, I even added a define called FROM_WARN. It was disabled, and never\n> enabled. When can we enable it?\n\nHow about a SET variable which allows PostgreSQL to reject any\nqueries which are not entirely within the specificaton; kind of\nlike -ansi -pedantic with gcc? Perhaps that's quite a bit of\nwork, but it seems quite valuable for developing portable\napplications...Of course dependency on PostgreSQL extensions\nisn't a bad thing either ;-)\n\nMike Mascari\n",
"msg_date": "Fri, 26 May 2000 21:45:07 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] aliases break my query"
},
{
"msg_contents": "Mike Mascari <[email protected]> writes:\n> How about a SET variable which allows PostgreSQL to reject any\n> queries which are not entirely within the specificaton; kind of\n> like -ansi -pedantic with gcc? Perhaps that's quite a bit of\n> work, but it seems quite valuable for developing portable\n> applications...Of course dependency on PostgreSQL extensions\n> isn't a bad thing either ;-)\n\nHmm. Some aspects of that seem fairly straightforward, like rejecting\nthe table-not-in-FROM extension being discussed here. On the other\nhand, it'd be painful to check for uses of datatypes or functions not\npresent in the standard.\n\nIn any case, I think the general reaction will be \"good idea but a huge\namount of work compared to the reward\". Unless someone steps forward\nwho's willing to do the work, I'd bet this won't happen...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 May 2000 00:06:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] aliases break my query "
},
{
"msg_contents": "On Fri, 26 May 2000, Tom Lane wrote:\n> \"Zeugswetter Andreas\" <[email protected]> writes:\n> > I think we could get agreement to not allow implicit from entries \n> > if there is a from clause in the statement, but allow them if a from clause\n> > is missing altogether. The patch did not distinguish the two cases.\n> \n> Hmm, that's a thought. Taking it a little further, how about this:\n> \n> \"Emit a notice [or error if you insist] when an implicit FROM item is\n> added that refers to the same underlying table as any existing FROM\n> item.\"\n> \n> 95% of the complaints I can remember seeing were from people who got\n> confused by the behavior of \"FROM table alias\" combined with a reference\n> like \"table.column\". Seems to me the above rule would catch this case\n> without being obtrusive in the useful cases. Comments?\n\nI guess I would be more strict on the reason, that people playing with implicit\nfrom entries usually know what they are doing, and thus know how to avoid a from\nclause if they want that behavior. I don't see a reason to have one table in the\nfrom clause but not another. This is too misleading for me.\n\nAndreas\n",
"msg_date": "Sun, 28 May 2000 09:34:40 +0200",
"msg_from": "Andreas Zeugswetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] aliases break my query"
},
{
"msg_contents": "> At one time Bruce had made some patches to emit informative notice\n> messages about implicit FROM entries, but that got turned off again\n> for reasons that I forget...\n\nIt was triggered with common cases from the \"outer join\" syntax. It took\na while to track down since it was introduced while I was working on the\nsyntax feature :(\n\nIf it *really* needs to be put back in, then we should do so with a flag\nso we can disable the warning at compile time, run time, and/or in the\nouter join parser area. But imho sprinkling the parser with warnings for\nallowed syntax is heading the wrong direction. If it is legal, allow it.\nIf it is illegal, disallow it. If it is confusing for some, but works\nfine for others, it shouldn't become \"sort of legal\" with a warning.\n\n - Thomas\n",
"msg_date": "Wed, 31 May 2000 02:04:12 +0000",
"msg_from": "Thomas Lockhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re: aliases break my query"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Hiroshi Inoue\n> > -----Original Message-----\n> > From: Tom Lane [mailto:[email protected]]\n> > \n> > Anyway, it sounds like we agree that this is the approach to pursue.\n> > Do you have time to chase down the details?\n> \n> OK,I will examine a little though I'm a little busy this week.\n>\n\nSorry,I'm so late and haven't so much time to examin the details.\nI'm afraid another point now.\nWoundn't this change waste XIDs in case of abort loop ?\n\nAnyway,I examied the loop in PostgresMain()\n (;;)\n {\n ..\n StartTransactionCommand()\n ..\n pg_exec_query()\n ..\n CommitTransactionCommand()(/AbortCurrentTrabsaction())\n ..\n }\n\nIn my thoughts,the follwoing commands preceded by +?\nwould be added,ones preceded by -? would be removed.\n\nStartTransactionCommand()\n\tTBLOCK_DEFAULT\tStartTransaction()\t->\n\tTBLOCK_BEGIN\t\t\t\t-> TBLOCK_INPROGRESS\n\tTBLOCK_INPROGRES\t\t\t\t->\n\tTBLOCK_END\t\tCommitTransaction()\t->\n\t\t\t\tStartTransaction()\t-> TBLOCK_DEFAULT\n\tTBLOCK_ABORT\t\t\t\t->\n\tTBLOCK_ENDABORT\t\t\t\t->\n\t\nCommitTransactionCommand()\n\tTBLOCK_DEFAULT\tCommitTransaction()\t->\n\tTBLOCK_BEGIN\t\t\t\t-> TBLOCK_INPROGRESS\n\tTBLOCK_INPROGRESS\tCommandCounterIncrement()\t->\n\tTBLOCK_END\t\tCommitTransaction()\t-> TBLOCK_DEFAULT\n\tTBLOCK_ABORT\t+? AbortTransaction()\n\t\t\t\t+? StartTransaction()\t->\n\tTBLOCK_ENDABORT\t+? AbortTransaction()\t-> TBLOCK_DEFAULT\n\nBeginTransactionBlock() ( <- BEGIN command )\n\tTRANS_DISABLED\t\t\t\t->\n\totherwise\t\t-> TBLOCK_BEGIN\t-> TBLOCK_INPROGRESS\n\nUserAbortTransaction() ( <- ROLLBACK command )\n\tTRANS_DISABLED\t\t\t\t->\n\tTBLOCK_INPROGRESS\t -? AbortTransaction()\t-> TBLOCK_ENDABORT\n\tTBLOCK_ABORT\t\t\t\t-> TBLOCK_ENDABORT\n\totherwise\t\t-? 
AbortTransaction()\t-> TBLOCK_ENDABORT\n\nEndTransactionBlock() ( <- COMMIT command )\n\tTRANS_DISABLED\t\t\t\t->\n\tTBLOCK_INPROGRESS\t\t\t\t-> TBLOCK_END\t\n\tTBLOCK_ABORT\t\t\t\t-> TBLOCK_ENDABORT\n\totherwise\t\t\t\t\t-> TBLOCK_ENDABORT\n\nAbortCurrentTransaction() ( elog(ERROR/FATAL) )\n\tTBLOCK_DEFAULT\tAbortTransaction()\t->\n\tTBLOCK_BEGIN\tAbortTransaction()\n\t\t\t\t+? StartTransaction()\t-> TBLOCK_ABORT\n\tTBLOCK_INPROGRESS\tAbortTransaction()\n\t\t\t\t+? StartTransaction()\t-> TBLOCK_ABORT\n\tTBLOCK_END\t\tAbortTransaction()\t-> TBLOCK_DEFAULT\n\tTBLOCK_ABORT\t+? AbortTransaction()\n\t\t\t\t+? StartTransaction()\t->\n\tTBLOCK_ENDABORT\t+? AbortTransaction()\t-> TBLOCK_DEFAULT\n\nAbortOutAnyTransaction() ( Async_UnlistenOnExit() )\n\tTRANS_DEFAULT\t\t\t\t-> TBLOCK_DEFAULT\n\totherwise\t\tAbortTransaction()\t-> TBLOCK_DEFAULT\n\n\nRegards.\n\nHiroshi Inoue\[email protected]\n",
"msg_date": "Fri, 26 May 2000 12:16:06 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Orphaned locks in 7.0? "
}
] |
[
{
"msg_contents": "RedHat RPMs for PostgreSQL 7.0, release 3 are available immediately on\nftp.postgresql.org in /pub/binary/v7.0/redhat-RPM\n\nThey have also been uploaded to incoming.redhat.com for inclusion on\ncontrib.redhat.com. They will also soon be available on www.ramifordistat.net.\n\nThere is one additional RPM for the plperl procedural language. I was highly\ntempted to fold plperl into the perl subpackage until I took a long hard look\nat pl/perl and its dependency upon the shared libperl.so -- which those just\nneeding client-side perl access don't need. Saves 624K on disk to split them\nup -- not to mention that pl/perl is in a sortof experimental stage anyway, as\nit is not built by default in the PostgreSQL 7.0 release distribution tarball. \nOf course, Karl DeBisschop, who got plperl to build, had already done it that\nway, as well.... :-) However, in the future I may decide to fold pl/perl into\nthe -perl subpackage...\n\nNOTE: If you are still running 6.5.3, or 7.0 earlier than RC5, you will need to\ndo a full dump/initdb/restore. 
Further, if you are running 6.5.x or earlier,\nyou need to take a thorough look at the README.rpm distributed in\n/usr/doc/postgresql-7.0 BEFORE installing -- available for direct reading at\nftp://ftp.postgresql.org/pub/binary/v7.0/redhat-RPM/README -- there have been\nseveral changes since 6.5.3.\n\n From the changelog:\n* Thu May 25 2000 Lamar Owen <[email protected]>\n- 7.0-3\n- Incorporated Tatsuo's syslog segmentation patches\n- Incorporated some of Trond's changes (see below)\n-- Fixed some Perl 5.6 oddness in Rawhide\n- Incorporated some of Karl's changes (see below)\n-- PL/Perl should now work.\n- Fixed missing /usr/bin/pg_passwd.\n\n* Mon May 22 2000 Karl DeBisschop <[email protected]>\n- 7.0-2.1\n- make plperl module (works for linux i386, your guess for other platforms)\n- use \"make COPT=\" because postgreSQL configure script ignores CFLAGS\n\n* Sat May 20 2000 Lamar Owen <[email protected]>\n- 7.0-2\n- pg_options default values changed.\n- SPI headers (again!) fixed in a permanent manner -- hopefully!\n- Alpha patches!\n\n* Wed May 17 2000 Trond Eivind Glomsrød <[email protected]>\n- changed bug in including man pages\n\n* Tue May 16 2000 Trond Eivind Glomsrød <[email protected]>\n- changed buildroot, removed packager, vendor, distribution\n-- [Left all but buildroot as-is for PostgreSQL.org RPMS. 
LRO]\n- don't strip in package [strip in PostgreSQL.org RPMS]\n- fix perl weirdnesses (man page in bad location, remove \n perllocal.pod from file list)\n\nrpm -qi output:\nName : postgresql Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:40:57 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 9398864 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : PostgreSQL client programs and libraries.\nDescription :\nPostgreSQL is an advanced Object-Relational database management system\n(DBMS) that supports almost all SQL constructs (including\ntransactions, subselects and user-defined types and functions). The\npostgresql package includes the client programs and libraries that\nyou'll need to access a PostgreSQL DBMS server. These PostgreSQL\nclient programs are programs that directly manipulate the internal\nstructure of PostgreSQL databases on a PostgreSQL server. These client\nprograms can be located on the same machine with the PostgreSQL\nserver, or may be on a remote machine which accesses a PostgreSQL\nserver over a network connection. This package contains the client\nlibraries for C and C++, as well as command-line utilities for\nmanaging PostgreSQL databases on a PostgreSQL server.\n\nIf you want to manipulate a PostgreSQL database on a remote PostgreSQL\nserver, you need this package. 
You also need to install this package\nif you're installing the postgresql-server package.\n\nName : postgresql-perl Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:07 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 163275 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : Development module needed for Perl code to access a PostgreSQL DB.\nDescription :\nPostgreSQL is an advanced Object-Relational database management\nsystem. The postgresql-perl package includes a module for developers\nto use when writing Perl code for accessing a PostgreSQL database.\n\nName : postgresql-odbc Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:07 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 162527 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : The ODBC driver needed for accessing a PostgreSQL DB using ODBC.\nDescription :\nPostgreSQL is an advanced Object-Relational database management\nsystem. 
The postgresql-odbc package includes the ODBC (Open DataBase\nConnectivity) driver and sample configuration files needed for\napplications to access a PostgreSQL database using ODBC.\n\nName : postgresql-plperl Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:08 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 1575218 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : The plperl procedural language\nDescription :\nPostgreSQL is an advanced Object-Relational database management\nsystem. The postgresql-plperl package provides a perl-based\nprocedural language.\n\nName : postgresql-server Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:08 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 1737950 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : The programs needed to create and run a PostgreSQL server.\nDescription :\nThe postgresql-server package includes the programs needed to create\nand run a PostgreSQL server, which will in turn allow you to create\nand maintain PostgreSQL databases. PostgreSQL is an advanced\nObject-Relational database management system (DBMS) that supports\nalmost all SQL constructs (including transactions, subselects and\nuser-defined types and functions). You should install\npostgresql-server if you want to create and maintain your own\nPostgreSQL databases and/or your own PostgreSQL server. 
You also need\nto install the postgresql and postgresql-devel packages.\n\nName : postgresql-tcl Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:08 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 61226 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : A Tcl client library, and the PL/Tcl procedural language for PostgreSQL.\nDescription :\nPostgreSQL is an advanced Object-Relational database management\nsystem. The postgresql-tcl package contains the libpgtcl client library,\nthe pg-enchanced pgtclsh, and the PL/Tcl procedural language for the backend.\n\nName : postgresql-jdbc Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:07 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 699239 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : Files needed for Java programs to access a PostgreSQL database.\nDescription :\nPostgreSQL is an advanced Object-Relational database management\nsystem. 
The postgresql-jdbc package includes the .jar file needed for\nJava programs to access a PostgreSQL database.\n\nName : postgresql-test Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:09 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 5012898 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : The test suite distributed with PostgreSQL.\nDescription :\nPostgreSQL is an advanced Object-Relational database management\nsystem. The postgresql-test package includes the sources and pre-built\nbinaries of various tests for the PostgreSQL database management\nsystem, including regression tests and benchmarks.\n\nName : postgresql-python Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:08 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 128749 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : Development module for Python code to access a PostgreSQL DB.\nDescription :\nPostgreSQL is an advanced Object-Relational database management\nsystem. 
The postgresql-python package includes a module for\ndevelopers to use when writing Python code for accessing a PostgreSQL\ndatabase.\n\nName : postgresql-devel Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:07 PM EDT Build Host: utility.wgcr.org\nGroup : Development/Libraries Source RPM: postgresql-7.0-3.src.rpm\nSize : 1849314 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : PostgreSQL development header files and libraries.\nDescription :\nThe postgresql-devel package contains the header files and libraries\nneeded to compile C or C++ applications which will directly interact\nwith a PostgreSQL database management server and the ecpg Embedded C\nPostgres preprocessor. You need to install this package if you want to\ndevelop applications which will interact with a PostgreSQL server. If\nyou're installing postgresql-server, you need to install this\npackage.\n\nName : postgresql-tk Relocations: /usr \nVersion : 7.0 Vendor: PostgreSQL Global Development Group\nRelease : 3 Build Date: Thu 25 May 2000 11:28:46 PM EDT\nInstall date: Thu 25 May 2000 11:41:10 PM EDT Build Host: utility.wgcr.org\nGroup : Applications/Databases Source RPM: postgresql-7.0-3.src.rpm\nSize : 996729 License: BSD\nPackager : Lamar Owen <[email protected]>\nURL : http://www.postgresql.org/\nSummary : Tk shell and tk-based GUI for PostgreSQL.\nDescription :\nPostgreSQL is an advanced Object-Relational database management\nsystem. The postgresql-tk package contains the pgaccess\nprogram. Pgaccess is a graphical front end, written in Tcl/Tk, for the\npsql and related PostgreSQL client programs.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 25 May 2000 23:54:11 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7.0-3 RPMset available."
}
] |
[
{
"msg_contents": "Isn't template1 just that, a template?\n\nIf a user has the ability to create a user or database, then they should\nbe able to do it from anywhere.\n\nPersonally, I think normal users shouldn't have access to template1,\nsimply because they could create objects in there that can be copied\ninto _any_new_ database.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]]\nSent: Thursday, May 25, 2000 5:18 PM\nTo: PostgreSQL-development\nSubject: [HACKERS] Create user/create database outside template1\n\n\nWhat happens if people issue create user or create database outside\ntemplate1. Do we need to prevent it? Seems they work, but am not sure\nit is OK. Do we need to add a check?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n",
"msg_date": "Fri, 26 May 2000 07:43:08 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Create user/create database outside template1"
}
] |
[
{
"msg_contents": "> \"Hiroshi Inoue\" <[email protected]> writes:\n> > As far as I see,PostgreSQL doesn't call LockBuffer() before\n> > calling smgrwrite(). This seems to mean that smgrwrite()\n> > could write buffers to disk which are being changed by\n> > another backend. If the(another) backend was aborted by\n> > some reason the buffer page would remain half-changed.\n> \n> Hmm ... looks fishy to me too. Seems like we ought to hold\n> BUFFER_LOCK_SHARE on the buffer while dumping it out. It\n> wouldn't matter under normal circumstances, but as you say\n> there could be trouble if the other backend crashed before\n> it could mark the buffer dirty again, or if we had a system\n> crash before the dirtied page got written again.\n\nWell-known issue. Buffer latches were only implemented in 6.5,\nand there was no time to think hard about the subject -:)\nYes, we have to share-lock the buffer before writing, and this is what\nbufmgr will have to do for WAL anyway (to ensure that the last buffer\nchanges are already logged)... but please check that the buffer is not\nexclusively locked by the backend itself somewhere before smgrwrite()...\n\nVadim\n",
"msg_date": "Fri, 26 May 2000 00:00:26 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: smgrwrite() without LockBuffer (was RE: Shouldn't flush dirty buffers at shutdown ?)"
}
] |
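The invariant settled on in the thread above — hold at least a share lock on a buffer while dumping it out, so a concurrent modifier can never let a half-changed page reach disk — can be sketched as a toy model. All names here (ToyBuffer, buf_write, BUF_LOCK_SHARE) are invented for illustration; this is not PostgreSQL's real bufmgr/smgr code:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model only: these names stand in for PostgreSQL's buffer-manager
 * and storage-manager API, they are not the real thing. */
enum { BUF_UNLOCKED = 0, BUF_LOCK_SHARE, BUF_LOCK_EXCLUSIVE };

typedef struct
{
    int lock;     /* lock mode currently held by the would-be writer */
    int in_mem;   /* page contents in the shared buffer */
    int on_disk;  /* page contents on disk */
} ToyBuffer;

/* Stand-in for smgrwrite(): refuse to write unless the caller holds at
 * least a share lock, so no concurrent modifier can leave the page
 * half-changed while it is being copied out.  Returns 0 on success,
 * -1 if the caller forgot to lock first. */
int buf_write(ToyBuffer *buf)
{
    if (buf->lock == BUF_UNLOCKED)
        return -1;              /* would race with a concurrent modifier */
    buf->on_disk = buf->in_mem; /* the actual "write" */
    return 0;
}
```

Under this discipline a checkpoint-style writer takes the share lock first, while modifiers take the exclusive lock, so the two can never overlap on the same page.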
[
{
"msg_contents": "As usual when replying from here, replies prefixed with PM:\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n\n\n-----Original Message-----\nFrom: Gunnar R|nning [mailto:[email protected]]\nSent: Thursday, May 25, 2000 12:38 PM\nTo: [email protected]\nSubject: [INTERFACES] Postgresql 7.0 JDBC exceptions - broken\nconnections ?\n\n\nHello, \n\nAs I told you in a former mail I'm trying to migrate an application to\nuse PostgreSQL 7.0. The application now seems to be working pretty well and\nI have 5 users that have been testing our web application with the\nPostgreSQL database for the past 24 hours. \n\nThe application runs with a connection pool, but after some time some of\nthese connections seem to be broken. I.e. only some of the queries work --\nI will change the connection pool code to handle this, but I would like to\nknow if anybody knows why the connections get into an unusable state.\nCould it be backend crashes or similar things ? I'm turning on debugging for\nthe database server to see if I can find anything there, but anyway here is\nthe exception I get :\n\nPM: How long is it before the problem starts? I'm wondering if the\nproblem is because the backend is sitting there for a long period.\n\nselect distinct entity.*,location.loc_id,location.loc_name\nfrom entity,locationmap,location,entityindex2 as e0\nwhere locationmap.ent_id=entity.ent_id and\nlocationmap.loc_id=location.loc_id and e0.ei_word='kjøttbørsen' and\ne0.ent_id=entity.ent_id and ENT_STATUS=4\norder by ent_title,location.loc_name,location.loc_id\nUnknown Response Type u\n\nPM: Does anyone [on Hackers] know what the u code is for? The fact it's\nin lower case tells me that the protocol/connection got broken somehow.\n\n",
"msg_date": "Fri, 26 May 2000 09:08:28 +0100",
"msg_from": "Peter Mount <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Postgresql 7.0 JDBC exceptions - broken connections ?"
},
{
"msg_contents": "Peter Mount <[email protected]> writes:\n\n> PM: How long is it before the problem starts? I'm wondering if the\n> problem is because the backend is sitting there for a long period.\n\nThe problem started after the connections had been open for about 16 hours or\nso. So this could be the problem. \n\nHowever, yesterday I restarted my database with some new options to do more\nlogging and also print some debug information. I haven't seen any of the\nexceptions I reported yesterday in the past 20 hours after the restart.\n\n> Unknown Response Type u\n> \n> PM: Does anyone [on Hackers] know what the u code is for? The fact it's\n> in lower case tells me that the protocol/connection got broken somehow.\n\nI got a lot of these errors and the response type varied between different\ncharacters, so your theory seems plausible.\n\nRegards, \n\n\tGunnar\n",
"msg_date": "26 May 2000 11:36:38 +0200",
"msg_from": "Gunnar R|nning <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 7.0 JDBC exceptions - broken connections ?"
},
{
"msg_contents": "Peter Mount <[email protected]> writes:\n> Unknown Response Type u\n\n> PM: Does anyone [on Hackers] know what the u code is for? The fact it's\n> in lower case tells me that the protocol/connection got broken somehow.\n\nThere is no 'u' message code. Looks to me like the client got out of\nsync with the backend and is trying to interpret data as the start of\na message.\n\nI think that this and the \"Tuple received before MetaData\" issue could\nhave a common cause, namely running out of memory on the client side\nand not recovering well. libpq is known to emit its equivalent of\n\"Tuple received before MetaData\" when the backend hasn't violated the\nprotocol at all. What happens is that libpq runs out of memory while\ntrying to accumulate a large query result, \"recovers\" by resetting\nitself to no-query-active state, and then is surprised when the next\nmessage is another tuple. (Obviously this error recovery plan needs\nwork, but no one's got round to it yet.) I wonder whether the JDBC\ndriver has a similar problem, and whether these queries could have\nbeen retrieving enough data to trigger it?\n\nAnother possibility is that the client app is failing to release\nquery results when done with them, which would eventually lead to\nan out-of-memory condition even with not-so-large queries.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 May 2000 11:13:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 7.0 JDBC exceptions - broken connections ?"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> I think that this and the \"Tuple received before MetaData\" issue could\n> have a common cause, namely running out of memory on the client side\n> and not recovering well. libpq is known to emit its equivalent of\n> \"Tuple received before MetaData\" when the backend hasn't violated the\n> protocol at all. What happens is that libpq runs out of memory while\n> trying to accumulate a large query result, \"recovers\" by resetting\n> itself to no-query-active state, and then is surprised when the next\n> message is another tuple. (Obviously this error recovery plan needs\n> work, but no one's got round to it yet.) I wonder whether the JDBC\n> driver has a similar problem, and whether these queries could have\n> been retrieving enough data to trigger it?\n> \n\nThis could be a possible explanation, as some of the queries may indeed\nretrieve large amounts of data. I have also noticed a couple of \"Out of\nMemory\" exceptions that could be related. (These seem to be \"temporary\"\nout-of-memory exceptions, and not permanent memory leaks, so I guess they\ncould be caused by queries returning huge amounts of data.)\n\n> Another possibility is that the client app is failing to release\n> query results when done with them, which would eventually lead to\n> an out-of-memory condition even with not-so-large queries.\n\nI don't think this is the case. I've been running the application through\nOptimizeIT to profile memory and CPU usage and I haven't been able to spot\nany memory leaks in the driver; the quality of the JDBC driver is\nactually our main reason to migrate our application to PostgreSQL. \n\nregards, \n\n\tGunnar\n\n\n",
"msg_date": "27 May 2000 11:30:27 +0200",
"msg_from": "Gunnar R|nning <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 7.0 JDBC exceptions - broken connections ?"
}
] |
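The pool-level workaround Gunnar mentions ("I will change the connection pool code to handle this") is language-independent: probe each pooled connection on checkout and replace it if the probe fails. A toy sketch in C, where conn_probe() and conn_reopen() are invented stand-ins for a cheap health-check query (e.g. a trivial SELECT) and a reconnect — not a real JDBC or libpq API:

```c
#include <assert.h>

/* Toy connection-pool hardening: never hand out a wedged connection. */
typedef struct
{
    int broken;      /* 1 if the underlying connection is wedged */
    int generation;  /* bumped every time we had to reopen it */
} ToyConn;

/* Stand-in for a health check, e.g. running a trivial query. */
static int conn_probe(const ToyConn *c)
{
    return !c->broken;
}

/* Stand-in for dropping the wedged connection and opening a fresh one. */
static void conn_reopen(ToyConn *c)
{
    c->broken = 0;
    c->generation++;
}

/* Hand out a connection, repairing it first if the probe fails. */
ToyConn *pool_checkout(ToyConn *c)
{
    if (!conn_probe(c))
        conn_reopen(c);
    return c;
}
```

The cost is one extra round trip per checkout; pools often amortize it by probing only connections that have been idle longer than some threshold.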
[
{
"msg_contents": "Rather than going on and on about the subject in little emails, I've put\ntogether some information about UNDER and INHERITS, as I see them, on a\nwebpage at:\n\nhttp://www.comptechnews.com/~reaster/pgoo.html\n\nPeople might consider what it says in deciding the fate of INHERITS and of how\nto implement SQL's UNDER. I'm of the feeling that UNDER should be implemented\naccording to the official SQL-1999 standard. INHERITS should be left as the\nPostgreSQL multiple inheritance language extension, rather than implementing\nnonstandard draft proposals of UNDER, which would amount to replacing one\nPostgreSQL language extension for multiple inheritance with just another\nPostgreSQL language extension for the same thing, just under a different name.\nIt would only disrupt any userbase of INHERITS and introduce all the\nproblems the designers of SQL sought to avoid. INHERITS, the way it stands,\nis a good, simple inheritance mechanism. It does not transfer attribute\nconstraints like UNIQUE and PRIMARY KEY (as far as I know), nor does it allow an\nindex to be shared on those, because such things create issues in multiple\ninheritance. ALTER TABLE ADD can be used to reestablish constraints on\ninherited attributes, though without the ability to share an index as\nsupertable and subtable are intended (I think) to do. The official\nsingle-inheritance UNDER is designed to support inheritance of constraints and\nthe sharing of indices from the maximal supertable down into its subtables. The\nmaximal supertable is required to have some UNIQUE NOT NULL attribute for this\npurpose (SQL-1999 Foundation, Section 11.3, Syntax Rule 7.g). I feel that maybe\nstandard single-inheritance UNDER and the current PostgreSQL extension,\nINHERITS, can be used together to complement each other. INHERITS provides a\nsimple multiple inheritance ability. 
UNDER provides a feature-rich\nsingle-inheritance container where subtables are extensions of the maximal\nsupertable. One change I think is not unreasonable is that INHERITS allow\nparent tables to be dropped. I'd like to know the reason why it's not allowed\nnow.\n\n-- \nRobert B. Easter\[email protected]\n",
"msg_date": "Fri, 26 May 2000 08:20:03 -0400",
"msg_from": "\"Robert B. Easter\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "UNDER and INHERITS"
},
{
"msg_contents": "\"Robert B. Easter\" wrote:\n\n> The official single\n> inheritance UNDER, is designed to support inheritance of constraints \n> and the sharing of indices from the maximal supertable down into its \n> subtables. The\n> maximal supertable is required to have some UNIQUE NOT NULL attribute \n> for this purpose (SQL-1999 Foundation, Section 11.3, Syntax Rule 7.g). \n> I feel that maybe standard single-inheritance UNDER and the current \n> PostgreSQL extension, INHERITS, can be used together to complement each \n> other. INHERITS provides a simple multiple inherit ability. UNDER \n> provides a feature-rich single inheritance container where subtables \n> are extensions the maximal supertable. One change I think is not \n> unreasonable, is that INHERITS allow parent tables to\n> be dropped. I'd like to know the reason why its not allowed now.\n\nThe reason dropping parent tables, and inherited indexes and constraints\ndon't work is that no-one has bothered to implement them. \n\nIn so far as creating an index on only one table might be useful (as is\nthe case in postgres now), the extension \"create index on only table\"\nwould seem appropriate. No sense on making blanket rules that under must\ninherit them and inherits can't.\n",
"msg_date": "Sat, 27 May 2000 12:18:02 +1000",
"msg_from": "Chris Bitmead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS-OO] UNDER and INHERITS"
}
] |
[
{
"msg_contents": "I'm afraid this might sound rather dumb. I'm hoping I can just get a\nlittle clarification about file locations.\n\nI've just started playing w/ SPI. As a first stab, I thought I'd\ncompile a couple of the test applications in /contrib.\n\nI pointed gcc to include files from /usr/local/pgsql - i.e. 'gcc ...\n-I/usr/local/pgsql/include ...'. This of course didn't work. \n/usr/local/pgsql/include/executor/spi.h attempts to include files which\ndon't exist in the install directory. They only exist in\n/usr/local/src/postgresql-7.0/src/include (or wherever you put the\nsource).\n\nAfter installation, shouldn't everything you need be in\n/usr/local/pgsql?\n\nIt's simple enough to just use\n/usr/local/src/postgresql-7.0/src/include. But I don't know when to use\none, and when to use the other.\n\nSorry if this is a completely naive question. I'm pretty much flying\nsolo here. I'm an architect who's gotten frustrated with the\nscalability limitations of using something like MS Access. I'm the\nonly person I know who uses any *NIX whatsoever, never mind PostgreSQL. \nC/C++ doesn't bother me, but I'm really not too familiar w/ *NIX file\nconventions, etc.\n\n-Ron-\n",
"msg_date": "Fri, 26 May 2000 11:24:41 -0400",
"msg_from": "Ron Peterson <[email protected]>",
"msg_from_op": true,
"msg_subject": "SPI & file locations"
},
{
"msg_contents": "Ron Peterson <[email protected]> writes:\n> After installation, shouldn't everything you need be in\n> /usr/local/pgsql?\n\nYeah, it should really. We've had this discussion before. The real\nproblem is that no one wants to install the entire pgsql include tree,\nbut it's hard to draw the line at what an SPI extension might need or\nnot need. It doesn't help that we've been sloppy about what lives in\nwhich include file, too :-(. Sooner or later someone should go through\nthe whole include tree and try to rearrange things so that there's a\nfairly compact set of files that need to be exported.\n\nIn the meantime, pointing at the source tree is a good way to build SPI\nextensions...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 May 2000 12:12:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SPI & file locations "
},
{
"msg_contents": "Ron Peterson wrote:\n> \n> I'm afraid this might sound rather dumb. I'm hoping I can just get a\n> little clarification about file locations.\n> \n> I've just started playing w/ SPI. As a first stab, I thought I'd\n> compile a couple of the test applications in /contrib.\n> \n> I pointed gcc to include files from /usr/local/pgsql - i.e. 'gcc ...\n> -I/usr/local/pgsql/include ...'. This of course didn't work.\n> /usr/local/pgsql/include/executor/spi.h attempts to include files which\n> don't exist in the install directory. They only exist in\n> /usr/local/src/postgresql-7.0/src/include (or wherever you put the\n> source).\n> \n> After installation, shouldn't everything you need be in\n> /usr/local/pgsql?\n\nI too have run into this dependency problem. The number of\nheaders required to compile an SPI code module is around 80, if I\nrecall correctly. Lamar Owen was good enough to include those\nheaders as a part of the RPM distribution and they would go into\n/usr/include/pgsql. I believe he got the dependency list from\nOliver Elphick who manages the Debian package, so that should be\ncorrect as well. If you're not using RedHat or Debian\ndistributions, I think you're stuck with keeping the backend\nsource tree lying around. :-(\n\nMike Mascari\n",
"msg_date": "Fri, 26 May 2000 14:33:44 -0400",
"msg_from": "Mike Mascari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SPI & file locations"
},
{
"msg_contents": "\nSo I got the moddatetime trigger example in /contrib working. A couple\nof notes:\n\nmoddatetime.c makes reference to DATETIME. This needs to be changed to\nTIMESTAMP. I've done this, if you want the source.\n\nI need to make .so files in two steps: first make a regular object file,\nthen compile that w/ -fpic and -shared and output an .so file. If I try\nto do this in one step, it doesn't work. This may very well be the way\nthe compiler is _supposed_ to work, I dunno. RH6.1, kernel 2.2.13, gcc\nversion egcs-2.91.66_19990314/Linux (egcs-1.1.2 release).\n\n-Ron-\n",
"msg_date": "Fri, 26 May 2000 15:30:22 -0400",
"msg_from": "Ron Peterson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SPI & file locations"
},
{
"msg_contents": "On Fri, 26 May 2000, Mike Mascari wrote:\n> Ron Peterson wrote:\n> > After installation, shouldn't everything you need be in\n> > /usr/local/pgsql?\n\nToo many assumptions are in the source that the source will always be there. \nNot necessarily true!\n\n> I too have run into this dependency problem. The number of\n> headers required to compile an SPI code module is around 80, if I\n> recall correctly. Lamar Owen was good enough to include those\n> headers as apart of the RPM distribution and they would go into\n> /usr/include/pgsql. I believe he got the dependency list from\n> Oliver Elphick who manages the Debian package, so that should be\n\nFor PostgreSQL 7.0-2 and above, the SPI header list is dynamically generated\nduring package build and manually copied into place using the following\none-liner: \n\n/lib/cpp -M -I. -I../backend executor/spi.h |xargs -n 1|grep \\\\W|grep -v ^/|grep -v spi.h | sort |cpio -pdu $RPM_BUILD_ROOT/usr/include/pgsql\n\nReplace $RPM_BUILD_ROOT/usr/include/pgsql with the directory of your choice,\nand run this one-liner when the cwd is in src/include in the PostgreSQL source\ntree. The sort is optional, of course.\n\nThis could easily enough be included in the make install, couldn't it?\n(Tom? Anyone?) I realize that GNU grepisms (or is \\W common?) are used above,\nbut, after all, I _know_ my one-liner in the RPM spec file installation section\nis going to be running on RedHat with a complete development environment, so it\nwasn't designed to be portable. I also realize the regexps could be tuned to\nonly need a single grep invocation, but, it works and works nicely as-is in the\nRPM building environment.\n\nOh, BTW, there's the same number of header includes as there were with 6.5.3,\nbut there are differences in the list.... SPI development CAN be done without\n26+ MB of source code taking up space -- Mike is _doing_ it, IIRC. 
In fact,\nMike is the one who prodded me to get it working in the RPMs.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 26 May 2000 19:20:33 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SPI & file locations"
},
{
"msg_contents": "On Fri, 26 May 2000, Tom Lane wrote:\n> Ron Peterson <[email protected]> writes:\n> > After installation, shouldn't everything you need be in\n> > /usr/local/pgsql?\n \n> Yeah, it should really. We've had this discussion before. The real\n> problem is that no one wants to install the entire pgsql include tree,\n> but it's hard to draw the line at what an SPI extension might need or\n\nIf it only needs what spi.h depends upon, then the line is drawn by the\none-liner in my other message.\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Fri, 26 May 2000 19:36:31 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SPI & file locations"
},
{
"msg_contents": "On Fri, 26 May 2000, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > /lib/cpp -M -I. -I../backend executor/spi.h |xargs -n 1|grep \\\\W|grep -v ^/|grep -v spi.h | sort |cpio -pdu $RPM_BUILD_ROOT/usr/include/pgsql\n \n> > This could easily enough be included in the make install, couldn't it?\n> > (Tom? Anyone?) I realize that GNU grepisms (or is \\W common?) are\n> > used above,\n \n> That's just the tip of the iceberg of the portability problems in the\n> above. If you want to propose that we actually do something like that\n> during 'make install', you're gonna have to work a lot harder.\n\n:-) I expect no less (than to work harder, of course....). Why not do\nsomething like that, other than the 'out-of-sight, out-of-mind' problem? Or,\nto put it far more bluntly, the SPI installation is very broken if the SPI\ndeveloper has to go around manually installing header files that make install\nshould have automatically taken care of.\n\n> (However, as a non-portable trick for getting a list of files that need\n> to be included in a hand-generated makefile, it's not bad.)\n\nI take that as high praise :-). No, really it IS a quick hack -- I got tired\nof generating the list by hand, so I automated it. And, when the list shrinks\ndown to nearly nothing (10-15 includes, maybe?), I won't have to change a thing.\n\n> The more serious problem is \"what else might an SPI module need besides\n> spi.h\".\n\nWhat else indeed. Isn't spi.h the exported SPI itself? We seem to have a very\npoorly defined line between exported and private interfaces -- back to my point\nabout the assumption that the whole source tree is available for any little\nprogram's whim (such as the regression test suite, which was _interesting_ to\nget working without a bunch of the source tree around.) 
Mike Mascari may be\nable to answer more about what SPI programs require, as he is doing SPI\ndevelopment.\n\n> Also, it disturbs me a lot that spi.h pulls in 80+ include\n> files in the first place --- there's got to be stuff in there that\n> needn't/shouldn't be exported.\n\nSuch as lztext.h?\n\n> rather than fall back on automation that will let the list bloat even\n> more without anyone much noticing...\n\nPart of that would be to think more like a packager or user, rather than a\ndeveloper. That was the first thing I learned when tackling the RPM's, was,\nthat I was a packager -- while that does involve knowledge of the guts of the\npackage, it also requires a mindset very different from development -- more\nuser-oriented, I suppose.\n\nA packager for RedHat will start thinking about what can be partitioned out,\nsaving the user's HD space. A packager will start thinking about how the\npackage _can_ be partitioned, and logical partitioning (for instance, the\nabsolutely worst possible packaging for PostgreSQL in RPM form would be a\nsingle RPM with everything in it -- thus, requiring a database server to have a\nfull-bore X11 installation for pgaccess (which requires tk and tcl, as well),\nwhen that database server may have absolutely no need for pgaccess.)). I am\nnow even considering splitting out pltcl into a separate package -- as pltcl\nimplicitly requires the server package, it makes the whole tcl package require\nthe server package -- and someone may have legitimate need for a postgresql\nclient machine _without_ the server AND need the tcl client. Maybe a\npgaccess-based administration client, perhaps?). 
\n\nThose who build from the source have the configure options to eliminate (in\ntheir opinion) the cruft they don't need -- RPM users don't have that option --\nso, I build everything, and split the distribution.\n\nSome issues I have seen along these lines are:\n1.)\t The header situation -- assumes a source tree somewhere so that you\ncan pull in things....\n2.)\tThe regression tests -- you know, someone MIGHT want to _run_ those\ntests on a database server in a production situation where there is no make, no\ncompiler, and no PostgreSQL source tree (where config.guess resides). The\ninstallation can and should prebuild all necessary binaries and should\npreconfigure the results of a config.guess -- and the shell script should not\nneed to be invoked by make. (Yes, the RPMset does all this (except eliminate\nthe need for config.guess), and it's packaged as the postgresql-test rpm, which\nis an optional 1.5MB package -- I wimped out and packaged the whole test\nsubtree....)\n\nWhile I am all for source availability and the whole open source (free\nsoftware) model, I am also for easily installed and upgraded packages for users\nwho simply want to use the package in a production environment. I've been\nthere and done that -- and was more than a little frustrated at the state of\nthe RPM packaging. Thus my current situation :-).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n",
"msg_date": "Fri, 26 May 2000 21:38:27 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] SPI & file locations"
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> /lib/cpp -M -I. -I../backend executor/spi.h |xargs -n 1|grep \\\\W|grep -v ^/|grep -v spi.h | sort |cpio -pdu $RPM_BUILD_ROOT/usr/include/pgsql\n\n> This could easily enough be included in the make install, couldn't it?\n> (Tom? Anyone?) I realize that GNU grepisms (or is \\W common?) are\n> used above,\n\nThat's just the tip of the iceberg of the portability problems in the\nabove. If you want to propose that we actually do something like that\nduring 'make install', you're gonna have to work a lot harder.\n(However, as a non-portable trick for getting a list of files that need\nto be included in a hand-generated makefile, it's not bad.)\n\nThe more serious problem is \"what else might an SPI module need besides\nspi.h\". Also, it disturbs me a lot that spi.h pulls in 80+ include\nfiles in the first place --- there's got to be stuff in there that\nneedn't/shouldn't be exported. I know that an SPI developer who's just\ntrying to get some work done couldn't care less, but I'd like to see us\nmake some effort to actually clean up the list of files to be exported,\nrather than fall back on automation that will let the list bloat even\nmore without anyone much noticing...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 May 2000 22:22:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SPI & file locations "
},
{
"msg_contents": "Lamar Owen <[email protected]> writes:\n> :-) I expect no less (than to work harder, of course....). Why not do\n> something like that, other than the 'out-of-sight, out-of-mind'\n> problem? Or, to put it far more bluntly, the SPI installation is very\n> broken if the SPI developer has to go around manually installing\n> header files that make install should have automatically taken care\n> of.\n\nAgreed, but I'm worried about the 'out-of-sight, out-of-mind' aspect.\n\n>> The more serious problem is \"what else might an SPI module need besides\n>> spi.h\".\n\n> What else indeed. Isn't spi.h the exported SPI itself? We seem to\n> have a very poorly defined line between exported and private\n> interfaces\n\nAh-hah, I think you and I are on exactly the same wavelength there.\n\nMy whole problem with the spi.h-imports-88-headers business is that it\nexposes in gory detail the fact that *we have absolutely no idea* what\nwe consider an exported backend interface and what we don't. I don't\nlike the idea of an automatic tool that exports whatever it thinks might\nbe needed, because if we let ourselves go down that path we will very\nsoon find ourselves trying to preserve backwards compatibility with\nsomething that should never have been exported at all.\n\nBasically I feel that this is a problem that requires some actual\nthought and design judgment. I don't want to see us substituting\na one-liner script hack for design judgment. OTOH, I don't really\nwant to do the legwork myself (lame excuse: never having written\nan SPI module myself, I have little idea what they need to see).\nI'm just concerned about the long-term consequences of taking the\neasy way out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 May 2000 01:49:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] SPI & file locations "
},
{
"msg_contents": "On Sat, 27 May 2000, Tom Lane wrote:\n> Lamar Owen <[email protected]> writes:\n> > What else indeed. Isn't spi.h the exported SPI itself? We seem to\n> > have a very poorly defined line between exported and private\n> > interfaces\n \n> Ah-hah, I think you and I are on exactly the same wavelength there.\n \n> My whole problem with the spi.h-imports-88-headers business is that it\n> exposes in gory detail the fact that *we have absolutely no idea* what\n> we consider an exported backend interface and what we don't. I don't\n\nYes, this is a problem. And, yes, it needs to be taken care of in a designed,\nmethodical manner -- the spi.h header should only need a few things (I, like\nyou, have not done any SPI development yet to see what really _is_ needed....).\n\n> Basically I feel that this is a problem that requires some actual\n> thought and design judgment. \n\nYes, most certainly. Can someone who has actual SPI experience look at this?\n\n> I'm just concerned about the long-term consequences of taking the\n> easy way out.\n\nI'll have to admit to taking the easy way out a little in my RPM solution --\nbut, it's only there so that an advertised development interface can be\nactually used. Once a thorough look is taken at the whole header mess for SPI,\nI still won't have to change anything, which is good both for me and for RPM\nusers, as I really don't keep track of every header file dependency change --\nthus, RPM releases won't happen (again, as 7.0-1 was in error in this regard)\nwith broken header deps, requiring a bugfix package.\n\nIf no one else is forthcoming with a solution to this problem, I'll take a look\nat it (possibly during the development cycle for 7.1 after the 7.0.x series has\nstabilized, when the RPM cycle is at idle).\n\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sat, 27 May 2000 15:50:05 -0400",
"msg_from": "Lamar Owen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] SPI & file locations"
},
{
"msg_contents": "Hmmm...\n\nI've attached a small patch. This patch includes the list of dirs and\nheaders referred to by spi.h (and _not_ more).\n\nOn Sat, 27 May 2000, Lamar Owen wrote:\n\n> On Sat, 27 May 2000, Tom Lane wrote:\n> > My whole problem with the spi.h-imports-88-headers business is that it\n> > exposes in gory detail the fact that *we have absolutely no idea* what\n> > we consider an exported backend interface and what we don't. I don't\n\nYes, but _before_ somebody creates the new (more structured?) header layout,\nmust every SPI developer search for these files?\n\n> Yes, this is a problem. And, yes, it needs to be taken care of in a\n> designed, methodical manner -- the spi.h header should only need a few\n> things (I, like you, have not done any SPI development yet to see what\n> really _is_ needed....).\n\n\n--\n nek;(",
"msg_date": "Tue, 30 May 2000 12:44:48 +0200 (CEST)",
"msg_from": "Peter Vazsonyi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [GENERAL] SPI & file locations"
}
] |
[
{
"msg_contents": "Is \\h select in psql supposed to show SELECT and SELECT INTO? If\nscrolls off the screen.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 May 2000 11:37:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "\\h SELECT"
}
] |
[
{
"msg_contents": "I added the following test to psql/help.c:\n\n /* if we have an exact match, exit, fixes \\h SELECT */\n if (strcasecmp(topic, QL_HELP[i].cmd) == 0)\n break;\n\nThis will exit the help loop if an exact match has already been found. \nThis prevents \\h select from showing SELECT INTO help. Peter, hope you\nare OK with this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 May 2000 11:42:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fixed psql \\h SELECT"
}
] |
[
{
"msg_contents": "\n\n I work on query cache support in SPI and for better inspiration I see\nhow use SPI good programmer in RI triggers :-)\n\n And I a little surprised, in one part of RI Jan use SPI_prepare/saveplan\nbefore SPI_connect(). I don't know if this part is used, but if I see to SPI\nI must say \"it can't works --- it must return error SPI_ERROR_UNCONNECTED \nfrom _SPI_begin_call()\". \n\n It is ri_triggers.c: row 253\n\n Am I right?\n \n\t\t\t\t\t\tKarel\n\n",
"msg_date": "Fri, 26 May 2000 18:06:47 +0200 (CEST)",
"msg_from": "Karel Zak <[email protected]>",
"msg_from_op": true,
"msg_subject": "possible bug in RI?"
}
] |
[
{
"msg_contents": "> We might have part of the story in the recently noticed fact that\n> each insert/update query begins by doing a seqscan of pg_index.\n> \n> I have done profiles of INSERT in the past and not found any really\n> spectacular bottlenecks (but I was looking at a test table with no\n> indexes, so I failed to see the pg_index problem :-(). Last time\n> I did it, I had these top profile entries for inserting 100,000 rows\n> of 30 columns apiece:\n\nWell, I've dropped index but INSERTs still take 70 sec and \nCOPY just 1sec -:(((\n\nVadim\n",
"msg_date": "Fri, 26 May 2000 11:32:20 -0700",
"msg_from": "\"Mikheev, Vadim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Berkeley DB... "
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]]On Behalf Of Mikheev, Vadim\n> \n> > We might have part of the story in the recently noticed fact that\n> > each insert/update query begins by doing a seqscan of pg_index.\n> > \n> > I have done profiles of INSERT in the past and not found any really\n> > spectacular bottlenecks (but I was looking at a test table with no\n> > indexes, so I failed to see the pg_index problem :-(). Last time\n> > I did it, I had these top profile entries for inserting 100,000 rows\n> > of 30 columns apiece:\n> \n> Well, I've dropped index but INSERTs still take 70 sec and \n> COPY just 1sec -:(((\n>\n\nDid you run vacuum after dropping indexes ?\nBecause DROP INDEX doesn't update relhasindex of pg_class,\nplanner/executer may still look up pg_index.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n",
"msg_date": "Sat, 27 May 2000 12:11:58 +0900",
"msg_from": "\"Hiroshi Inoue\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Berkeley DB... "
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm trying to create a type password; the goal is to have a table like:\n\nCREATE TABLE test (\nusername varchar,\npass passwd);\n\ninsert into test values ('me','secret');\n\nand have \"secret\" being automagicly crypted.\n\nWhat I want is to mimic the PASSWORD function of mysql but much better,\nnot having to call a function.\n\nI just can't figure how to write the xx_crypt(opaque) returns opaque\nfunction.\n\nAny help available???\n\nTIA\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Fri, 26 May 2000 22:03:56 +0200 (MET DST)",
"msg_from": "Olivier PRENANT <[email protected]>",
"msg_from_op": true,
"msg_subject": "New Type"
},
{
"msg_contents": "\n> Hi all,\n> \n> I'm trying to create a type password; the goal is to have a table like:\n> \n> CREATE TABLE test (\n> username varchar,\n> pass passwd);\n> \n> insert into test values ('me','secret');\n> \n> and have \"secret\" being automagicly crypted.\n> \n> What I want is to mimic the PASSWORD function of mysql but much better,\n> not having to call a function.\n> \n> I just can't figure how to write the xx_crypt(opaque) returns opaque\n> function.\n> \n> Any help available???\n\nYes. Send me your code as a function that runs in a.out, I will\nconvert it. You will get in a form suitable for use as a contrib. You\nwill then be on your own. I'll do it in a split-second, but give me a\nday or two to respond; I am moving and can't read my mail often enough.\n\n--Gene\n\n\n",
"msg_date": "Sat, 27 May 2000 02:22:32 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: New Type "
}
] |