[ { "msg_contents": "On Wed, Jun 30, 1999 at 01:44:05PM -0700, Stephen Boyle wrote:\n> \n> Subject: Postgres Upsizing Tool for MSAccess 97\n> \n> \n> I have today set up a web page to allow download of pgupt. The tool written in Access 97 provides the following functionality:\n> \n\nA little digging around reveals the correct url to be:\n\nhttp://dspace.dial.pipex.com/boylesa/pgupt/pgupt.shtml\n\nyou may want to throw an index.html in there with a redirect.\n\n> \n> I would welcome any comments / suggestions (or even Bug Reports :-( )\n> \nConsider this Bug Report 0001 ;-)\n\nRoss\n", "msg_date": "Wed, 30 Jun 1999 10:17:32 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Postgres Upsizing Tool for MSAccess 97" }, { "msg_contents": "Subject: Postgres Upsizing Tool for MSAccess 97\n\n\nI have today set up a web page to allow download of pgupt. The tool, written in Access 97, provides the following functionality:\n \nCreation of SQL / DDL statements to recreate database structure.\nAutomatic creation of triggers in plpgsql to enforce relational integrity.\nAutomated data export / data import.\nCreation of shell scripts to create the pg database on the target os.\n \nIt's still under development (probably late beta would be fair).\n\nFor more details and download please go to http://dspace.dial.pipex.com/boylesa/pgupt/.\n \nI would welcome any comments / suggestions (or even Bug Reports :-( )\n\nRegards\nSteve Boyle\nRoselink Systems Limited\[email protected]", "msg_date": "Wed, 30 Jun 1999 13:44:05 -0700", "msg_from": "\"Stephen Boyle\" <[email protected]>", "msg_from_op": false, "msg_subject": "Postgres Upsizing Tool for MSAccess 97" } ]
[ { "msg_contents": "\nDarcy said: \n>Thus spake Michael Richards\n>> Here are some diffs that implement a function called TuplesAffected. It\n>> returns the number of tuples the last command affected, or 0 if the last\n>> command was a SELECT. I added it to the PgConnection because it contains\n>\n>Why not overload PGTuples() instead (assuming it doesn't already do this)?\n\nWhy can't I find PGTuples() anywhere? What/where is it?\n\nAnyway, I'm not sure I agree with TuplesAffected returning a 0 if \nPQcmdTuples() returns NULL. An UPDATE can return 0 if it doesn't\nupdate anything. It may be better to just wrap PQcmdTuples() in \npgdatabase.cc (since that's where PQntuples() is wrapped).\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Wed, 30 Jun 1999 14:37:42 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Patches to get number of tuples affected" }, { "msg_contents": "Thus spake Vince Vielhaber\n> >Why not overload PGTuples() instead (assuming it doesn't already do this)?\n> \n> Why can't I find PGTuples() anywhere? What/where is it?\n\nSorry, I meant PQntuples().\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 30 Jun 1999 15:13:54 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Patches to get number of tuples affected" } ]
[ { "msg_contents": "Don Baccus Wrote...\n> \n> At 09:09 PM 6/29/99 -0400, Bruce Momjian wrote:\n> \n> >> Just out of curiosity, I did a DUMP on the database while running a script\n> >> that ran a pile of updates. When I restored the database files, it was so\n> >> corrupted that I couldn't even run a select. vacuum just core dumped...\n> \n> >When you say DUMP, you mean pg_dump, right? Are you using 6.5?\n> \n> In his first note, he was proposing a scheme that would allow either\n> filesystem dumps or pg_dumps, which I think a couple of respondents\n> missed.\n> \n> So I suspect he means a filesystem dump in this case. Which of course\n> won't work in postgres, or in Oracle.\n\nISTR...\n\nOracle has a way to mark tablespaces for backup. Once you do this you can\nthen copy the data files, and then release the tablespace. I guess while\nthe backup is happening all the transactions go to the redo log (or is\nit the other log). I'm a bit fuzzy about the details but it is a nice\nfeature. You don't have to dump the database, just copy off the data\nfiles to tape or whatever.\n\nWhen you restore from backups there are some incantations required\nto bring things back into sync.\n\nSomeone else can undoubtedly explain this much more clearly.\n\n-- cary\nCary O'Brien\[email protected]\n\n\n\n", "msg_date": "Wed, 30 Jun 1999 15:48:01 -0400 (EDT)", "msg_from": "\"Cary O'Brien\" <[email protected]>", "msg_from_op": true, "msg_subject": "Oracle and hot backups" } ]
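For readers who have not seen the feature Cary describes, the Oracle incantation is roughly the following sketch (the tablespace name "users" is illustrative; the exact restore-time recovery steps are in the Oracle manuals):

	-- Freeze the datafile checkpoint so OS-level copies remain usable;
	-- changes made while copying are recovered from the redo log on restore.
	ALTER TABLESPACE users BEGIN BACKUP;
	-- ... copy the tablespace's datafiles to tape with tar/dd/cpio ...
	ALTER TABLESPACE users END BACKUP;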
[ { "msg_contents": "Hello,\n\nThe following is a patch which applies cleanly \nagainst the 6.5 release and which implements\nOracle's TRUNCATE statement.\n\nThere are a few things to note about this patch:\n\n1. It mirrors the operation of VACUUM in that it \n   acquires an Access Exclusive lock on the relation\n   being TRUNCATE'd (as well as its indexes, as \n   necessary).\n\n2. It currently does not update pg_class with either\n   the \"bogus\" stats of 1000 tuples or the actual \n   number of tuples (0).\n\n3. You must have pg_ownercheck() permissions to \n   TRUNCATE a table.\n\nI hope the PostgreSQL professionals can clean\nthis up sufficiently to be useful to the rest\nof the world (I find it increasingly useful as we\nadd more and more users).\n\nMarcus \"Mike\" Mascari ([email protected])\n\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com", "msg_date": "Wed, 30 Jun 1999 13:23:03 -0700 (PDT)", "msg_from": "Marcus Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "TRUNCATE statement patch" } ]
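Assuming the patch follows Oracle's TRUNCATE syntax, usage would presumably look like this (the table name is illustrative):

	-- Remove every row without a row-by-row scan; per the notes above,
	-- this takes an Access Exclusive lock and requires table ownership.
	TRUNCATE TABLE sales_archive;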
[ { "msg_contents": "\nWhat I would like to see (perhaps it's possible \nright now, or a workaround is available) is a method\nfor yielding an elog() message when the number of\ntuples to be returned is over a specified USER limit.\n\nThe thrust of this issue is that we are rolling out \na PostgreSQL database which is driven by a \nWeb environment for most users. However, some users\nwill have ODBC access to some of the tables. One\nof these tables has more than a million rows and is \na totally denormalized \"sales\" table for all\ntransactions performed. It is the main table\nthe various managers will be attacking with ODBC.\n\nHowever, what we would like to prevent is some\nnew user, unfamiliar with how to design aggregate\nand restrictive queries, from doing a SELECT * FROM sales\nand having 1 gig of data shipped across the network \nto their Excel, Access, or Crystal Report Writer\napplication. Instead, we would like to generate\nan elog() error in the backend like:\n\n\"You are allowed to return a maximum of 20,000 rows.\"\n\nor some such constraint, as a per-user attribute. We\ndon't want to restrict the tuples which can \nparticipate in their queries, just the number of rows\nreturned. So, SELECT SUM(price) FROM sales would\nbe okay. What we don't want is a query limit which,\nwhen reached, silently returns that number of rows\nwithout the user knowing they didn't get all the \ndata they requested.\n\nAny hints or tips to achieve this functionality?\n\nMarcus \"Mike\" Mascari \n([email protected])\n\n--- \"Jackson, DeJuan\" <[email protected]> wrote:\n> Try: select * from table LIMIT 100;\n> \n> > Hi\n> > \n> > We upgraded our system from 6.4 to the new 6.5\n> version. The set\n> > query_limit function is not working\n> > anymore in 6.5.\n> > \n> > db => set query_limit to '100';\n> > SET VARIABLE\n> > db => select * from table;\n> > \n> > statement is returning all records from the table.\n> What's wrong here?\n> > \n> > Herbie\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Wed, 30 Jun 1999 17:12:09 -0700 (PDT)", "msg_from": "Marcus Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [GENERAL] urgent: problems with query_limit" }, { "msg_contents": "Marcus \"Mike\" Mascari wrote:\n\n> What I would like to see (perhaps it's possible\n> right now, or a workaround is available) is a method\n> for yielding an elog() message when the number of\n> tuples to be returned is over a specified USER limit.\n>\n> [...]\n>\n> However, what we would like to prevent is some\n> new user, unfamiliar with how to design aggregate\n> and restrictive queries, from doing a SELECT * FROM sales\n> and having 1 gig of data shipped across the network\n> to their Excel, Access, or Crystal Report Writer\n> application. Instead, we would like to generate\n> an elog() error in the backend like:\n>\n> \"You are allowed to return a maximum of 20,000 rows.\"\n\n I don't think that this is easy to implement because it is in\n conflict with the general design of the PostgreSQL\n optimizer/executor combo.\n\n If possible (no sorting or grouping), the execution plan is\n set up in a way that the data is directly fed from the table\n to the client. So by the time the maximum is exceeded, the\n client has already received that many rows.\n\n And how should the database know how many rows it WILL send\n before collecting them all?
Well, in the case of a simple\n \"SELECT * FROM sales\" it could look at the statistics. But on\n a simple join between \"salesorg\" and \"sales\" it is only\n possible to guess how many rows this MIGHT produce (the\n optimizer uses this guessing to decide the joining and where\n indices might be helpful). But the exact number of rows is\n only known after the complete execution of the plan - and\n then they are already sent to the client.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 1 Jul 1999 02:47:56 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [GENERAL] urgent: problems with query_limit" } ]
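Given that the executor streams rows to the client before the total is known, one client-side approximation (a sketch, not something the backend enforces) is to request one row beyond the cap and treat a full result as an overflow:

	-- If 20001 rows come back, the query exceeded the 20,000-row policy;
	-- the application can then discard the result and raise its own error.
	SELECT * FROM sales LIMIT 20001;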
[ { "msg_contents": "I have added two new files to the /doc directory called KNOWN_BUGS and\nMISSING_FEATURES. The files say:\n\n\tPostgreSQL has a single combined bugs, missing features, and todo list\n\tsimply called TODO, in this directory. A current copy is always\n\tavailable on our web site.\n\nPart of the problem is that our TODO list really fills all three\nfunctions, and I can understand why some people are not looking in the\nTODO files for known bugs or missing features.\n\nI have no problem if someone wants to set up a more formal bug tracking\nsystem. Setting up the system is not hard; the hard part is keeping it\nmaintained. Right now, I have a TODO file, and I modify it with a text\neditor, and run a script that ftp's it to our web site.\n\nI am willing to put that file in a common location so other people can\nmake changes to the file. I could check it into the cvs tree every time\nas doc/TODO, so anyone with CVS access can make modifications to it.\n\nThat may be a very easy way to go.\n\nRight now, I only check it into CVS right before a release, but I can\ncvs commit every time a change is made.\n\nComments?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Jul 1999 01:36:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Bug tracking" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have no problem if someone wants to set up a more formal bug tracking\n> system. Setting up the system is not hard; the hard part is keeping it\n> maintained. Right now, I have a TODO file, and I modify it with a text\n> editor, and run a script that ftp's it to our web site.\n\n... which is a good low-tech, low-maintenance solution.\n\n> I am willing to put that file in a common location so other people can\n> make changes to the file. I could check it into the cvs tree every time\n> as doc/TODO, so anyone with CVS access can make modifications to it.\n\nI think if we do anything at all in this area, we should set our sights\nmuch higher than just opening up the TODO file for community\nmaintenance. The bug tracking systems that I've dealt with keep *far*\nmore than one line of info about each bug. Ideally, all the info that\nyou might currently try to find out by digging through the archives of\npgsql-bugs and pgsql-hackers would be in the bugtrack database: original\nreport, test cases, status, who's working on it, cross-links to similar\nbugs, etc.\n\nNew-feature requests might be kept track of in the same way, although\nI haven't seen anyone using a bugtrack system for that purpose.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Jul 1999 09:45:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking " }, { "msg_contents": "On Thu, Jul 01, 1999 at 09:45:13AM -0400, Tom Lane wrote:\n> I think if we do anything at all in this area, we should set our sights\n> much higher than just opening up the TODO file for community\n> maintenance. The bug tracking systems that I've dealt with keep *far*\n> more than one line of info about each bug.
> Ideally, all the info that\n> you might currently try to find out by digging through the archives of\n> pgsql-bugs and pgsql-hackers would be in the bugtrack database: original\n> report, test cases, status, who's working on it, cross-links to similar\n> bugs, etc.\n> \n> New-feature requests might be kept track of in the same way, although\n> I haven't seen anyone using a bugtrack system for that purpose.\n\nThe Debian bugtrack system is in fact used that way. They've got\na 'severity' field, and one of the severities is 'wishlist'. (I think\nthe full list is 'grave', 'important', 'normal', and 'wishlist'.) The\nBTS doesn't have the prettiest web pages, but it seems pretty robust\n(lots of Debian users and developers worldwide using it, >40000 bugs\ntracked). All functions are handled by parsing emails. In fact, it'd be\nstraightforward to just continue using the existing email lists, and CC:\nthe bugtrack system. That way, the collection of emails discussing a\nbug would be archived separately.\n\nThe interface is at:\n\nhttp://www.debian.org/Bugs/\n\nIn fact, since there is a Debian PostgreSQL package, maintained by \n Oliver Elphick (thanks Oliver!), the system's already available:\n\nhttp://www.debian.org/Bugs/db/pa/lpostgresql.html\n\nIf the developers want to try it out, anybody is free to post bugs to\nthe system, though technically they're out of scope if you're not running\na Debian Linux install. In fact, one 'state' for an open bug is 'forwarded\nto upstream developers'.\n\nThe whole system is designed around tracking bugs against a collection of\nmore or less loosely connected packages. The BTS knows about some 'packages'\nthat aren't actually packages, such as the web site, or the ftp site. There\nare facilities for bug maintenance, like reassigning bugs from one package\nto another.\n\nHmm, I've just pulled down and installed the debbugs package - seems to\nbe a pile-o-perl sort of thing, so it should run on just about any\nunix. It's using a flat-file backend (horrors!). Moving to a DB is on\nthe debbugs TODO. One reason given for putting this package together is\nto allow bugs to be filed against it so the developer can keep track\nof requested features. Kind of the ultimate in eat your own dogfood,\nI suppose.\n\nRoss\n", "msg_date": "Thu, 1 Jul 1999 13:57:45 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking" } ]
[ { "msg_contents": "Not sure if this is the right place to mention it, but I just tried to\nsearch hackers archive to find a tolower(varchar) function (I'm sure I saw\nit go by!), but:\n\n Search results for 'tolower'\n\n[htdig.gif] Search results for 'tolower'\n _________________________________________________________________\n \n Match: [All____] Format: [Long_] Sort by: [Score________]\n Refine search: tolower_______________________ Search \n _________________________________________________________________ \n \n Documents 1 - 10 of 36 matches. More * 's indicate a better match.\n _________________________________________________________________\n _________________________________________________________________\n \n Pages:\n 1 2 3 4 next \n _________________________________________________________________\n \n [htdig.gif] ht://Dig 3.1.1\n\n\nie., the results aren't actually being displayed!\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 1 Jul 1999 11:34:20 +0100 (BST)", "msg_from": "\"Patrick Welche\" <[email protected]>", "msg_from_op": true, "msg_subject": "Web archive searcher" } ]
[ { "msg_contents": "\tGreetings:\n\t\n\tI'm having the peculiar problem that Postgres seems unable to deal with microseconds \nin a time field. I have the field defined as type time, and postgres accepts input of the form \n12:22:13.41 but appears to lose the microseconds within the database itself? \n\tIs there some style variable that must be set, or have I simply missed something in \nthe documentation?\n\t\n\tThanks for your help in advance.\n\t\n\tCollin Lynch.\n", "msg_date": "Thu, 1 Jul 1999 16:48:53 -0400 (EDT)", "msg_from": "\"Collin F. Lynch\" <[email protected]>", "msg_from_op": true, "msg_subject": "Time and microseconds?" }, { "msg_contents": "At 23:48 +0300 on 01/07/1999, Collin F. Lynch wrote:\n\n\n>\tI'm having the peculiar problem that Postgres seems unable to deal\n>with microseconds in a time field. I have the field defined as type time,\n>and postgres accepts input of the form 12:22:13.41 but appears to lose the\n>microseconds within the database itself?\n\nIt's just the output of the time datatype. This datatype is a bit odd. You\ncan see that the microseconds are kept. Compare:\n\ntesting=> select '12:05:11.04'::time;\n?column?\n--------\n12:05:11\n(1 row)\n\ntesting=> select datetime( 'today', '12:05:11.04'::time );\ndatetime\n-------------------------------\nSun Jul 04 12:05:11.04 1999 IDT\n(1 row)\n\nSo you see, the milliseconds are actually kept. It's just that they don't show in\nthe normal display form.\n\nThe TIME datatype is supposed to be compatible with SQL92, but it's, well,\nnot exactly. Anyway, the default precision for time is 0, that is, no\nfractions of seconds unless stated so explicitly. However, since it stores\nmilliseconds, it defies that definition.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Sun, 4 Jul 1999 13:10:17 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Time and microseconds?" }, { "msg_contents": "Herouth Maoz <[email protected]> writes:\n> The TIME datatype is supposed to be compatible with SQL92, but it's, well,\n> not exactly. Anyway, the default precision for time is 0, that is, no\n> fractions of seconds unless stated so explicitly. However, since it stores\n> milliseconds, it defies that definition.\n\nActually, datetime uses a float8 to store seconds-since-some-epoch-\nor-other (from a quick look at the sources, it looks like datetime 0\nis midnight GMT 1/1/2000). That means the precision varies depending\non how far away from time zero you are talking about. Currently,\nwith less than 16 million seconds left until the epoch, a standard\nIEEE float8 will have about 28 bits to spare to the right of the\nbinary point, giving us nominal precision not much worse than\nnanoseconds. For a more reasonable time range, say up to 100 years\nfrom the epoch, you could expect microsecond precision.\n\nThe default output routine for type datetime doesn't seem to want\nto print more than 2 digits after the decimal point, but you can extract\nthe full fractional precision with datetime_part(\"second\", ...).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Jul 1999 10:18:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Time and microseconds? " } ]
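Putting Herouth's and Tom's examples together, a quick check that the fraction survives might look like this sketch (date_part is assumed here to be the SQL-level spelling of the datetime_part function Tom mentions):

	SELECT datetime('today', '12:05:11.04'::time);
	-- Sun Jul 04 12:05:11.04 1999 IDT
	SELECT date_part('second', datetime('today', '12:05:11.04'::time));
	-- 11.04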
[ { "msg_contents": "Hello all,\n\nI got the following result.\nIs this a FAQ?\n\ndrop table int2t;\ncreate table int2t (id int2 primary key);\n\nexplain select * from int2t where id=1;\n NOTICE: QUERY PLAN:\n\n Seq Scan on int2t (cost=43.00 rows=2 width=2) \n\nexplain select * from int2t where id=1::int2;\n NOTICE: QUERY PLAN:\n\n Index Scan using int2t_pkey on int2t (cost=2.05 rows=2 width=2) \n\nexplain select * from int2t where id='1';\n NOTICE: QUERY PLAN:\n\n Index Scan using int2t_pkey on int2t (cost=2.05 rows=2 width=2) \n\nIs this the right behavior?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 2 Jul 1999 09:14:10 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimization FAQ ?" } ]
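Hiroshi's results illustrate the general workaround until the optimizer coerces constants itself: make the literal match the indexed column's type, either with an explicit cast or an untyped quoted literal:

	SELECT * FROM int2t WHERE id = 1;        -- seq scan: the bare constant is int4
	SELECT * FROM int2t WHERE id = 1::int2;  -- index scan
	SELECT * FROM int2t WHERE id = '1';      -- index scan: the string literal is coerced to int2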
[ { "msg_contents": "Too many questions have been received about adding a MySQL comparison to the \nPostgres web page.\n\nMySQL really is the closest product to Postgres \nfrom a marketing point of view.\n\nIMHO, we need to add a bold red comment about MySQL, at least:\n\n MySQL is not an RDBMS, because it lacks functionality supported by all other\ndatabases included in the comparison. \n\nThis is a quote from the MySQL on-line documentation:\n\n 5.3 Functionality missing from MySQL \n 5.3.1 Sub-selects \n 5.3.2 SELECT INTO TABLE \n 5.3.3 Transactions \n 5.3.4 Stored procedures and triggers \n 5.3.5 Foreign Keys \n 5.3.5.1 Reasons NOT to use foreign keys \n 5.3.6 Views \n 5.3.7 `--' as the start of a comment \n \n\nThis is my variant of the comparison (based on the MySQL documentation): \n\nFreeSource: Yes\nAnySource: yes\nOnline documentation: yes\nLicense: free for noncommercial use\n\nClient-server: yes\nMulti-threaded: yes\nShared SQL cache: ?\nRow-level locking: no\nOnline backup: yes\nOnline recovery: no\nParallel Query: no\nRead only db: no\nMultiple Index type: no?\nUnique indices: yes\nMulticolumn indices: yes\n\nSQL92: no\nODBC: yes\n\nC api: yes\nC++ api: yes\nJava: yes\nPython: yes\nPerl 4: ?\nPerl 5: yes\nTcl: yes\n\nFreeBSD 3.x: yes\nNetBSD: yes\nLinux 2.0+ : with LinuxThreads\nLinux Alpha: ?\nLinux Sparc: ?\nSGI Irix: yes\nSolaris Sparc: yes\nSolaris x86: yes\nSunOS 4: with MIT\nHP/UX: yes\nBSDI: yes\nDG-UX: no\nAIX: yes\nNextstep: no\nUltrix 4: no\nOSF1: no? (Tru64 ???)\nSVR4.2: ?\nSCO open server: with FSU threads\nOS/2: yes\nWin32: yes (shareware)\nNovell Netware: no\nMacOS: no \n\nSCO UW7: yes\n\n* some of them require the MIT threads package\n\n\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Fri, 02 Jul 1999 12:27:33 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": true, "msg_subject": "MySQL comparison ..." } ]
[ { "msg_contents": "Hi. I've posted RPMs for Postgres v6.5 at\n\n ftp://postgresql.org/pub/RPMS.beta/postgresql*.rpm\nand\n ftp://postgresql.org/pub/SRPMS.beta/postgresql*.rpm\n\nThe \"beta\" in the directory name is for the RPM itself, not Postgres,\nand once one or a few people confirm that these RPMs do the Right\nThing then I'll change the directories to be RPMS/ and SRPMS/.\n\nI've included Tom Lane's patches for building with shared libraries\nand for fixing an rtree indexing problem.\n\nChanges from previous RPMs built by Red Hat:\n\n1) All available interfaces are built and packaged. These include C,\nC++, tcl, perl, python, ODBC, and Java.\n\n2) The \"main package\" contains the basic client libraries and apps,\nand *all* of the documentation. This will allow you to use a remote\nserver, and to read about everything, by installing one basic package.\n\n3) The backend server is in a separate package \"postgresql-server\". In\nprevious RPMs the server was in the main package, and the clients were\nin \"postgresql-clients\", but the clients package was required by all\nof the other packages. I think the new organization behaves better for\nsomeone who just looks at file names to figure out what they need to\ndo (that's just about everyone ;).\n\nKnown problems/features to solve later:\n\na) The perl installation should put libraries into\narchitecture-specific areas on the target machine. It doesn't, and the\nsolution probably involves packaging most of the perl source tree into\nthe binary rpm, then building the package on the target. This is not a\ncommon way to use RPM.\n\nb) The python installation is very specific to python-1.5. The path\nnames are hardcoded, and I'm not sure the best way to make this\ntransparent. Perl has a mechanism for telling you the current library\narea; don't know if python has something similar.\n\nc) Clean up the postgresql.init startup file to make it easier to see\nhow to set typical parameters like DB location.\n\nTo install and use the packages:\n\ni1) If you already have Postgres installed, make sure you do a\npg_dumpall on your installation. Use something like\n pg_dumpall > machine.pg_dumpall\nFor a simple installation, I've found this to be *completely*\ntransparent; much easier than I've heard it was in the past. Then shut\ndown the old server. If you are upgrading from v6.4.2 or earlier, use\nthe \"-z\" flag with pg_dumpall.\n\ni2) Remove any old Postgres RPMs using\n rpm -e <packagename>\npostgresql-clients is no longer part of the distro, so that is listed\nas a \"conflict\" and you can't just upgrade without removing it first.\n\ni3) Install the new RPMs. Use\n rpm -Uvh <packagenames>\n\ni4) The default location for the data area is /var/lib/pgsql/data/.\nMake sure this is cleaned up or moved aside by renaming it so that you\ncan initialize a new database area.\n\ni5) Run initdb. Point to your new database area with something like \n initdb --pgdata=/var/lib/pgsql\n\ni6) Start up your new server.\n\ni7) Reload your database. Use something like\n psql < machine.pg_dumpall\n\ni8) Let me know what went wrong ;)\n\nPlease let me know how it goes, so we can nail the RPM building\nprocess for this release. Oh, I built this using a RH5.2 machine, so\nsomeone running RH6.0 might want to verify that this installs and\nworks there too. 
afaik it should be OK, unless the python version\nnumber has bumped.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 02 Jul 1999 14:56:51 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "First 6.5 RPMs" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> Hi. I've posted RPMs for Postgres v6.5 at\n> \n> Please let me know how it goes, so we can nail the RPM building\n> process for this release. Oh, I built this using a RH5.2 machine, so\n> someone running RH6.0 might want to verify that this installs and\n> works there too. afaik it should be OK, unless the python version\n> number has bumped.\n\nI successfully installed them here on one of my Mandrake 5.3 machines. \nI'm going to let AOLserver bang on that database for the next 72 hours\nor so -- everything went smoothly. I installed the main package, the\ndevel package (required to build AOLserver's postgres client), and the\nserver package. \n\nI will build rh6 rpms this weekend and place them on my server Monday\n(once I reenable anon-ftp on my utility box), as I don't have a 6.0 box\nhere to build on, yet. \n\nThanks for the work!\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Fri, 02 Jul 1999 12:35:27 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First 6.5 RPMs" }, { "msg_contents": "Lamar Owen wrote:\n> \n> I successfully installed them here on one of my Mandrake 5.3 machines.\n\nGreat!\n\n> I'm going to let AOLserver bang on that database for the next 72 hours\n> or so -- everything went smoothly. I installed the main package, the\n> devel package (required to build AOLserver's postgres client), and the\n> server package.\n\nI'm interested in how the other interface packages installed. Perhaps\nyou can install the tcl package and fire up pgaccess (should be\ntrivial)?\n\n> I will build rh6 rpms this weekend and place them on my server Monday\n> (once I reenable anon-ftp on my utility box), as I don't have a 6.0 box\n> here to build on, yet.\n\nCan you try installing the rpms on your RH6.0 box before you build and\ninstall them yourself? I'm interested in whether they behave (I think\nthey should...).\n\nAlso, before we post too many variants, perhaps we can work out the\nportability issues with perl and python; then we can probably rely on\na single solid build on a fresh machine like your RH6.0 box.\n\nThanks for the help.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 02 Jul 1999 16:50:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First 6.5 RPMs" }, { "msg_contents": "Thomas Lockhart wrote:\n \n> I'm interested in how the other interface packages installed. Perhaps\n> you can install the tcl package and fire up pgaccess (should be\n> trivial)?\n\nI did -- and now I'm going to make pgaccess and the tcl client part of\nmy standard toolkit! I hadn't had time before to try out pgaccess --\nbut, it seems to work fine, once I remembered to set \"-i\" on the\npostmaster line. BTW, while we're on that subject, might I suggest\nthat, since the default pg_hba.conf is for localhost only access, even\nover tcp/ip, the \"-i\" is made the default in the postgresql init\nscript?? It seems that several programs need it -- pgaccess included.\n\n> Can you try installing the rpms on your RH6.0 box before you build and\n> install them yourself? 
I'm interested in whether they behave (I think\n> they should...).\n\nHmmm... I'll try that first -- the problem is going to be rpms built\nagainst glibc 2.0.7 (RH52) running against glibc 2.1 (RH6) I'll blow in\nthe binaries first -- if there are oddities, I'll build fresh glibc2.1\nrpms and see what comes out. I will note that I have the compat-libs\ninstalled.\n\n> Also, before we post too many variants, perhaps we can work out the\n> portability issues with perl and python; then we can probably rely on\n> a single solid build on a fresh machine like your RH6.0 box.\n\nThose two are going to be fun. I'll let you know the results of my\nRedHat 6 test at home.\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Fri, 02 Jul 1999 14:34:13 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: First 6.5 RPMs" }, { "msg_contents": "Hi,\n\nI've encountered a strange behavior of the VACUUM ANALYZE command.\nIt seems that this command works only if the size of a text field\ndoes not exceed approximately 4050 bytes! So the real limit on \ntuple size is a half of the max tuple size. I've checked this effect\non Postgres 6.4.2 (Sparc Solaris 2.5.1) and Postgres 6.5 (SUSE 6.1 \nLinux, kernel 2.2.5). Is this a bug or known feature?\nThe python script used to reproduce this problem and results for \nv6.4.2 and v6.5 are follows.\n\nRegards,\nMikhail\n\n===================================================================\n#! /usr/bin/env python\n\nimport sys, pg\n\ncon = pg.connect('test', '', 5432, None, None)\n\ntry:\n con.query(\"CREATE TABLE tmp (t text)\")\nexcept:\n pass\n\nfor i in range(100) :\n s = 'X'*(4050 +i)\n print \"size= %d\" % len(s)\n con.query(\"DROP TABLE tmp\")\n con.query(\"CREATE TABLE tmp (t text)\")\n con.query(\"INSERT INTO tmp (t) VALUES ('%s')\" % s)\n try:\n con.query(\"VACUUM ANALYZE tmp\")\n except pg.error,msg:\n print msg\n sys.exit()\n\nprint \"OK\"\n===================================================================\n\nSunOS luc1 5.5.1 Generic_105428-01 sun4u sparc SUNW,Ultra-5_10\npython vacuum_chk.py\nsize= 4050\nsize= 4051\nsize= 4052\nsize= 4053\nsize= 4054\nsize= 4055\nsize= 4056\nsize= 4057\nERROR: Tuple is too big: size 8184\n\n===================================================================\n\nLinux luc2 2.2.5 #4 Tue Apr 13 16:51:36 MEST 1999 i686 unknown\nvacuum_chk.py\nsize= 4050\nsize= 4051\nsize= 4052\nsize= 4053\nsize= 4054\nsize= 4055\nsize= 4056\nsize= 4057\nsize= 4058\nsize= 4059\nsize= 4060\nsize= 4061\nsize= 4062\nsize= 4063\nsize= 4064\nsize= 4065\nERROR: Tuple is too big: size 8188\n\n===================================================================\n", "msg_date": "Fri, 02 Jul 1999 14:43:28 -0400", "msg_from": "Mikhail Terekhov <[email protected]>", "msg_from_op": false, "msg_subject": "VACUUM ANALYZE and tuple size" } ]
[ { "msg_contents": "> Thomas, can you check this out and let me know if it helps you with\n> conversion:\n> http://xtalk.price.ru/SGML/TEItools/index-en.html\n\nLooks like a possibility, but I didn't see direct *roff support. \n\nI started looking at docbook2man, and got good initial results after\nfixing some bugs/problems. There are still some things to fix of\ncourse (like the file name which is generated), but look at these two\nsample man pages. For some reason, the ABORT man page puts double\nquotes around some headers, but the SELECT page looks better...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California", "msg_date": "Fri, 02 Jul 1999 17:40:32 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sgml tool" }, { "msg_contents": "> > Thomas, can you check this out and let me know if it helps you with\n> > conversion:\n> > http://xtalk.price.ru/SGML/TEItools/index-en.html\n> \n> Looks like a possibility, but I didn't see direct *roff support. \n> \n> I started looking at docbook2man, and got good initial results after\n> fixing some bugs/problems. There are still some things to fix of\n> course (like the file name which is generated), but look at these two\n> sample man pages. For some reason, the ABORT man page puts double\n> quotes around some headers, but the SELECT page looks better...\n> \n\nWow, that select manual page does look good.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jul 1999 14:01:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sgml tool" }, { "msg_contents": "> Wow, that select manual page does look good.\n\nOK, here is a first cut at new man pages. Things aren't perfect and\nneed improving, but this is a good start.\n\nI'll be going through the ref/*.sgml files to change formatting which\nseems to give docbook2man trouble. I'll also be massaging docbook2man\nto fix up its behavior. A few man pages weren't generated at all, but\nthe symptom was similar to things I've already fixed so I think that I\ncan fix these too.\n\nAfter I've gotten these things done, then we should think about taking\nthe existing old man page content and making sure that it is all in\nthe sgml files somewhere (not everything should stay in the reference\npages, but it should show up somewhere: ref pages, User's Guide, or\nAdmin Guide are likely candidates).\n\nWe'll have new man pages for v6.6!\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California", "msg_date": "Fri, 02 Jul 1999 21:26:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sgml tool" }, { "msg_contents": "(enclosure bounced, so posting the tar file on the patches list\ninstead)\n\n> Wow, that select manual page does look good.\n\nOK, here is a first cut at new man pages. Things aren't perfect and\nneed improving, but this is a good start.\n\nI'll be going through the ref/*.sgml files to change formatting which\nseems to give docbook2man trouble. I'll also be massaging docbook2man\nto fix up its behavior. 
A few man pages weren't generated at all, but\nthe symptom was similar to things I've already fixed so I think that I\ncan fix these too.\n\nAfter I've gotten these things done, then we should think about taking\nthe existing old man page content and making sure that it is all in\nthe sgml files somewhere (not everything should stay in the reference\npages, but it should show up somewhere: ref pages, User's Guide, or\nAdmin Guide are likely candidates).\n\nWe'll have new man pages for v6.6!\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Fri, 02 Jul 1999 21:45:51 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sgml tool" } ]
[ { "msg_contents": "> I've encountered a strange behavior of the VACUUM ANALYZE command.\n> It seems that this command works only if the size of a text field\n> does not exceed approximately 4050 bytes! So the real limit on \n> tuple size is a half of the max tuple size. I've checked this effect\n> on Postgres 6.4.2 (Sparc Solaris 2.5.1) and Postgres 6.5 (SUSE 6.1 \n> Linux, kernel 2.2.5). Is this a bug or known feature?\n> The python script used to reproduce this problem and results for \n> v6.4.2 and v6.5 are follows.\n> size= 4059\n> size= 4060\n> size= 4061\n> size= 4062\n> size= 4063\n> size= 4064\n> size= 4065\n> ERROR: Tuple is too big: size 8188\n\nI have always suspected these default values where wrong, but no one\nreported it as a bug.\n\nHere is a patch for 6.5 which will prevent the creation of these too big\ntuples in certain cases. Seems we should also check for max length at\nthe time we create the table, but it doesn't look like there is any code\nto do that yet.\n\nI am not going to apply this to 6.5.1 because it may have some unknown\nside-affects.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jul 1999 19:43:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuple too big" }, { "msg_contents": "> I have always suspected these default values where wrong, but no one\n> reported it as a bug.\n> \n> Here is a patch for 6.5 which will prevent the creation of these too big\n> tuples in certain cases. Seems we should also check for max length at\n> the time we create the table, but it doesn't look like there is any code\n> to do that yet.\n> \n> I am not going to apply this to 6.5.1 because it may have some unknown\n> side-affects.\n\nOops, forgot the patch. If people want this in 6.5.1, let me know. I\nam going to try and add real tuple check in the places that need it.\n\n---------------------------------------------------------------------------\n\n\n? src/Makefile.custom\n? src/config.log\n? src/log\n? src/config.cache\n? src/config.status\n? src/GNUmakefile\n? src/Makefile.global\n? src/backend/fmgr.h\n? src/backend/parse.h\n? src/backend/postgres\n? src/backend/global1.bki.source\n? src/backend/local1_template1.bki.source\n? src/backend/global1.description\n? src/backend/local1_template1.description\n? src/backend/bootstrap/bootparse.c\n? src/backend/bootstrap/bootstrap_tokens.h\n? src/backend/bootstrap/bootscanner.c\n? src/backend/catalog/genbki.sh\n? src/backend/catalog/global1.bki.source\n? src/backend/catalog/global1.description\n? src/backend/catalog/local1_template1.bki.source\n? src/backend/catalog/local1_template1.description\n? src/backend/port/Makefile\n? src/backend/utils/Gen_fmgrtab.sh\n? src/backend/utils/fmgr.h\n? src/backend/utils/fmgrtab.c\n? src/bin/cleardbdir/cleardbdir\n? src/bin/createdb/createdb\n? src/bin/createlang/createlang\n? src/bin/createuser/createuser\n? src/bin/destroydb/destroydb\n? src/bin/destroylang/destroylang\n? src/bin/destroyuser/destroyuser\n? src/bin/initdb/initdb\n? src/bin/initlocation/initlocation\n? src/bin/ipcclean/ipcclean\n? src/bin/pg_dump/Makefile\n? src/bin/pg_dump/pg_dump\n? src/bin/pg_id/pg_id\n? src/bin/pg_passwd/pg_passwd\n? src/bin/pg_version/Makefile\n? src/bin/pg_version/pg_version\n? src/bin/pgtclsh/mkMakefile.tcldefs.sh\n? src/bin/pgtclsh/mkMakefile.tkdefs.sh\n? 
src/bin/pgtclsh/Makefile.tkdefs\n? src/bin/pgtclsh/Makefile.tcldefs\n? src/bin/pgtclsh/pgtclsh\n? src/bin/pgtclsh/pgtksh\n? src/bin/psql/Makefile\n? src/bin/psql/psql\n? src/include/version.h\n? src/include/config.h\n? src/interfaces/ecpg/lib/Makefile\n? src/interfaces/ecpg/lib/libecpg.so.3.0.0\n? src/interfaces/ecpg/preproc/ecpg\n? src/interfaces/libpgtcl/Makefile\n? src/interfaces/libpgtcl/libpgtcl.so.2.0\n? src/interfaces/libpq/Makefile\n? src/interfaces/libpq/libpq.so.2.0\n? src/interfaces/libpq++/Makefile\n? src/interfaces/libpq++/libpq++.so.3.0\n? src/interfaces/odbc/GNUmakefile\n? src/interfaces/odbc/Makefile.global\n? src/lextest/lex.yy.c\n? src/lextest/lextest\n? src/pl/plpgsql/src/Makefile\n? src/pl/plpgsql/src/mklang.sql\n? src/pl/plpgsql/src/pl_gram.c\n? src/pl/plpgsql/src/pl.tab.h\n? src/pl/plpgsql/src/pl_scan.c\n? src/pl/plpgsql/src/libplpgsql.so.1.0\n? src/pl/tcl/mkMakefile.tcldefs.sh\n? src/pl/tcl/Makefile.tcldefs\n? src/template/linux_m68k\nIndex: src/backend/access/heap/stats.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/access/heap/stats.c,v\nretrieving revision 1.15\ndiff -c -r1.15 stats.c\n*** src/backend/access/heap/stats.c\t1999/02/13 23:14:25\t1.15\n--- src/backend/access/heap/stats.c\t1999/07/02 23:34:45\n***************\n*** 16,21 ****\n--- 16,22 ----\n */\n \n #include <stdio.h>\n+ #include <time.h>\n \n #include <postgres.h>\n \nIndex: src/backend/catalog/index.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/catalog/index.c,v\nretrieving revision 1.79\ndiff -c -r1.79 index.c\n*** src/backend/catalog/index.c\t1999/06/19 04:54:11\t1.79\n--- src/backend/catalog/index.c\t1999/07/02 23:34:47\n***************\n*** 20,25 ****\n--- 20,26 ----\n #include \"postgres.h\"\n \n #include \"access/genam.h\"\n+ #include \"access/htup.h\"\n #include \"access/heapam.h\"\n #include \"access/istrat.h\"\n #include \"access/xact.h\"\n***************\n*** 56,62 ****\n /*\n * macros used in guessing how many tuples are on a page.\n */\n! #define AVG_TUPLE_SIZE 8\n #define NTUPLES_PER_PAGE(natts) (BLCKSZ/((natts)*AVG_TUPLE_SIZE))\n \n /* non-export function prototypes */\n--- 57,63 ----\n /*\n * macros used in guessing how many tuples are on a page.\n */\n! #define AVG_TUPLE_SIZE MinTupleSize\n #define NTUPLES_PER_PAGE(natts) (BLCKSZ/((natts)*AVG_TUPLE_SIZE))\n \n /* non-export function prototypes */\nIndex: src/backend/commands/copy.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/commands/copy.c,v\nretrieving revision 1.80\ndiff -c -r1.80 copy.c\n*** src/backend/commands/copy.c\t1999/06/12 20:41:25\t1.80\n--- src/backend/commands/copy.c\t1999/07/02 23:34:52\n***************\n*** 1073,1079 ****\n \t}\n }\n \n! #define EXT_ATTLEN 5*BLCKSZ\n \n /*\n returns 1 is c is in s\n--- 1073,1079 ----\n \t}\n }\n \n! #define EXT_ATTLEN\t(5 * BLCKSZ)\n \n /*\n returns 1 is c is in s\nIndex: src/backend/commands/vacuum.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/commands/vacuum.c,v\nretrieving revision 1.109\ndiff -c -r1.109 vacuum.c\n*** src/backend/commands/vacuum.c\t1999/06/11 09:35:08\t1.109\n--- src/backend/commands/vacuum.c\t1999/07/02 23:35:02\n***************\n*** 624,630 ****\n \t\t\t\tempty_end_pages;\n \tSize\t\tfree_size,\n \t\t\t\tusable_free_size;\n! 
\tSize\t\tmin_tlen = MAXTUPLEN;\n \tSize\t\tmax_tlen = 0;\n \tint32\t\ti;\n \tstruct rusage ru0,\n--- 624,630 ----\n \t\t\t\tempty_end_pages;\n \tSize\t\tfree_size,\n \t\t\t\tusable_free_size;\n! \tSize\t\tmin_tlen = MaxTupleSize;\n \tSize\t\tmax_tlen = 0;\n \tint32\t\ti;\n \tstruct rusage ru0,\nIndex: src/backend/optimizer/path/_deadcode/xfunc.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/optimizer/path/_deadcode/xfunc.c,v\nretrieving revision 1.4\ndiff -c -r1.4 xfunc.c\n*** src/backend/optimizer/path/_deadcode/xfunc.c\t1999/05/25 22:41:36\t1.4\n--- src/backend/optimizer/path/_deadcode/xfunc.c\t1999/07/02 23:35:08\n***************\n*** 20,25 ****\n--- 20,26 ----\n \n #include \"postgres.h\"\n \n+ #include \"access/htup.h\"\n #include \"access/heapam.h\"\n #include \"catalog/pg_language.h\"\n #include \"catalog/pg_proc.h\"\n***************\n*** 1094,1100 ****\n \tRelOptInfo\touterrel = get_parent((Path) get_outerjoinpath(joinnode));\n \tRelOptInfo\tinnerrel = get_parent((Path) get_innerjoinpath(joinnode));\n \tCount\t\touterwidth = get_width(outerrel);\n! \tCount\t\touters_per_page = ceil(BLCKSZ / (outerwidth + sizeof(HeapTupleData)));\n \n \tif (IsA(joinnode, HashPath))\n \t{\n--- 1095,1101 ----\n \tRelOptInfo\touterrel = get_parent((Path) get_outerjoinpath(joinnode));\n \tRelOptInfo\tinnerrel = get_parent((Path) get_innerjoinpath(joinnode));\n \tCount\t\touterwidth = get_width(outerrel);\n! \tCount\t\touters_per_page = ceil(BLCKSZ / (outerwidth + MinTupleSize));\n \n \tif (IsA(joinnode, HashPath))\n \t{\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.84\ndiff -c -r2.84 gram.y\n*** src/backend/parser/gram.y\t1999/06/07 14:28:25\t2.84\n--- src/backend/parser/gram.y\t1999/07/02 23:35:24\n***************\n*** 36,41 ****\n--- 36,42 ----\n #include <ctype.h>\n \n #include \"postgres.h\"\n+ #include \"access/htup.h\"\n #include \"nodes/parsenodes.h\"\n #include \"nodes/print.h\"\n #include \"parser/gramparse.h\"\n***************\n*** 3384,3391 ****\n \n \t\t\t\t\tif ($3 < 1)\n \t\t\t\t\t\telog(ERROR,\"length for '%s' type must be at least 1\",$1);\n! \t\t\t\t\telse if ($3 > BLCKSZ - 128)\n! \t\t\t\t\t\telog(ERROR,\"length for type '%s' cannot exceed %d\",$1, BLCKSZ-128);\n \n \t\t\t\t\t/* we actually implement this sort of like a varlen, so\n \t\t\t\t\t * the first 4 bytes is the length. (the difference\n--- 3385,3393 ----\n \n \t\t\t\t\tif ($3 < 1)\n \t\t\t\t\t\telog(ERROR,\"length for '%s' type must be at least 1\",$1);\n! \t\t\t\t\telse if ($3 > MaxTupleSize)\n! \t\t\t\t\t\telog(ERROR,\"length for type '%s' cannot exceed %d\",$1,\n! \t\t\t\t\t\t\tMaxTupleSize);\n \n \t\t\t\t\t/* we actually implement this sort of like a varlen, so\n \t\t\t\t\t * the first 4 bytes is the length. (the difference\nIndex: src/backend/storage/page/bufpage.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/storage/page/bufpage.c,v\nretrieving revision 1.22\ndiff -c -r1.22 bufpage.c\n*** src/backend/storage/page/bufpage.c\t1999/05/25 16:11:25\t1.22\n--- src/backend/storage/page/bufpage.c\t1999/07/02 23:35:28\n***************\n*** 45,51 ****\n \n \tAssert(pageSize == BLCKSZ);\n \tAssert(pageSize >\n! 
\t\t specialSize + sizeof(PageHeaderData) - sizeof(ItemIdData));\n \n \tspecialSize = DOUBLEALIGN(specialSize);\n \n--- 45,51 ----\n \n \tAssert(pageSize == BLCKSZ);\n \tAssert(pageSize >\n! \t\t\t specialSize + sizeof(PageHeaderData) - sizeof(ItemIdData));\n \n \tspecialSize = DOUBLEALIGN(specialSize);\n \nIndex: src/backend/utils/adt/varchar.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/adt/varchar.c,v\nretrieving revision 1.46\ndiff -c -r1.46 varchar.c\n*** src/backend/utils/adt/varchar.c\t1999/05/25 16:12:21\t1.46\n--- src/backend/utils/adt/varchar.c\t1999/07/02 23:35:29\n***************\n*** 14,19 ****\n--- 14,20 ----\n #include <stdio.h>\t\t\t\t/* for sprintf() */\n #include <string.h>\n #include \"postgres.h\"\n+ #include \"access/htup.h\"\n #include \"utils/array.h\"\n #include \"utils/builtins.h\"\n #include \"catalog/pg_type.h\"\n***************\n*** 81,88 ****\n \telse\n \t\tlen = atttypmod - VARHDRSZ;\n \n! \tif (len > BLCKSZ - 128)\n! \t\telog(ERROR, \"bpcharin: length of char() must be less than %d\", BLCKSZ - 128);\n \n \tresult = (char *) palloc(atttypmod);\n \tVARSIZE(result) = atttypmod;\n--- 82,90 ----\n \telse\n \t\tlen = atttypmod - VARHDRSZ;\n \n! \tif (len > MaxTupleSize)\n! \t\telog(ERROR, \"bpcharin: length of char() must be less than %d\",\n! \t\t\t\tMaxTupleSize);\n \n \tresult = (char *) palloc(atttypmod);\n \tVARSIZE(result) = atttypmod;\n***************\n*** 151,158 ****\n \n \trlen = len - VARHDRSZ;\n \n! \tif (rlen > BLCKSZ - 128)\n! \t\telog(ERROR, \"bpchar: length of char() must be less than %d\", BLCKSZ - 128);\n \n #ifdef STRINGDEBUG\n \tprintf(\"bpchar- convert string length %d (%d) ->%d (%d)\\n\",\n--- 153,161 ----\n \n \trlen = len - VARHDRSZ;\n \n! \tif (rlen > MaxTupleSize)\n! \t\telog(ERROR, \"bpchar: length of char() must be less than %d\",\n! \t\t\tMaxTupleSize);\n \n #ifdef STRINGDEBUG\n \tprintf(\"bpchar- convert string length %d (%d) ->%d (%d)\\n\",\n***************\n*** 332,339 ****\n \tif (atttypmod != -1 && len > atttypmod)\n \t\tlen = atttypmod;\t\t/* clip the string at max length */\n \n! \tif (len > BLCKSZ - 128)\n! \t\telog(ERROR, \"varcharin: length of char() must be less than %d\", BLCKSZ - 128);\n \n \tresult = (char *) palloc(len);\n \tVARSIZE(result) = len;\n--- 335,343 ----\n \tif (atttypmod != -1 && len > atttypmod)\n \t\tlen = atttypmod;\t\t/* clip the string at max length */\n \n! \tif (len > MaxTupleSize)\n! \t\telog(ERROR, \"varcharin: length of char() must be less than %d\",\n! \t\t\t\tMaxTupleSize);\n \n \tresult = (char *) palloc(len);\n \tVARSIZE(result) = len;\n***************\n*** 403,410 ****\n \tlen = slen - VARHDRSZ;\n #endif\n \n! \tif (len > BLCKSZ - 128)\n! \t\telog(ERROR, \"varchar: length of varchar() must be less than BLCKSZ-128\");\n \n \tresult = (char *) palloc(slen);\n \tVARSIZE(result) = slen;\n--- 407,415 ----\n \tlen = slen - VARHDRSZ;\n #endif\n \n! \tif (len > MaxTupleSize)\n! \t\telog(ERROR, \"varchar: length of varchar() must be less than %d\",\n! 
\t\t\tMaxTupleSize);\n \n \tresult = (char *) palloc(slen);\n \tVARSIZE(result) = slen;\nIndex: src/include/access/htup.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/include/access/htup.h,v\nretrieving revision 1.16\ndiff -c -r1.16 htup.h\n*** src/include/access/htup.h\t1999/05/25 22:42:32\t1.16\n--- src/include/access/htup.h\t1999/07/02 23:35:34\n***************\n*** 13,19 ****\n #ifndef HTUP_H\n #define HTUP_H\n \n! #include <utils/nabstime.h>\n #include <storage/itemptr.h>\n \n #define MinHeapTupleBitmapSize\t32\t\t/* 8 * 4 */\n--- 13,19 ----\n #ifndef HTUP_H\n #define HTUP_H\n \n! #include <storage/bufpage.h>\n #include <storage/itemptr.h>\n \n #define MinHeapTupleBitmapSize\t32\t\t/* 8 * 4 */\n***************\n*** 51,56 ****\n--- 51,61 ----\n } HeapTupleHeaderData;\n \n typedef HeapTupleHeaderData *HeapTupleHeader;\n+ \n+ #define MinTupleSize\t(sizeof (PageHeaderData) + \\\n+ \t\t\t\t\t\t sizeof(HeapTupleHeaderData) + sizeof(int4))\n+ \n+ #define MaxTupleSize\t(BLCKSZ/2 - MinTupleSize)\n \n #define SelfItemPointerAttributeNumber\t\t\t(-1)\n #define ObjectIdAttributeNumber\t\t\t\t\t(-2)\nIndex: src/include/storage/bufpage.h\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/include/storage/bufpage.h,v\nretrieving revision 1.22\ndiff -c -r1.22 bufpage.h\n*** src/include/storage/bufpage.h\t1999/05/25 16:14:40\t1.22\n--- src/include/storage/bufpage.h\t1999/07/02 23:35:36\n***************\n*** 133,150 ****\n \tOverwritePageManagerMode\n } PageManagerMode;\n \n- /* ----------------\n- *\t\tmisc support macros\n- * ----------------\n- */\n- \n- /*\n- * XXX this is wrong -- ignores padding/alignment, variable page size,\n- * AM-specific opaque space at the end of the page (as in btrees), ...\n- * however, it at least serves as an upper bound for heap pages.\n- */\n- #define MAXTUPLEN\t\t(BLCKSZ - sizeof (PageHeaderData))\n- \n /* ----------------------------------------------------------------\n *\t\t\t\t\t\tpage support macros\n * ----------------------------------------------------------------\n--- 133,138 ----\nIndex: src/interfaces/ecpg/preproc/preproc.y\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/interfaces/ecpg/preproc/preproc.y,v\nretrieving revision 1.57\ndiff -c -r1.57 preproc.y\n*** src/interfaces/ecpg/preproc/preproc.y\t1999/06/16 18:25:50\t1.57\n--- src/interfaces/ecpg/preproc/preproc.y\t1999/07/02 23:35:52\n***************\n*** 3,8 ****\n--- 3,11 ----\n #include <stdio.h>\n #include <string.h>\n #include <stdlib.h>\n+ \n+ #include \"postgres.h\"\n+ #include \"access/htup.h\"\n #include \"catalog/catname.h\"\n #include \"utils/numeric.h\"\n \n***************\n*** 3351,3358 ****\n \t\t\t\t\t\tsprintf(errortext, \"length for '%s' type must be at least 1\",$1);\n \t\t\t\t\t\tyyerror(errortext);\n \t\t\t\t\t}\n! \t\t\t\t\telse if (atol($3) > BLCKSZ - 128) {\n! \t\t\t\t\t\tsprintf(errortext, \"length for type '%s' cannot exceed %d\",$1,BLCKSZ - 128);\n \t\t\t\t\t\tyyerror(errortext);\n \t\t\t\t\t}\n \n--- 3354,3361 ----\n \t\t\t\t\t\tsprintf(errortext, \"length for '%s' type must be at least 1\",$1);\n \t\t\t\t\t\tyyerror(errortext);\n \t\t\t\t\t}\n! \t\t\t\t\telse if (atol($3) > MaxTupleSize) {\n! 
\t\t\t\t\t\tsprintf(errortext, \"length for type '%s' cannot exceed %d\",$1,MaxTupleSize);\n \t\t\t\t\t\tyyerror(errortext);\n \t\t\t\t\t}\n \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jul 1999 20:25:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Tuple too big" }, { "msg_contents": "> Here is a patch for 6.5 which will prevent the creation of these too big\n> tuples in certain cases. Seems we should also check for max length at\n> the time we create the table, but it doesn't look like there is any code\n> to do that yet.\n> \n> I am not going to apply this to 6.5.1 because it may have some unknown\n> side-affects.\n> \n\nOn second thought, the patch looks harmless, so I am going to apply it. \nIt is better than what is currently there.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jul 1999 20:30:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Tuple too big" }, { "msg_contents": "> I've encountered a strange behavior of the VACUUM ANALYZE command.\n> It seems that this command works only if the size of a text field\n> does not exceed approximately 4050 bytes! So the real limit on \n> tuple size is a half of the max tuple size. I've checked this effect\n> on Postgres 6.4.2 (Sparc Solaris 2.5.1) and Postgres 6.5 (SUSE 6.1 \n> Linux, kernel 2.2.5). Is this a bug or known feature?\n> The python script used to reproduce this problem and results for \n> v6.4.2 and v6.5 are follows.\n> \n\nOK, looks like the new code works:\n\t\n\ttest=> create table test (x char(2000), y char(2000), z char(2000))\\g\n\tCREATE\n\ttest=> insert into test values ('1','2','3');\n\tERROR: Tuple is too big: size 6044, max size 4044\n\ttest=> create table test2 (x varchar(2000), y varchar(2000), z\n\tvarchar(2000))\\g\n\tCREATE\n\ttest=> insert into test2 values ('1','2','3');\n\tINSERT 21303 1\n\nchar() is fixed length, while varchar() is variable. Now, we could\nprevent creation of the first table, but not the second because only the\ninserted data will show if it over the limit. 
Much easier just to test\nin one place.\n\nHere is the new patch:\n\n---------------------------------------------------------------------------\n\nIndex: hio.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/access/heap/hio.c,v\nretrieving revision 1.20\nretrieving revision 1.22\ndiff -c -r1.20 -r1.22\n*** hio.c\t1999/05/25 16:07:07\t1.20\n--- hio.c\t1999/07/03 01:56:16\t1.22\n***************\n*** 16,21 ****\n--- 16,22 ----\n \n #include <storage/bufpage.h>\n #include <access/hio.h>\n+ #include <access/htup.h>\n #include <access/heapam.h>\n #include <storage/bufmgr.h>\n #include <utils/memutils.h>\n***************\n*** 164,169 ****\n--- 165,173 ----\n \t\tif (len > PageGetFreeSpace(pageHeader))\n \t\t\telog(ERROR, \"Tuple is too big: size %d\", len);\n \t}\n+ \n+ \tif (len > MaxTupleSize)\n+ \t\telog(ERROR, \"Tuple is too big: size %d, max size %d\", len, MaxTupleSize);\n \n \tif (!relation->rd_myxactonly)\n \t\tUnlockPage(relation, 0, ExclusiveLock);\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Jul 1999 22:00:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuple too big" } ]
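For reference, here is a minimal standalone sketch of the size arithmetic behind the new check. The overhead constants below are stand-ins chosen to reproduce the "max size 4044" figure reported above for an 8K block; the real values would come from sizeof(PageHeaderData), sizeof(HeapTupleHeaderData) and sizeof(int4) in the patched headers, so treat them as assumptions:

    #include <stdio.h>

    #define BLCKSZ              8192

    /* Stand-in overhead sizes (assumed, not copied from the headers). */
    #define PAGE_HEADER_SIZE    12
    #define TUPLE_HEADER_SIZE   36
    #define INT4_SIZE           4

    #define MIN_TUPLE_SIZE  (PAGE_HEADER_SIZE + TUPLE_HEADER_SIZE + INT4_SIZE)
    #define MAX_TUPLE_SIZE  (BLCKSZ / 2 - MIN_TUPLE_SIZE)

    int
    main(void)
    {
        int     len = 6044;     /* the failing insert from the session above */

        printf("MaxTupleSize = %d\n", MAX_TUPLE_SIZE);      /* prints 4044 */
        if (len > MAX_TUPLE_SIZE)
            printf("Tuple is too big: size %d, max size %d\n",
                   len, MAX_TUPLE_SIZE);
        return 0;
    }

With these stand-ins the macro evaluates to exactly the 4044-byte limit seen in the error message, which is why the char(2000) x 3 table fails while the varchar version succeeds until real data exceeds the limit.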
[ { "msg_contents": "Tom Lane <[email protected]> writes:\n>> The optimizations under discussion will not significantly affect comparison\n>> speed one way or the other, so comparison speed is a moot issue.\n>\n>On what do you base that assertion? I'd expect comparisons to be sped\n>up significantly: no need to unpack the storage format, and the inner\n>loop handles four digits per iteration instead of one.\n\nThe overwhelming majority of comparisons can be resolved just by looking\nat the number of significant digits. Ninety percent of the remainder can\nbe resolved after looking at the most significant digit, and so on, except\nin the case of distributions that vary only in the least significant digits.\n\nFurthermore, on big-endian architectures, four digits of packed representation\ncan be compared in one iteration as well.\n\nSo, I conclude the optimizations under discussion will not significantly\naffect comparison speed one way or the other.\n\n -Michael Robinson\n\n", "msg_date": "Sat, 3 Jul 1999 17:42:50 +0800 (CST)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] regression bigtest needs very long time" } ]
[ { "msg_contents": "\nIs it a known problem that LIMIT doesn't work with UNION, or have I\ndiscovered a bug?\n\n-- \nChris Bitmead\nmailto:[email protected]\nhttp://www.techphoto.org - Photography News, Stuff that Matters\n", "msg_date": "Sun, 04 Jul 1999 00:08:30 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": true, "msg_subject": "LIMIT and UNION" }, { "msg_contents": "Chris Bitmead <[email protected]> writes:\n> Is it a known problem that LIMIT doesn't work with UNION, or have I\n> discovered a bug?\n\nNot sure, but I think the LIMIT would be parsed as an attribute of\none or the other of the sub-selects, not as a limit on the total\nresult size. This is probably not the behavior you want :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Jul 1999 12:24:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIMIT and UNION " }, { "msg_contents": "> \n> Is it a known problem that LIMIT doesn't work with UNION, or have I\n> discovered a bug?\n> \n\nI see. Syntax is accepted, but LIMIT is not performed. Looks like a\nbug.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 3 Jul 1999 12:38:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIMIT and UNION" }, { "msg_contents": "> \n> Is it a known problem that LIMIT doesn't work with UNION, or have I\n> discovered a bug?\n\nAdded to TODO:\n\n\t* UNION with LIMIT fails\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 23:08:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LIMIT and UNION" } ]
[ { "msg_contents": "Can someone tell me what the maximum tuple length is? Is it sort of\nBLCKSZ or BLCKSZ/2? I don't remember if we require the ability to have\nat least two tuples in a block.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 3 Jul 1999 15:16:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Tuple length limit" }, { "msg_contents": "> Can someone tell me what the maximum tuple length is? Is it sort of\n> BLCKSZ or BLCKSZ/2? I don't remember if we require the ability to have\n> at least two tuples in a block.\n\nIIRC, the max tuple size was always intended to be BLCKSZ, it's just the max\nsize of the text fields that were 4096. I don't remember any discussions\never on this list about trying to control the # of tuples stored per block.\n\nHope this helps...\n\nDarren\n\n", "msg_date": "Sat, 3 Jul 1999 23:24:21 -0400", "msg_from": "\"Stupor Genius\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Tuple length limit" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > Can someone tell me what the maximum tuple length is? Is it sort of\n> > BLCKSZ or BLCKSZ/2? I don't remember if we require the ability to have\n> > at least two tuples in a block.\n> \n> IIRC, the max tuple size was always intended to be BLCKSZ, it's just the max\n> size of the text fields that were 4096. I don't remember any discussions\n> ever on this list about trying to control the # of tuples stored per block.\n> \n\nThat is what I found too, but vacuum seems to use BLCKSZ/2, varchar uses\nBLCKSZ/2, and tuple size is BLCKSZ. Doesn't make any sense.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Jul 1999 00:27:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tuple length limit" }, { "msg_contents": "> Can someone tell me what the maximum tuple length is? Is it sort of\n> BLCKSZ or BLCKSZ/2? I don't remember if we require the ability to have\n> at least two tuples in a block.\n\nHere is what I found with the new code. Seems it works.\n\n---------------------------------------------------------------------------\n\t\n\ttest=> create table test (x char(8104));\n\tCREATE\n\ttest=> insert into test values ('x');\n\tINSERT 21417 1\n\ttest=> insert into test values ('x');\n\tINSERT 21418 1\n\ttest=> insert into test values ('x');\n\tINSERT 21419 1\n\ttest=> insert into test values ('x');\n\tINSERT 21420 1\n\ttest=> vacuum;\n\tVACUUM\n\ttest=> delete from test;\n\tDELETE 4\n\ttest=> vacuum;\n\tVACUUM\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Jul 1999 01:04:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tuple length limit" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Can someone tell me what the maximum tuple length is?\n\nI had always thought that the limit was supposed to be BLCKSZ less\noverhead.\n\n> Here is what I found with the new code. Seems it works.\n> \ttest=> vacuum;\n> \tVACUUM\n\nWasn't the complaint that started this thread something about \"peculiar\nbehavior\" of VACUUM with big tuples? Might be wise to check VACUUM more\nclosely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Jul 1999 09:55:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tuple length limit " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Can someone tell me what the maximum tuple length is?\n> \n> I had always thought that the limit was supposed to be BLCKSZ less\n> overhead.\n> \n> > Here is what I found with the new code. Seems it works.\n> > \ttest=> vacuum;\n> > \tVACUUM\n> \n> Wasn't the complaint that started this thread something about \"peculiar\n> behavior\" of VACUUM with big tuples? Might be wise to check VACUUM more\n> closely.\n\nWe were inconsistent. Varchar and vacuum were BLCKSZ/2, while others\nwere BLCKSZ, of course minus overhead. The new code is consistent, and\ndoes proper padding. I even got rid of a fudge factor in rewrite\nstorage by using the actual rewrite lengths.\n\nWill be in 6.5.1.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Jul 1999 10:20:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tuple length limit" }, { "msg_contents": "As a matter of fact, VACUUM works just fine in my case.\nIt is VACUUM ANALYSE which doesn't.\n\nRegards,\nMikhail\n\nTom Lane wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n> >> Can someone tell me what the maximum tuple length is?\n> \n> I had always thought that the limit was supposed to be BLCKSZ less\n> overhead.\n> \n> > Here is what I found with the new code. Seems it works.\n> > test=> vacuum;\n> > VACUUM\n> \n> Wasn't the complaint that started this thread something about \"peculiar\n> behavior\" of VACUUM with big tuples? Might be wise to check VACUUM more\n> closely.\n> \n> regards, tom lane\n", "msg_date": "Sun, 04 Jul 1999 17:05:31 -0400", "msg_from": "Mikhail Terekhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tuple length limit" }, { "msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> As a matter of fact, VACUUM works just fine in my case.\n> It is VACUUM ANALYSE which doesn't.\n\nGood point. Works for me now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Jul 1999 18:22:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tuple length limit" } ]
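On the "does proper padding" point: tuple lengths are conventionally rounded up to the machine's maximum alignment before being placed on a page. The macro below is the usual definition for 8-byte alignment, shown only as an illustration -- it is an assumption, not copied from the Postgres headers:

    #include <stdio.h>

    /* Round a length up to the next multiple of 8, the usual
     * maximum-alignment rule applied to on-page tuple sizes. */
    #define ALIGN8(LEN)  (((LEN) + 7) & ~7)

    int
    main(void)
    {
        int     len;

        for (len = 4044; len <= 4050; len++)
            printf("%d -> %d\n", len, ALIGN8(len));  /* 4044..4048 -> 4048, then 4056 */
        return 0;
    }

Rounding like this is why a few bytes of apparent free space at the end of a page can still be unusable for one more tuple.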
[ { "msg_contents": "> > I've encountered a strange behavior of the VACUUM ANALYZE command.\n> > It seems that this command works only if the size of a text field\n> > does not exceed approximately 4050 bytes! So the real limit on \n> > tuple size is a half of the max tuple size. I've checked this effect\n> > on Postgres 6.4.2 (Sparc Solaris 2.5.1) and Postgres 6.5 (SUSE 6.1 \n> > Linux, kernel 2.2.5). Is this a bug or known feature?\n> > The python script used to reproduce this problem and results for \n> > v6.4.2 and v6.5 are follows.\n> > \n> \n\nOK, I have again written the code to allow tuples to take up a while\nblock, rather than the 1/2 block limit you were seeing. It consists of\na bunch of patches, so I can't send them to you, but it will be in\n6.5.1, due out July 15th.\n\nAlso, the snapshot on ftp.postgresql.org has the changes too.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 4 Jul 1999 10:22:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuple too big" } ]
[ { "msg_contents": "(back on list)\n\n> The RPMS I snarfed from ftp.postgresql.org Friday installed cleanly on\n> RedHat 6. HOWEVER, I attempted an rpm --rebuild on the SRPM, and it did\n> not rebuild. I'll get more detailed error messages for you later --\n> right now, I'm going to hammer on 6.5 on RedHat 6.0 for awhile as\n> packaged.\n\nOK. I'm not sure if it makes a difference, but I usually do an\n\"install\" of the source rpm, and then build from there. Something like\n\n $ rpm -ivv postgresql-6.5.src.rpm\n $ cd /usr/src/redhat/SPECS\n $ rpm -ba postgresql.spec\n\n> I think the error is an rpm bug, but I'mm not sure -- it chokes after\n> doing its chgrp -Rf over the build tree -- however, if I execute the RPM\n> temp file manually, it works -- this is the %prep section. I'll let you\n> know what I find. This is rpm 3.0.1.\n> I installed the tcl client, and the pgaccess program ran smoothly. The\n> AOLserver client also ran smoothly.\n> So, the binaries you built on 5.2 work fine on my RH 6.0 box -- that has\n> the compat-libs installed (glibc-2.0.7). You may want to get feedback\n> from someone without the compat-libs installed -- or get the $1.99\n> RedHat 6.0 CD from Cheapbytes.... (www.cheapbytes.com) and try it\n> yourself.\n\nI've got the 6.0 CD, since we are putting it on a few machines at\nwork. I'll probably try upgrading my machine at home first, since we\nhave one or two commercial apps (Allegro Lisp for one) which may or\nmay not have trouble with the new kernel.\n\nSo, shall we declare these RPMs to be an initial release? RedHat is\ninterested in doing a maintenance release of Postgres, especially if\nthe alpha port will build. Presumably they will pound out the RH6.0\nand rpm issues, but if anyone else would like to try it first it can\nonly help. Can someone remind me if the alpha stuff is working on\nLinux? My vague recollection is that it works if compiled with \"-O0\"\n(no optimization); can someone confirm this?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 05 Jul 1999 04:22:28 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql 6.5-1 rpms on RedHat 6.0" }, { "msg_contents": "> > RedHat 6. HOWEVER, I attempted an rpm --rebuild on the SRPM, and it did\n> > not rebuild. I'll get more detailed error messages for you later --\n\n<snip>\n\n> OK. I'm not sure if it makes a difference, but I usually do an\n> \"install\" of the source rpm, and then build from there. Something like\n> \n> $ rpm -ivv postgresql-6.5.src.rpm\n> $ cd /usr/src/redhat/SPECS\n> $ rpm -ba postgresql.spec\n\nI also did this same procedure.\n\n> > I think the error is an rpm bug, but I'm not sure -- it chokes after\n> > doing its chgrp -Rf over the build tree -- however, if I execute the RPM\n> > temp file manually, it works -- this is the %prep section. I'll let you\n> > know what I find. This is rpm 3.0.1.\n\nI got this exact same error on redhat 6.0. At first glance it looks like\nand RPM bug. Has anyone gotten this rpm to build on a redhat 6.0 system?\nI'd be greatly interested in hearing if anyone has gotten this to work.\n\nMike\n\n", "msg_date": "Wed, 7 Jul 1999 10:29:36 -0500 (CDT)", "msg_from": "Michael J Schout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgresql 6.5-1 rpms on RedHat 6.0" }, { "msg_contents": "Michael J Schout wrote:\n> > > RedHat 6. HOWEVER, I attempted an rpm --rebuild on the SRPM, and it did\n> > > not rebuild. 
I'll get more detailed error messages for you later --\n> \n> <snip>\n> \n> > OK. I'm not sure if it makes a difference, but I usually do an\n> > \"install\" of the source rpm, and then build from there. Something like\n \n> I also did this same procedure.\n\nThe --rebuild syntax does an rpm -i of the SRPM before invoking a -ba\nover the spec file loaded into SPECS by the rpm -i. It then cleans up\nafter itself. And, I DID try an rpm -i postgresql-6.5-1.src.rpm ; cd\n/usr/src/redhat/SPECS ; rpm -ba postgresql.spec, with the same error as\nwith the --rebuild.\n\n> \n> I got this exact same error on redhat 6.0. At first glance it looks like\n> and RPM bug. Has anyone gotten this rpm to build on a redhat 6.0 system?\n> I'd be greatly interested in hearing if anyone has gotten this to work.\n\nI'm going to attempt it tonight with the newest rpm, 3.0.2, hot off the\npresses. Will advise the list in the morning as to results. AFAIK, the\nsyntax used by Thomas here (in %prep) is, according to documentation,\ncorrect. The error is manifested as a \"bad exit status\" after doing the\nrecursive chgrp and before doing the recursive chmod of %setup. At\nleast that is where the error shows up.\n\nAgain, if I manually execute the resulting rpm-tmp script, there is no\nerror exit reported.\n\nIf the nice folk at RedHat will advise when their newest postgresql rpm\nis available on rawhide, I'll be glad to test that as well. Thanks\nCristian and Jeff for your help.\n\nLamar Owen\nWGCR Internet Radio\nLamar Owen\n", "msg_date": "Wed, 07 Jul 1999 15:53:13 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Postgresql 6.5-1 rpms on RedHat 6.0" } ]
[ { "msg_contents": "Hello all!\n\n You probably remember me - recently I complained about speed\n of joins in Postgres. After a short investigation the way was\n found in which Postgres's optimizer can do the right job. It\n was constructive discussion. Now I want to tell you what could\n make Postgres better and faster. And what will make us (our\n development group) happy. Maybe I am bothering someone, if\n I do - tell me that.\n\n Let me begin.\n\n First of all, some accounting reports need to be delivered\n very fast - within minutes or so. And what's bad is that\n quite a few of these reports are quite time-consuming and search\n intensive. In particular, internals of these reports include\n a lot of joins on tables.\n\n Secondly, almost all of accounting information naturally\n fits into network data model, which can be implemented very\n efficiently.\n\n This stuff described here is not accounting-specific, it\n can be found in every database which uses master-detail\n tables and other such types of relations.\n\n So. How is join being performed in such cases? Although I am\n not an expert, I can imagine the way: first there is an (index)\n scan on first table, and then an (index) scan on the second.\n It is the best way, reality could be much worse as we have seen.\n\n How can we radically improve performance in such cases? There\n is a simple and quite obvious way. (For you not to think that\n I am hallucinating I will tell you that there exist some\n real servers that offer such features I am talking about)\n We should make a real reference in one table to another! That\n means there could be special data type called, say, \"link\",\n which is a physical record number in the foreign table.\n\n Queries could look like this:\n\n table1:\n a int4\n b link (->table2)\n\n table2:\n c int4\n recnum (system auxiliary field, really a record number in the table)\n\n select * from table2 where table1.a > 5 and table1.b = table2.recnum\n\n Such joins can fly really fast, as practice shows :)\n Just consider: the thing table1.b = table2.recnum is a READY-MADE\n join, so server doesn't have to build anything on top of that. It\n can simply perform lookup through link, and since it is a physical\n record number, this is done with the efficiency of C pointers! Thus\n performance gain is ENORMOUS.\n\n And it simplifies the optimizer, because it doesn't have to decide\n anything about whether to use indices and such like. The join is\n performed always the same way, and it is the best way.\n\n This feature, being implemented, could bring Postgres ahead\n of most commercial servers, so proving creative abilities of\n free software community. Let us make a step in the future!\n\nBest regards,\n Leon\n\n\n", "msg_date": "Mon, 5 Jul 1999 17:46:36 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Joins and links" }, { "msg_contents": "At 15:46 +0300 on 05/07/1999, Leon wrote:\n\n\n> ow can we radically improve performance in such cases? There\n> is a simple and quite obvious way. (For you not to think that\n> I am hallucinating I will tell you that there exist some\n> real servers that offer such features I am talking about)\n> We should make a real reference in one table to another! 
That\n> means there could be special data type called, say, \"link\",\n> which is a physical record number in the foreign table.\n>\n> Queries could look like this:\n>\n> table1:\n> a int4\n> b link (->table2)\n>\n> table2:\n> c int4\n> recnum (system auxiliary field, really a record number in the table)\n>\n> select * from table2 where table1.a > 5 and table1.b = table2.recnum\n>\n> Such joins can fly really fast, as practice shows :)\n\nIf you are interested in such a feature, I would send it to the hackers\nlist and not the general list, which is not intended for development\nissues, but for general problems and issues with existing versions.\n\nThe best would be, of course, to get hold of CVS and develop the needed\ncode yourself. That's what open software is all about. Perhaps if it's so\nimportant to you, you could pay PostgreSQL incorporated, buying programmer\ntime for this feature.\n\nIn any case, a message to the hackers list may help you understand how\nthings are implemented in Postgres and how much work will be needed for\nsuch a development. On the face of it, I can see several problems with this\nidea, namely inefficiency in deletions, the need for fixed-length records\nwith no ability to add or drop columns or have variable-length fields\nwithout maximum length, and needing to know all the tables that reference\na table for the sake of vacuuming. But maybe the hackers (who write\nPostgres) would think differently. Ask them.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Mon, 5 Jul 1999 16:49:58 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Leon,\n\nI have a few questions about what you are proposing.\n\nMy background was in non SQL dbms (DataFlex) which supported recnums\nwhich as you pointed out had a number of advantages. However, recnums\nalso have a significant number of problems. Some features like\nreplication have significant difficulties. Others such as exporting and\nre-loading your data also need special work-arounds.\n\nA solution I have seen in some sql dbms (eg MS SQL Server) is to be able\nto choose one index and have the database table sorted by this index.\nThen you don't need a separate index for that sort-order. It means that\nalmost any index can work like a recnum and avoid looking in both the\nindex and the data. I am trying to remember the name of this feature but\ncannot at the moment.\n\nRegards\n\nDave\n\n-- \nDavid Warnock\nSundayta Ltd\n", "msg_date": "Mon, 05 Jul 1999 14:57:13 +0100", "msg_from": "David Warnock <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "> Leon,\n> \n> I have a few questions about what you are proposing.\n> \n> My background was in non SQL dbms (DataFlex) which supported recnums\n> which as you pointed out had a number of advantages. However, recnums\n> also have a significant number of problems. Some features like\n> replication have significant difficulties. Others such as exporting and\n> re-loading your data also need special work-arounds.\n> \n> A solution I have seen in some sql dbms (eg MS SQL Server) is to be able\n> to choose one index and have the database table sorted by this index.\n> Then you don't need a separate index for that sort-order. It means that\n> almost any index can work like a recnum and avoid looking in both the\n> index and the data. 
I am trying to remember the name of this feature but\ncannot at the moment.\n> \n\nWe have CLUSTER command, but it is something that has to be run manually.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 11:10:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "\n\nDavid Warnock wrote:\n> A solution I have seen in some sql dbms (eg MS SQL Server) is to be able\n> to choose one index and have the database table sorted by this index.\n> Then you don't need a separate index for that sort-order. It means that\n> almost any index can work like a recnum and avoid looking in both the\n> index and the data. I am trying to remember the name of this feature but\n> cannot at the moment.\n\nCLUSTER ?\n\n-- \n\nMaarten Boekhold, [email protected]\nTIBCO Finance Technology Inc.\nThe Atrium\nStrawinskylaan 3051\n1077 ZX Amsterdam, The Netherlands\ntel: +31 20 3012158, fax: +31 20 3012358\nhttp://www.tibco.com\n", "msg_date": "Mon, 05 Jul 1999 17:37:56 +0200", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Maarten Boekhold wrote:\n> \n> David Warnock wrote:\n> > A solution I have seen in some sql dbms (eg MS SQL Server) is to be able\n> > to choose one index and have the database table sorted by this index.\n> > Then you don't need a separate index for that sort-order. It means that\n> > almost any index can work like a recnum and avoid looking in both the\n> > index and the data. I am trying to remember the name of this feature but\n> > cannot at the moment.\n> \n> CLUSTER ?\n\nThat's the one. Thanks.\n\nDave\n", "msg_date": "Mon, 05 Jul 1999 16:56:22 +0100", "msg_from": "David Warnock <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Bruce,\n\nI did not know Postgresql had that. I have just looked at the docs and\nit seems that the postgresql CLUSTER is similar to the feature in MS SQL\nServer but at present is rather more limited as\n\na) It is static whereas CLUSTER can be set as an attribute on an index\nin MS SQL Server and then the index is not created separately; the table\nis kept permanently sorted by the index. This obviously is very very\nslow if you add to the middle of the index and wasteful of space if you\ndelete rows from the table. However, for a sequential ID it is supposed\nto work well.\n\nb) as the Postgresql CLUSTER is static it does not replace the index.\n\nI should say at this point that I never actually used the CLUSTER\nfeature in MS SQL Server as we decided not to use that product after\nevaluating it. So I have no practical experience to know how much of the\nspeed improvement wanted by Leon it would deliver.\n\nDave\n-- \nDavid Warnock\nSundayta Ltd\n", "msg_date": "Mon, 05 Jul 1999 17:02:36 +0100", "msg_from": "David Warnock <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "> \n> \n> David Warnock wrote:\n> > A solution I have seen in some sql dbms (eg MS SQL Server) is to be able\n> > to choose one index and have the database table sorted by this index.\n> > Then you don't need a separate index for that sort-order. 
It means that\n> > almost any index can work like a recnum and avoid looking in both the\n> > index and the data. I am trying to remember the name of this feature but\n> > cannot at the moment.\n> \n\nman cluster.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 12:03:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "> Bruce,\n> \n> I did not know Postgresql had that. I have just looked at the docs and\n> it seems that the postgresql CLUSTER is similar to the feature in MS SQL\n> Server but at present is rather more limited as\n\nIt is in the FAQ under Performance too.\n\n> a) It is static whereas CLUSTER can be set as an attribute on an index\n> in MS SQL Server and then the index is not created separately; the table\n> is kept permanently sorted by the index. This obviously is very very\n> slow if you add to the middle of the index and wasteful of space if you\n> delete rows from the table. However, for a sequential ID it is supposed\n> to work well.\n\nWell, sometimes it is better to have control over when this happens. \nThat is why cluster is nice, and we don't have time to add dynamic\ncluster right now.\n\n> \n> b) as the Postgresql CLUSTER is static it does not replace the index.\n> \n> I should say at this point that I never actually used the CLUSTER\n> feature in MS SQL Server as we decided not to use that product after\n> evaluating it. So I have no practical experience to know how much of the\n> speed improvement wanted by Leon it would deliver.\n\nSee the performance FAQ. If you look in\npgsql/contrib/fulltextindex/README, we mention it too because without\nit, fulltext indexing is slow.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 12:06:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Hello David,\n\nMonday, July 05, 1999 you wrote:\n\nD> I should say at this point that I never actually used the CLUSTER\nD> feature in MS SQL Server as we decided not to use that product after\nD> evaluating it. So I have no practical experience to know how much of the\nD> speed improvement wanted by Leon it would deliver.\n\nNot much. 
Because the main idea is to eliminate index scans\nentirely, whereas CLUSTER is merely \"CLUSTER will help because once the index\nidentifies the heap page for the first row that matches, all other rows that\nmatch are probably already on the same heap page, saving disk accesses and\nspeeding up the query.\" - this is at best a few percent gain and means\nnothing if the database is entirely in memory (as it often is).\n\nBest regards, Leon\n\n\n", "msg_date": "Mon, 5 Jul 1999 21:06:51 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [GENERAL] Joins and links" }, { "msg_contents": "> Bruce,\n> \n> Thanks for your informative messages.\n> \n> > Well, sometimes it is better to have control over when this happens.\n> > What is why cluster is nice, and we don't have time to add dynamic\n> > cluster right now.\n> \n> I personally would not see it as a high priority. \n> \n> My only reason for suggesting it was as a possible way to help provide\n> more performance for Leon without adding the full record number support\n> that he was asking for.\n> \n\nIn fact, you were mentioning that inserting into the middle is slow, but\nthat sequential adding to the end is good, but in fact, heap already\ndoes this, doesn't it? I guess if you only add occasionally, it is OK. \nAlso, our no-over-write table structure had a tendency to mess up that\nordering because updated rows do not go into the same place as the\noriginal row.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 12:36:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Bruce,\n\nThanks for your informative messages.\n\n> Well, sometimes it is better to have control over when this happens.\n> What is why cluster is nice, and we don't have time to add dynamic\n> cluster right now.\n\nI personally would not see it as a high priority. \n\nMy only reason for suggesting it was as a possible way to help provide\nmore performance for Leon without adding the full record number support\nthat he was asking for.\n\nDave\n\n-- \nDavid Warnock\nSundayta Ltd\n", "msg_date": "Mon, 05 Jul 1999 17:36:55 +0100", "msg_from": "David Warnock <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Leon,\n\nI agree that the static CLUSTER that Postgresql currently supports will\nnot help you much. 
When I suggested looking for a CLUSTER type feature I\nonly knew of dynamic clustering that removed the need for an index.\n\nThe discussion is not going anywhere as static clustering will not help\nyou and dynamic clustering is not about to be added.\n\n\n\nIf you are interested in other solutions that do not involve adding\nrecord number support (which I personally still feel to be a mistake in\na set orientated dbms) then have you considered an application server\nlinked to triggers.\n\nFor some applications it is possible for an application server to\nmaintain the latest reports on-line, recalculating as required by a\ntrigger notifying it of relevant changes.\nThen reporting comes instantly from the app server.\n\nIf there are a large number of different reports or the reports have a\nlot of selections and options this may not be possible (but a half way\nhouse might still be possible by using a slightly flattened table structure for\nreporting).\n\nRegards\n\nDave\n\n-- \nDavid Warnock\nSundayta Ltd\n", "msg_date": "Mon, 05 Jul 1999 17:44:26 +0100", "msg_from": "David Warnock <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Bruce,\n\nIt is amazing when you get responses written this fast (so that the\nresponse arrives before the copy of the message from the list).\n\n> In fact, you were mentioning that inserting into the middle is slow, but\n> that sequential adding to the end is good, \n\nYes this is what I was told about the way MS SQL Server does clustering.\n\n> but in fact, heap already does this, doesn't it? \n\nheap? I am not sure what you mean.\n\n> I guess if you only add occasionally, it is OK.\n> Also, our no-over-write table structure had a tendency to mess up that\n> ordering because updated rows do not go into the same place as the\n> original row.\n\nI have just been thinking a bit more and have realised that the\nmulti-generational architecture of 6.5 (which I have used in Interbase)\nmeans that probably both clustering (in the dynamic sense) and full\nrecord number support as requested by Leon are impractical. \n\nIt seems to me that record number relationships will fail completely if\nthere can be more than one version of a record. (well even if they are\nforced to work they will lose some/all of their speed advantage).\n\nDynamically clustered indexes might still work but unless tables are\nappended to only with no inserts or updates then maintaining the table in\nindex order when there can be multiple versions of each row would be very\nslow.\n\nDave\n\n-- \nDavid Warnock\nSundayta Ltd\n", "msg_date": "Mon, 05 Jul 1999 18:06:44 +0100", "msg_from": "David Warnock <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "> Bruce,\n> \n> It is amazing when you get responses written this fast (so that the\n> response arrives before the copy of the message from the list).\n\nYep.\n\n> > but in fact, heap already does this, doesn't it? \n> \n> heap? I am not sure what you mean.\n\nHeap is our base table structure. Records are always inserted on the\nend of the heap file. 
Vacuum removes old, superseded rows.\n\n> \n> > I guess if you only add occasionally, it is OK.\n> > Also, our no-over-write table structure had a tendency to mess up that\n> > ordering because updated rows do not go into the same place as the\n> > original row.\n> \n> I have just been thinking a bit more and have realised that the\n> multi-generational architecture of 6.5 (which I have used in Interbase)\n> means that probably both clustering (in the dynamic sense) and full\n> record number support as requested by Leon are impractical. \n\nYes, would be a big problem. Most commercial databases have found that\nthe network data model is impractical in most cases.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 13:08:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Hello David,\n\nMonday, July 05, 1999 you wrote:\n\nD> If you are interested in other solutions that do not involve adding\nD> record number support (which I personally still feel to be a mistake in\nD> a set orientated dbms)\n\nWhy? There will be no such field as \"record number\", the only\nplace where it can exist is the field which references another\ntable. I can quite share your feeling about wrongness of\nphysical-oriented things in abstract tables, but don't\nplain old indices deal with physical record numbers? We could\ndo the same - hide the value stored in such a field and only\noffer the user the ability to use it in queries without knowing\nthe value.\n\nD> then have you considered an application server\nD> linked to triggers.\n\nUnfortunately, every day users demand new types of reports\nfor financial analysis. And nobody knows what the user's\nwish will be tomorrow.\n\nAnd, besides, it is not only my personal wish. What I am\nproposing is a huge (dozen-fold) performance gain on widespread\ntasks. If you implement this, happy users will erect a gold\nmonument to the Postgres development team.\n\nBest regards, Leon\n\n\n", "msg_date": "Mon, 5 Jul 1999 22:22:05 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [GENERAL] Joins and links" }, { "msg_contents": "> And, besides, it is not only my personal wish. What I am\n> proposing is a huge (dozen-fold) performance gain on widespread\n> tasks. If you implement this, happy users will erect a gold\n> monument to the Postgres development team.\n\nWe(Vadim) did MVCC, and I haven't seen any monuments yet.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 14:00:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re[2]: [GENERAL] Joins and links" }, { "msg_contents": "Hello Bruce,\n\nMonday, July 05, 1999 you wrote:\n\n>> I have just been thinking a bit more and have realised that the\n>> multi-generational architecture of 6.5 (which I have used in Interbase)\n>> means that probably both clustering (in the dynamic sense) and full\n>> record number support as requested by Leon are impractical.\n\nB> Yes, would be a big problem. 
Most commercial databases have found that\nB> the network data model is impractical in most cases.\n\nThat's exactly why such a powerful tool as SQL is incomparably slower\nthan plain dbf in most cases. Ignoring the network data model was\na big mistake.\n\nBest regards, Leon\n\n\n", "msg_date": "Mon, 5 Jul 1999 23:12:39 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [GENERAL] Joins and links" }, { "msg_contents": "Hello David,\n\nMonday, July 05, 1999 you wrote:\n\nD> I have just been thinking a bit more and have realised that the\nD> multi-generational architecture of 6.5 (which I have used in Interbase)\nD> means that probably both clustering (in the dynamic sense) and full\nD> record number support as requested by Leon are impractical.\n\nD> It seems to me that record number relationships will fail completely if\nD> there can be more than one version of a record.\n\nMaybe it is a silly question, but what are \"more than one version\nof a record\"? In my opinion a record is an atomic unique entity.\nIsn't it?\n\nD> (well even if they are\nD> forced to work they will lose some/all of their speed advantage).\n\n\nBest regards, Leon\n\n\n", "msg_date": "Mon, 5 Jul 1999 23:15:12 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [GENERAL] Joins and links" }, { "msg_contents": "Leon wrote:\n> Why? There will be no such field as \"record number\", the only\n> place where it can exist is the field which references another\n> table. I can quite share your feeling about wrongness of\n> physical-oriented things in abstract tables, but don't\n> plain old indices deal with physical record numbers? We could\n> do the same - hide the value stored in such a field and only\n> offer the user the ability to use it in queries without knowing\n> the value.\n\nLeon,\n\nIn my understanding, pointer based approaches like you\nare recommending have been implemented in several prototype\nobject oriented databases. They have been shown to be\norders of magnitude slower than set oriented techniques, thus \nmany OO databases are implemented as wrappers over \nrelational systems!\n\nIn general, the best way to handle stuff like this for reports\nis to cache small tables which are joined (like product lookups)\nin memory to make the queries run much faster. To do this,\nyour design has to be smart, by separating those tuples which\nare \"active\" products from those \"inactive\" products so that\nthe database can cache the active records and not the inactive\nrecords. Perhaps something like:\n\n1. CREATE VIEW PRODUCT AS ( SELECT * FROM PRODUCT_ACTIVE_CACHED \n UNION ALL SELECT * FROM PRODUCT_INACTIVE);\n\n2. SELECT ORDER_NO, PRODUCT_NAME FROM ORDER_LINE, PRODUCT WHERE \n PRODUCT.PRODUCT = ORDER_LINE.PRODUCT and ORDER_LINE.ORDER = 120; \n\nThis would be a general solution, where orders with active \nproducts are brought up quickly since the join is done\nin memory, but orders with inactive products take much\nlonger, since the query on the active table is a cache\nmiss, leaving a disk access on the inactive table.\n\nPerhaps there are several other nicer ways to do this; from my\nunderstanding a HASH based cache could allow frequently accessed\ntuples to be cached in memory? ... anyway, I'm no expert.\n\nA more traditional method (which I use all the time), is to \nhave canned reports that are pre-generated using common \nconditions. These are then saved on a web server and \nupdated daily. 
It is a bit less accurate, but often for 99% \nof the purposes, day-old information is just fine....\n\nHope this helps!\n\n;) Clark\n", "msg_date": "Mon, 05 Jul 1999 14:36:20 -0400", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Hello Clark,\n\nMonday, July 05, 1999 you wrote:\n\nC> In my understanding, pointer based approaches like you\nC> are recommending have been implemented in several prototype\nC> object oriented databases. They have been shown to be\nC> orders of magnitude slower than set oriented techniques, thus\nC> many OO databases are implemented as wrappers over\nC> relational systems!\n\nI can't guess where you got such information. Contrarily,\nI know at least one (commercial) network database server which is\norders of magnitude faster than ANY SQL server. They are simply no\nmatch for it. That experience is exactly what made me write\nto the Postgres mailing list. As I wrote (maybe to hackers' list)\na pointer lookup ideally takes three CPU commands - read,\nmultiply, lookup, whereas an index scan takes dozens of them and\nputs a strain on the optimizer's intellectual abilities, and\nas we have seen it can hardly choose the optimum way of\nperforming a join. In the pointer-field case the optimizer can be quite\ndumb, because there is only one way to perform a query.\n\nBest regards, Leon\n\n\n", "msg_date": "Mon, 5 Jul 1999 23:57:25 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [GENERAL] Joins and links" }, { "msg_contents": "> Hello David,\n> \n> Monday, July 05, 1999 you wrote:\n> \n> D> I have just been thinking a bit more and have realised that the\n> D> multi-generational architecture of 6.5 (which I have used in Interbase)\n> D> means that probably both clustering (in the dynamic sense) and full\n> D> record number support as requested by Leon are impractical.\n> \n> D> It seems to me that record number relationships will fail completely if\n> D> there can be more than one version of a record.\n> \n> Maybe it is a silly question, but what are \"more than one version\n> of a record\"? In my opinion a record is an atomic unique entity.\n> Isn't it?\n\nRead how MVCC works in the manuals.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 15:10:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re[2]: [GENERAL] Joins and links" }, { "msg_contents": "Hello Bruce,\n\nTuesday, July 06, 1999 you wrote:\n\n>>\n>> Maybe it is a silly question, but what are \"more than one version\n>> of a record\"? In my opinion a record is an atomic unique entity.\n>> Isn't it?\n\nB> Read how MVCC works in the manuals.\n\nAh, you mean MVCC! That's what I replied to Tom Lane:\n\n> This problem can be solved. An offhand solution is to have\n> an additional system field which will point to new tuple left after\n> update. It is filled at the same time as the original tuple is\n> marked invalid. So the scenario is as follows: we follow the link,\n> and if we find that in the tuple where we arrived this system field\n> is not NULL, we go to (the same table of course) where it is pointing\n> to. Sure VACUUM will eliminate these. 
Performance penalty is small.\n\nBest regards, Leon\n\n\n", "msg_date": "Tue, 6 Jul 1999 00:32:47 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[4]: [GENERAL] Joins and links" }, { "msg_contents": "\nHi Leon,\n\nIn the long wars between the object databases, Versant which has logical\nrecord ids, and ObjectStore which has physical record ids, Versant has\nconsistently beaten ObjectStore in the sort of queries that financial\npeople are likely to do. (On graphic design type apps, ObjectStore tends\nto win).\n\nVersants object ids actually go through an index to get to the physical\nlocation. Admittedly a VERY highly optimised index, but an index\nnevertheless.\n\nWhat this says to me is that Postgres's concept of oids are ok as is. If\nyour queries are too slow either the optimiser is not using an index\nwhen it should, or else the indexing mechanism is not fast enough. I\nsuspect it would be nice, from an object database perspective if a\nspecial case oid-index index type was written.\n\nPhysical record ids, have lots of problems. You can't re-organise space\ndynamically so you have to take your database off-line for a while to\ntotally re-organise it. You lose space because it can't be re-used\ndynamically. There are problems with backup.\n\nI'm writing a web page on what would be needed to make Postgres into an\nObject database.....\nhttp://www.tech.com.au/postgres\n\nLeon wrote:\n> \n> Hello all!\n> \n> You probably remember me - recently I complained about speed\n> of joins in Postgres. After a short investigation the way was\n> found in which Postgres's optimizer can do the right job. It\n> was constructive discussion. Now I want to tell you what could\n> make Postgres better and faster. And what will make us (our\n> development group) happy. Maybe I am bothering someone, if\n> I do - tell me that.\n> \n> Let me begin.\n> \n> First of all, some accounting reports need to be delivered\n> very fast - within minutes or so. And what's bad is that\n> quite a few of these reports are quite time-consuming and search\n> intensive. In particular, internals of these reports include\n> a lot of joins on tables.\n> \n> Secondly, almost all of accounting information naturally\n> fits into network data model, which can be implemented very\n> efficiently.\n> \n> This stuff described here is not accounting-specific, it\n> can be found in every database which uses master-detail\n> tables and other such types of relations.\n> \n> So. How is join being performed in such cases? Although I am\n> not an expert, I can imagine the way: first there is an (index)\n> scan on first table, and then an (index) scan on the second.\n> It is the best way, reality could be much worse as we have seen.\n> \n> How can we radically improve performance in such cases? There\n> is a simple and quite obvious way. (For you not to think that\n> I am hallucinating I will tell you that there exist some\n> real servers that offer such features I am talking about)\n> We should make a real reference in one table to another! 
That\n> means there could be special data type called, say, \"link\",\n> which is a physical record number in the foreign table.\n> \n> Queries could look like this:\n> \n> table1:\n> a int4\n> b link (->table2)\n> \n> table2:\n> c int4\n> recnum (system auxiliary field, really a record number in the table)\n> \n> select * from table2 where table1.a > 5 and table1.b = table2.recnum\n> \n> Such joins can fly really fast, as practice shows :)\n> Just consider: the thing table1.b = table2.recnum is a READY-MADE\n> join, so server doesn't have to build anything on top of that. It\n> can simply perform lookup through link, and since it is a physical\n> record number, this is done with the efficiency of C pointers! Thus\n> performance gain is ENORMOUS.\n> \n> And it simplifies the optimizer, because it doesn't have to decide\n> anything about whether to use indices and such like. The join is\n> performed always the same way, and it is the best way.\n> \n> This feature, being implemented, could bring Postgres ahead\n> of most commercial servers, so proving creative abilities of\n> free software community. Let us make a step in the future!\n> \n> Best regards,\n> Leon\n", "msg_date": "Tue, 06 Jul 1999 10:10:27 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" }, { "msg_contents": "Leon wrote:\n> \n> Ah, you mean MVCC! That's what I replied to Tom Lane:\n> \n> > This problem can be solved. An offhand solution is to have\n> > an additional system field which will point to new tuple left after\n> > update. It is filled at the same time as the original tuple is\n> > marked invalid. So the scenario is as follows: we follow the link,\n> > and if we find that in the tuple where we arrived this system field\n> > is not NULL, we go to (the same table of course) where it is pointing\n> > to. Sure VACUUM will eliminate these. Performance penalty is small.\n\nOld tuple version points to new version right now -:).\nO how else could we handle updates in read committed mode\n(new tuple version are not visible to concurrent update).\n\nLet's go to hackers list.\nBut also let's relax for some more time, Leon... -:)\n\nVadim\n", "msg_date": "Tue, 06 Jul 1999 10:29:52 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Joins and links" } ]
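Vadim's point -- that an old tuple version already carries a forward link to its replacement -- can be sketched as follows. The field names here are hypothetical, chosen only to show the shape of the mechanism, not taken from the Postgres headers:

    #include <stdio.h>

    /* Hypothetical tuple version: 'next' is the forward link that an
     * UPDATE fills in on the old version (NULL while still current). */
    typedef struct Tuple
    {
        int     value;
        int     valid;              /* 0 once superseded */
        struct Tuple *next;         /* newer version, if any */
    } Tuple;

    static Tuple *
    latest_version(Tuple *t)
    {
        /* Follow forward links left by updates; vacuum would
         * eventually reclaim the dead versions we skip over. */
        while (!t->valid && t->next != NULL)
            t = t->next;
        return t;
    }

    int
    main(void)
    {
        Tuple   v2 = {42, 1, NULL};     /* current version */
        Tuple   v1 = {41, 0, &v2};      /* superseded by v2 */

        printf("%d\n", latest_version(&v1)->value);     /* prints 42 */
        return 0;
    }

This is exactly the chain a concurrent reader walks under read committed mode to find the committed version it is allowed to see, which is why Leon's "offhand solution" already exists in spirit.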
[ { "msg_contents": "(mail didn't get through the other day, so here it is again)\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n(enclosure bounced, so posting the tar file on the patches list\ninstead)\n\n> Wow, that select manual page does look good.\n\nOK, here is a first cut at new man pages. Things aren't perfect and\nneed improving, but this is a good start.\n\nI'll be going through the ref/*.sgml files to change formatting which\nseems to give docbook2man trouble. I'll also be massaging docbook2man\nto fix up its behavior. A few man pages weren't generated at all, but\nthe symptom was similar to things I've already fixed so I think that I\ncan fix these too.\n\nAfter I've gotten these things done, then we should think about taking\nthe existing old man page content and making sure that it is all in\nthe sgml files somewhere (not everything should stay in the reference\npages, but it should show up somewhere: ref pages, User's Guide, or\nAdmin Guide are likely candidates).\n\nWe'll have new man pages for v6.6!\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California", "msg_date": "Mon, 05 Jul 1999 13:56:42 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: sgml tool]" } ]
[ { "msg_contents": "Hello hackers!\n\nI posted this message to general mailing list, and was told\nthat hackers' list is more appropriate place to post this\nmessage to. What will you say about it?\n\n\nThis is a forwarded message\nFrom: Leon <[email protected]>\nTo: pgsql-general <[email protected]>\nDate: Monday, July 05, 1999, 5:46:36 PM\nSubject: Joins and links\n\n===8<==============Original message text===============\nHello all!\n\n You probably remember me - recently I complained about speed\n of joins in Postgres. After a short investigation the way was\n found in which Postgres's optimizer can do the right job. It\n was constructive discussion. Now I want to tell you what could\n make Postgres better and faster. And what will make us (our\n development group) happy. Maybe I am bothering someone, if\n I do - tell me that.\n\n Let me begin.\n\n First of all, some accounting reports need to be delivered\n very fast - within minutes or so. And what's bad is that\n quite a few of these reports are quite time-consuming and search\n intensive. In particular, internals of these reports include\n a lot of joins on tables.\n\n Secondly, almost all of accounting information naturally\n fits into network data model, which can be implemented very\n efficiently.\n\n This stuff described here is not accounting-specific, it\n can be found in every database which uses master-detail\n tables and other such types of relations.\n\n So. How is join being performed in such cases? Although I am\n not an expert, I can imagine the way: first there is an (index)\n scan on first table, and then an (index) scan on the second.\n It is the best way, reality could be much worse as we have seen.\n\n How can we radically improve performance in such cases? There\n is a simple and quite obvious way. (For you not to think that\n I am hallucinating I will tell you that there exist some\n real servers that offer such features I am talking about)\n We should make a real reference in one table to another! That\n means there could be special data type called, say, \"link\",\n which is a physical record number in the foreign table.\n\n Queries could look like this:\n\n table1:\n a int4\n b link (->table2)\n\n table2:\n c int4\n recnum (system auxiliary field, really a record number in the table)\n\n select * from table2 where table1.a > 5 and table1.b = table2.recnum\n\n Such joins can fly really fast, as practice shows :)\n Just consider: the thing table1.b = table2.recnum is a READY-MADE\n join, so server doesn't have to build anything on top of that. It\n can simply perform lookup through link, and since it is a physical\n record number, this is done with the efficiency of C pointers! Thus\n performance gain is ENORMOUS.\n\n And it simplifies the optimizer, because it doesn't have to decide\n anything about whether to use indices and such like. The join is\n performed always the same way, and it is the best way.\n\n This feature, being implemented, could bring Postgres ahead\n of most commercial servers, so proving creative abilities of\n free software community. Let us make a step in the future!\n\nBest regards,\n Leon\n\n===8<===========End of original message text===========\n\n\nBest regards,\n Leon mailto:[email protected]\n\n\n", "msg_date": "Mon, 5 Jul 1999 20:24:52 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Joins and links" }, { "msg_contents": "Leon <[email protected]> writes:\n> We should make a real reference in one table to another! That\n> means there could be special data type called, say, \"link\",\n> which is a physical record number in the foreign table.\n\nThere is no such thing as a physical record number for a tuple in\nPostgres. The closest you could come is an OID, which isn't really any\nfaster than any other joinable field --- you still need an index to\nsupport fast lookup by OID.\n\nIf we did have such a concept, the speed penalties for supporting\nhard links from one tuple to another would be enormous. Every time\nyou change a tuple, you'd have to try to figure out what other tuples\nreference it, and update them all.\n\nFinally, I'm not convinced that the results would be materially faster\nthan a standard mergejoin (assuming that you have indexes on both the\nfields being joined) or hashjoin (in the case that one table is small\nenough to be loaded into memory).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jul 1999 13:37:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links " }, { "msg_contents": "Hello Tom,\n\nMonday, July 05, 1999 you wrote:\n\nT> If we did have such a concept, the speed penalties for supporting\nT> hard links from one tuple to another would be enormous. Every time\nT> you change a tuple, you'd have to try to figure out what other tuples\nT> reference it, and update them all.\n\nI'm afraid that's mainly because fields in Postgres have variable\nlength and after update they go to the end of the table. Am I right?\nIn that case there could be done such referencing only with\ntables with fixed width rows, whose updates can naturally be done\nwithout moving. It is a little sacrifice, but it is worth it.\n\nT> Finally, I'm not convinced that the results would be materially faster\nT> than a standard mergejoin (assuming that you have indexes on both the\nT> fields being joined) or hashjoin (in the case that one table is small\nT> enough to be loaded into memory).\n\nConsider this: no indices, no optimizer thinking, no index lookups -\nno nothing! Just a sequential number of record multiplied by\nrecord size. Exactly three CPU instructions: read, multiply,\nlookup. Can you see the gain now?\n\nBest regards, Leon\n\n\n", "msg_date": "Mon, 5 Jul 1999 23:09:40 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Leon <[email protected]> writes:\n> I'm afraid that's mainly because fields in Postgres have variable\n> length and after update they go to the end of the table. Am I right?\n> In that case there could be done such referencing only with\n> tables with fixed width rows, whose updates can naturally be done\n> without moving. It is a little sacrifice, but it is worth it.\n\nNo, you are not right. Tuple updates can *never* be done without\nmoving the tuple, because the old tuple value must not be overwritten\nuntil and unless the transaction is committed. (Under MVCC, it may\nneed to stick around even longer than that, I believe.) Thus, a tuple\nupdate would require an update (and move) of every referencing tuple,\nwhich could cascade into updates of tuples that reference those tuples,\netc.\n\nBut the really serious problem is simply that of finding the tuples\nthat reference the tuple you are about to update. This is relatively\nstraightforward for indexes --- you just compute the index value for\nthe old tuple and probe into the index with it. When the tuples might\nbe anywhere in the database, there are no easy shortcuts. I think the\nonly way would be maintaining an index listing all the referencing\ntuples for every referenced tuple. This would be a bit of a bear to\nmaintain, as well as a source of unwanted blocks/deadlocks since it\nwould be a bottleneck for updates to *every* table in the database.\n\nT> Finally, I'm not convinced that the results would be materially faster\nT> than a standard mergejoin (assuming that you have indexes on both the\nT> fields being joined) or hashjoin (in the case that one table is small\nT> enough to be loaded into memory).\n\n> Consider this: no indices,\n\nYou'd still need indices --- see above\n\n> no optimizer thinking,\n\nYou'd still need to run the optimizer to decide whether you wanted to\nuse this technique or some more-conventional one (unless your proposal\nis to remove all other join mechanisms? Rather inflexible, that...)\n\n> no nothing! Just a sequential number of record multiplied by\n> record size. Exactly three CPU instructions: read, multiply,\n> lookup. Can you see the gain now?\n\nIf you think it takes three instructions to access a tuple that's out\non disk somewhere, I'm afraid you're sadly mistaken.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jul 1999 15:02:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re[2]: [HACKERS] Fwd: Joins and links " }, { "msg_contents": "> Leon <[email protected]> writes:\n> > We should make a real reference in one table to another! That\n> > means there could be special data type called, say, \"link\",\n> > which is a physical record number in the foreign table.\n> \n> There is no such thing as a physical record number for a tuple in\n> Postgres. The closest you could come is an OID, which isn't really any\n> faster than any other joinable field --- you still need an index to\n> support fast lookup by OID.\n\nActually, there is:\n\n\tselect ctid from pg_class;\n\n\tctid \n\t------\n\t(0,1) \n\t(0,2) \n\t...\n\nThe number is the block number offset in the block. It doesn't help\nbecause UPDATED rows would get a new tid. Tid's can be used for short\nperiods if you are sure the data in the table doesn't change, and there\nis a TODO item to allow ctid reference in the WHERE clause.\n\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 15:08:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "> Hello Tom,\n> \n> Monday, July 05, 1999 you wrote:\n> \n> T> If we did have such a concept, the speed penalties for supporting\n> T> hard links from one tuple to another would be enormous. Every time\n> T> you change a tuple, you'd have to try to figure out what other tuples\n> T> reference it, and update them all.\n> \n> I'm afraid that's mainly because fields in Postgres have variable\n> length and after update they go to the end of the table. Am I right?\n> In that case there could be done such referencing only with\n> tables with fixed width rows, whose updates can naturally be done\n> without moving. It is a little sacrifice, but it is worth it.\n> \n> T> Finally, I'm not convinced that the results would be materially faster\n> T> than a standard mergejoin (assuming that you have indexes on both the\n> T> fields being joined) or hashjoin (in the case that one table is small\n> T> enough to be loaded into memory).\n> \n> Consider this: no indices, no optimizer thinking, no index lookups -\n> no nothing! Just a sequential number of record multiplied by\n> record size. Exactly three CPU instructions: read, multiply,\n> lookup. Can you see the gain now?\n> \n> Best regards, Leon\n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 15:09:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Hello Tom,\n\nTuesday, July 06, 1999 you wrote:\n\nT> No, you are not right. Tuple updates can *never* be done without\nT> moving the tuple, because the old tuple value must not be overwritten\nT> until and unless the transaction is committed. (Under MVCC, it may\nT> need to stick around even longer than that, I believe.) Thus, a tuple\nT> update would require an update (and move) of every referencing tuple,\nT> which could cascade into updates of tuples that reference those tuples,\nT> etc.\n\nThis problem can be solved. An offhand solution is to have\nan additional system field which will point to new tuple left after\nupdate. It is filled at the same time as the original tuple is\nmarked invalid. So the scenario is as follows: we follow the link,\nand if we find that in the tuple where we arrived this system field\nis not NULL, we go to (the same table of course) where it is pointing\nto. Sure VACUUM will eliminate these. Performance penalty is small.\n\nT>> Finally, I'm not convinced that the results would be materially faster\nT>> than a standard mergejoin (assuming that you have indexes on both the\nT>> fields being joined) or hashjoin (in the case that one table is small\nT>> enough to be loaded into memory).\n\n>> Consider this: no indices,\n\nT> You'd still need indices --- see above\n\nOnly time when we will need to look who is referencing us is\nduring VACUUM. So no real need of indices.\n\n>> no optimizer thinking,\n\nT> You'd still need to run the optimizer to decide whether you wanted to\nT> use this technique or some more-conventional one (unless your proposal\nT> is to remove all other join mechanisms? Rather inflexible, that...)\n\nNo. I am not an evil itself which tries to eliminate everything :)\nI said when optimizer sees join made through such field it has\nthe only option - to follow link. It simply has no choice.\n\nT> If you think it takes three instructions to access a tuple that's out\nT> on disk somewhere, I'm afraid you're sadly mistaken.\n\nNo. I meant a tuple which is in memory somewhere :)\n\nBest regards, Leon\n\n\n", "msg_date": "Tue, 6 Jul 1999 00:28:17 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[4]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Hello Bruce,\n\nTuesday, July 06, 1999 you wrote:\n\nB> Actually, there is:\n\nB> select ctid from pg_class;\n\nB> ctid\nB> ------\nB> (0,1)\nB> (0,2)\nB> ...\n\nB> The number is the block number offset in the block. It doesn't help\nB> because UPDATED rows would get a new tid. Tid's can be used for short\nB> periods if you are sure the data in the table doesn't change, and there\nB> is a TODO item to allow ctid reference in the WHERE clause.\n\nIt seems that you are moving in a right direction. But these tids\nseem to be of short-term use, anyway. Why not to upgrade to full-featured\nlinks at once?\n\nBest regards, Leon\n\n\n", "msg_date": "Tue, 6 Jul 1999 00:46:17 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Leon <[email protected]> writes:\n> This problem can be solved. An offhand solution is to have\n> an additional system field which will point to new tuple left after\n> update. It is filled at the same time as the original tuple is\n> marked invalid. So the scenario is as follows: we follow the link,\n> and if we find that in the tuple where we arrived this system field\n> is not NULL, we go to (the same table of course) where it is pointing\n> to. Sure VACUUM will eliminate these. Performance penalty is small.\n\nIs it small? After multiple updates to the referenced tuple, you'd be\ntalking about following a chain of TID references in order to find the\nreferenced tuple from the referencing tuple. I'd expect this to take\nmore time than an index access within a fairly small number of updates\n(maybe four or so, just on the basis of counting disk-block fetches).\n\nVACUUM is an interesting problem as well: to clean up the chains as you\nsuggest, VACUUM could no longer be a one-table-at-a-time proposition.\nIt would have to be able to update tuples elsewhere while repacking the\ntuples in the current table. This probably means that VACUUM requires\na global lock across the whole database. Also, making those updates\nin an already-vacuumed table without undoing its nicely vacuumed state\nmight be tricky.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jul 1999 16:33:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re[4]: [HACKERS] Fwd: Joins and links " }, { "msg_contents": "Hello Tom,\n\nTuesday, July 06, 1999 you wrote:\n\nT> Is it small?\n\nIt is :) First you should tell me what is the cost of tid lookup.\nIf it is significantly more expensive than C pointer, then we\nshould consider implementing such cheap pointer. If tid is already\ncheap, then even 10 consecutive lookups will cost almost nothing.\n\nAnd besides all, you should consider statistics. Can you name\nfive or even three applications where large databases are\nmassively updated without being vacuumed often?\n\nT> After multiple updates to the referenced tuple, you'd be\nT> talking about following a chain of TID references in order to find the\nT> referenced tuple from the referencing tuple. I'd expect this to take\nT> more time than an index access within a fairly small number of updates\nT> (maybe four or so, just on the basis of counting disk-block fetches).\n\nT> VACUUM is an interesting problem as well: to clean up the chains as you\nT> suggest, VACUUM could no longer be a one-table-at-a-time proposition.\nT> It would have to be able to update tuples elsewhere while repacking the\nT> tuples in the current table. This probably means that VACUUM requires\nT> a global lock across the whole database.\n\nDoes VACUUM require lock on the vacuumed table now? I am sure it\ndoes. And in our case we must lock the vacuumed table and all\nthe tables that are referencing it, not all tables.\nAnd, besides, manual suggests that VACUUM should be done\nnightly, not daily :)\n\nHaving acquired such lock, vacuum should update the \"main\"\ntable first, then update all links in referencing tables.\nIt can be done using oids, which are matched in new and old\nversions of \"main\" table (are oids preserved during vacuum? -\nif they are not, this can be done with primary key)\n\nT> Also, making those updates\nT> in an already-vacuumed table without undoing its nicely vacuumed state\nT> might be tricky.\n\nI didn't get the idea of the last sentence. Anyway, I am going to sleep.\nSee you tomorrow :)\n\nBest regards, Leon\n\n\n", "msg_date": "Tue, 6 Jul 1999 02:13:09 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[6]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "> Leon <[email protected]> writes:\n> > I'm afraid that's mainly because fields in Postgres have variable\n> > length and after update they go to the end of the table. Am I right?\n> > In that case there could be done such referencing only with\n> > tables with fixed width rows, whose updates can naturally be done\n> > without moving. It is a little sacrifice, but it is worth it.\n> \n> No, you are not right. Tuple updates can *never* be done without\n> moving the tuple, because the old tuple value must not be overwritten\n> until and unless the transaction is committed. (Under MVCC, it may\n> need to stick around even longer than that, I believe.) Thus, a tuple\n> update would require an update (and move) of every referencing tuple,\n> which could cascade into updates of tuples that reference those tuples,\n> etc.\n\nYes, Thanks, Tom. This is exactly the issue.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Jul 1999 17:25:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Congrats Tom for rising to the challenge ;)\n\nistm that a brute force hack such as is being suggested should be left\nas an exercise for the reader.\n\nThese \"table links\" seem to controvert the ability for a RDBMS to mix\nand match tables in ways which are not hardcoded beforehand.\nRegardless of whether \"there exist some real servers that offer such\nfeatures I am talking\", a departure from the relation model in a\nrelational database is likely to lead to undesirable constraints and\nrestrictions in our future development.\n\nIf they are a good idea, you might be able to implement and prove them\nusing an embedded language and the SPI facilities.\n\nJust the usual $.02...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 05 Jul 1999 21:43:56 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Regardless of whether \"there exist some real servers that offer such\n> features I am talking\", a departure from the relation model in a\n> relational database is likely to lead to undesirable constraints and\n> restrictions in our future development.\n\nThat was another thing that was bothering me about the idea of \"version\nlinks\" between tuples (as in Leon's second proposal). They don't fit\ninto the fundamental relational model.\n\nI am not sure there's anything fundamentally wrong with his basic point;\nif, say, we could find a way to construct OIDs so that a tuple could be\nfound very quickly from its OID, that wouldn't violate the relational\nmodel AFAICS, and such OIDs would work fine as \"links\". But I don't see\nany way to do that without either giving up UPDATE or introducing a huge\namount of baggage into all processes that can update tables (VACUUM\nbeing the worst case, likely). Without doubt the best compromise would\nlook remarkably like an index on OID.\n\nUltimately, when you consider both the update costs and the access\ncosts, I doubt that this sort of thing could be a win, except maybe\nin the case where the referenced table is hardly ever changed so that\nthe update costs are seldom incurred. But in that situation it's not\nclear you want to store the referenced table in an RDBMS anyway ---\nthere are lots of lower-overhead ways to deal with fixed tables, such\nas perfect-hash generators.\n\n> If they are a good idea, you might be able to implement and prove them\n> using an embedded language and the SPI facilities.\n\nI don't think VACUUM invokes triggers, so you couldn't really do\nanything about VACUUM rearranging the table under you that way,\ncould you?\n\nI'll be interested to see Vadim's comments on this thread...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Jul 1999 19:36:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links " }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> Congrats Tom for rising to the challenge ;)\n> \n> istm that a brute force hack such as is being suggested should be left\n> as an exercise for the reader.\n> \n> These \"table links\" seem to controvert the ability for a RDBMS to mix\n> and match tables in ways which are not hardcoded beforehand.\n\nExcuse me, in what way? All the usual features of SQL are in their\nlegal place. What you are getting is one more additional ability,\nnot constraint. \n\n> Regardless of whether \"there exist some real servers that offer such\n> features I am talking\", a departure from the relation model in a\n> relational database is likely to lead to undesirable constraints and\n> restrictions in our future development.\n\nUntil you offer an example of such constraint these words are \ngroundless fear. \n\nThink of it not merely as index lookup speedup, but as quick and\nclever way to fix the optimizer. As we have seen, optimizer now\nhas troubles choosing the fastest way of doing the query. This\nis deep-rooted trouble, because optimizer generally needs some\nfine-graded statistics on tables which it is working on, and \ngathering such statistics would require, I am afraid, total\nrewrite of the optimizer. In the case of links query is always\ndone the best way.\n\n-- \nLeon.\n\n", "msg_date": "Tue, 06 Jul 1999 10:46:21 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Tom Lane wrote:\n> \n> Thomas Lockhart <[email protected]> writes:\n> > Regardless of whether \"there exist some real servers that offer such\n> > features I am talking\", a departure from the relation model in a\n> > relational database is likely to lead to undesirable constraints and\n> > restrictions in our future development.\n\nYep. Also, you fix one set of hard links and the next day you need to do\na slightly different join and it doesn't fit into the links you\nconstructed, because you left out a table or something silly. \n\nI used to work on a system storing and retrieving real-time trading data\non Tandem. We were integrating it with their first-generation CORBA\nsystem, but could handle 70 updates a second + heavy reads of\nhistorical/real-time tick data. And this was on an old 4-processor\nK10000. Nearly all data was stored at least twice -- made the inserts\nslightly slower (Tandem is bloody good at inserts!) but otherwise we\ncouldn't cope with the reads of several MBytes of historical data/query. \n\nLeon, I think you should study the accesses, and build the right\nintermediate tables. Yes, I know you are not supposed to duplicate data,\nbut hey, this is the real world, and disk is cheap. And triggers etc\nmake it fairly manageable to retain integrity. But what is indispensable\nis the flexibility you have in a true relational model, so that you can\neasily adapt to changing demands -- adding temporary tables as you need\nthem for new reports and dropping them as they go out of use.\n\nOf course you can use views, but this can still be slow. \n\nAs far as I can see: if you know which hard links you need, you know\nwhich additional table you need to build. And knocking up the triggers\nto keep it updated is child's play. Ah, yes -- and I always have to add a\nsanity_function as well that can fix things when I've made a real\nballs-up ;-) \n\nHave fun,\n\nAdriaan\n", "msg_date": "Tue, 06 Jul 1999 09:46:03 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Tom Lane wrote:\n> \n> I am not sure there's anything fundamentally wrong with his basic point;\n> if, say, we could find a way to construct OIDs so that a tuple could be\n> found very quickly from its OID, that wouldn't violate the relational\n> model AFAICS, and such OIDs would work fine as \"links\". But I don't see\n> any way to do that without either giving up UPDATE or introducing a huge\n> amount of baggage into all processes that can update tables (VACUUM\n> being the worst case, likely). Without doubt the best compromise would\n> look remarkably like an index on OID.\n\nThere are no problems with UPDATE: updated tuple points to newer\nversion, so we can avoid update of referencing tuples here.\nVACUUM would have to update referencing tuples (via normal\nheap_replace, nothing special) while removing old versions. \nThis may cause deadlocks but we could give vacuum higher priority\nand abort others.\n\nSo, vacuum is the worst case, as pointed by Tom.\nNo problems with MVCC and other things.\n\nVadim\n", "msg_date": "Tue, 06 Jul 1999 19:12:29 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Hello Vadim,\n\nTuesday, July 06, 1999 you wrote:\n\nV> There are no problems with UPDATE: updated tuple points to newer\nV> version, so we can avoid update of referencing tuples here.\nV> VACUUM would have to update referencing tuples (via normal\nV> heap_replace, nothing special) while removing old versions.\nV> This may cause deadlocks but we could give vacuum higher priority\nV> and abort others.\n\nV> So, vacuum is the worst case, as pointed by Tom.\nV> No problems with MVCC and other things.\n\nSo. The main drawback is higher priority for VACUUM. Not\ntoo large, eh?\n\nWhen you will decide - to implement or not to implement,\nI urge you to think again about the relief on optimizer,\nwhich I stressed many times. No one rebutted yet that adding\nbrains to optimizer so that it can use appropriate join method\nwill require major rewrite. With links you get the best join\nmethod as side effect - virtually for free. These joins\nwill never be too slow for an unknown reason. Think carefully.\nI hope you will make wise decision.\n\nBest regards, Leon\n\n\n", "msg_date": "Tue, 6 Jul 1999 16:36:57 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Leon wrote:\n> \n> So. The main drawback is higher priority for VACUUM. Not\n> too large, eh?\n> \n> When you will decide - to implement or not to implement,\n\nWe will not decide -:))\nIf someone wants to implement it - welcome.\n\n> I urge you to think again about the relief on optimizer,\n> which I stressed many times. No one rebutted yet that adding\n> brains to optimizer so that it can use appropriate join method\n> will require major rewrite. With links you get the best join\n> method as side effect - virtually for free. These joins\n> will never be too slow for an unknown reason. Think carefully.\n> I hope you will make wise decision.\n\nOptimizer requires major rewrite in any case, even\nhaving links implemented.\n\nVadim\n", "msg_date": "Tue, 06 Jul 1999 20:05:22 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "After thinking a bit more, I decided to reply in a more constructive\nway.\n\nThomas Lockhart wrote:\n\n> These \"table links\" seem to controvert the ability for a RDBMS to mix\n> and match tables in ways which are not hardcoded beforehand.\n\nCertainly links are only of use to their intended purpose, and to\nnothing more. But you should be aware that real life relationships\nare exactly of this kind. The drawback of general relational model\nis that links (=joins) are built from scratch at the moment of join.\nThis may seem an advantage, but really this is often an unnecessary\nredundant feature whose design allows building a swarm of relationships\nwhich never existed and will never be used. \n\nKeeping all that in mind, we might consider building a subsystem in\nSQL server which is carefully optimized for such real life tasks.\nThere is no need to put any restrictions on general SQL, the only\nthing proposed is enhancement of a particular side of the server.\n\n> Regardless of whether \"there exist some real servers that offer such\n> features I am talking\", a departure from the relation model in a\n> relational database is likely to lead to undesirable constraints and\n> restrictions in our future development.\n> \n\nYou have already done a heroic deed of implementing MVCC, it seems\nthe most interfered with thing. I can see no serious interference \nwith any SQL feature which you might implement.\n\n-- \nLeon.\n\n", "msg_date": "Tue, 06 Jul 1999 17:13:58 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Hello Vadim,\n\nTuesday, July 06, 1999 you wrote:\n\n>> These joins\n>> will never be too slow for an unknown reason. Think carefully.\n>> I hope you will make wise decision.\n\nV> Optimizer requires major rewrite in any case, even\nV> having links implemented.\n\nI am afraid that optimizer, even totally rewritten, can't choose\nthe best method always. That is simply because it is such a\ncomplex animal :) Bacterium - simple links will always win\nin the field where they live :)\n\nBest regards, Leon\n\n\n", "msg_date": "Tue, 6 Jul 1999 17:25:44 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Adriaan Joubert wrote:\n\n> Yep. Also, you fix one set of hard links and the next day you need to do\n> a slightly different join and it doesn't fit into the links you\n> constructed, because you left out a table or something silly.\n> \n\nNo one is talking about abolishing any standard SQL feature. After\nyou carefully verified design you can hard-code links to speedup\naccess. Before that has happened the usual SQL will do.\n\n> Leon, I think you should study the accesses, and build the right\n> intermediate tables. Yes, I know you are not supposed to duplicate data,\n> but hey, this is the real world, and disk is cheap.\n\nBut RAM is not as big as HDD. If database doesn't fit in RAM performance\ndegrades severely.\n\n> And triggers etc\n> make it fairly manageable to retain integrity.\n\nMaking trigger would cost the same as rearranging the table after\npoor design of links is discovered. \n\n> But what is indispensable\n> is the flexibility you have in a true relational model, so that you can\n> easily adapt to changing demands -- adding temporary tables as you need\n> them for new reports and dropping them as they go out of use.\n\nThis will immensely bloat the database thus flooding the disk\nchannel and, what is worse, the main memory.\n\n-- \nLeon.\n\n", "msg_date": "Tue, 06 Jul 1999 17:29:17 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Leon wrote:\n> No one is talking about abolishing any standard SQL feature. After\n> you carefully verified design you can hard-code links to speedup\n> access. Before that has happened the usual SQL will do.\n\nLeon,\n\nInteresting idea. You are re-introducing some of the\nhierarchical database ideas, is this right? Here is \nwhat I'm receiving, could you correct me if I'm \nmis-understanding? (much of this you did not say...)\n\n- - - - - - \n\nIn current SQL implementations, if updates are done on a\ntuple being modified, referencing tables are not usually\nidentified or checked, let alone updated. Then, when a \nquery is requested, the database figures out how referenced \ndata can be retrieved, and does this at the time of the query.\n\nIn this proposal, in addition to carrying a primary key\nfor a referenced table, tuples in the referencing table \nwill also have a place to record the physical address \nof each referenced tuple. In this way, referenced \ndata is easily retrieved during a query, since the\nphysical address of the referenced information is\nstored in the referent.\n\nFor example, let's take the following schema\n\nORDER (ORDER_ID, ... );\nPRODUCT (PRODUCT_ID, NAME, ... );\nORDER_LINE (ORDER_ID,PRODUCT_ID, ... );\n\nIn the current cases, changes to the PRODUCT table,\nlet's say a changed name, do not result in an update\nof the ORDER_LINE tuples which reference the product\ntuple being changed.\n\nIn this proposal, a few hidden fields (ID/REF) would be added:\n\nORDER ( LEON_ID, ORDER_ID, ... );\nPRODUCT ( LEON_ID, PRODUCT_ID, NAME, ... );\nORDER_LINE ( LEON_ID, ORDER_ID, PRODUCT_ID, ... , PRODUCT_LEON_REF );\n\nWhere the ORDER_LINE table would have a reference to the\nphysical LEON_ID of the tuple being referenced by PRODUCT_ID.\n\nThen, an update of the PRODUCT table would result in a cascading\nupdate of all referencing tables, including ORDER_LINE to \nchange the PRODUCT_LEON_REF from its previous value to the\nupdate value. The LEON_ID and LEON_REF are internal implementation\nfields and not available through SQL.\n\nSUMMARY,\n\nWith this method, query speed is drastically improved since\nthe \"join\" logic is performed once during insert, instead\nof many times during select.\n\nThis method should work well, when the referencing table\nchanges relatively infrequently. Thus people, products, \nand other relatively static \"reference\" information is\na key candidate for this 'indexing' technique.\n\nThis technique should not be used if the frequency of \nupdates exceeds the frequency of select statements. \n\n- - - - - - -\n\nOverall, I think it is a good idea. I frequently do weaker\nversions all the time that I call \"field caching\", where \nthe NAME field of infrequently changing tuples are frequently\naccessed. In this case, one may put PRODUCT_NAME in the \nORDER_LINE table and put a trigger on PRODUCT to cascade \nupdate of NAME to the ORDER_LINE.PRODUCT_NAME table. \nI tend to make monsters like this a nightly process, since\nproduct name changes need not be immediate (they are rare,\nand thus not frequent, and thus, not usually immediate). \nThis allows the cascade update to run at night when \nthings are a lot less stressful on the database.\n\nIs this in-line with what you are saying? \n\nI suggest borrowing an XML term for the idea, GROVE.\nIn XML, a GROVE is a tree built from XML/SGML/notations.\nIn this case, you can think of frequently joined \ninformation as cutting deep into the landscape, thus\nthe more the query is done, the more of a chance that\nthe UPDATE/SELECT ratio will be small, and thus, the\ngreater chance that the hard wired physical address\nis cached in the table. The reason I like the\nname, is that it defines a common pathway that is \neasy, without preventing the efficiency of uncommon \npaths (where updates >> select ).\n\nHmm. I'm just worrying about the CASCADE nature\nof the beast. On the extreme that I was writing\nabout earlier, a prototype OO dbms that I was\nlooking at about 6 years ago (god knows what the\nname is), they did *everything* this way. And\nGOD it was slow... especially since it cascaded\nwhen frequency of updates far exceeded the frequency\nof selects. \n\nThoughts?\n\n\nBest,\n\nClark\n", "msg_date": "Tue, 06 Jul 1999 09:31:05 -0400", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Clark Evans wrote:\n> \n> In this proposal, a few hidden fields (ID/REF) would be added\n                          ^^^^^^\nNot hidden, but with _link_ type.\n\nVadim\n", "msg_date": "Tue, 06 Jul 1999 21:54:25 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Hello Clark,\n\nTuesday, July 06, 1999 you wrote:\n\nC> Interesting idea. You are re-introducing some of the\nC> hierarchical database ideas, is this right? Here is\nC> what I'm receiving, could you correct me if I'm\nC> mis-understanding? (much of this you did not say...)\n\nStrictly speaking, this is neither hierarchical nor network\ndatabase. It is not hierarchical because cyclic graphs are\nallowed (when tables reference one another, maybe through\nsome intermediate table). And it is not network because there\nis not some weird restriction put on network database.\n(textbook says in network database one referenced tuple must\nbe at most in one link of certain link type)\n\nC> In this proposal, in addition to carrying a primary key\nC> for a referenced table, tuples in the referencing table\nC> will also have a place to record the physical address\nC> of each referenced tuple.\n\nI have read description carefully. I am afraid that MVCC\nwill break your scheme, because referencing tuple must have\na way to reach all versions of foreign updated tuple. If\nyou update the referencing field, all other versions of\nforeign tuple are lost. It seems the only way to satisfy\nMVCC is to chain updated foreign tuples with subsequent\nVACUUM. That's because there is no need of indices, as soon\nas the need of them is only during VACUUM.\n\n\nBest regards, Leon\n\n\n", "msg_date": "Tue, 6 Jul 1999 19:40:26 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "> When you will decide - to implement or not to implement,\n> I urge you to think again about the relief on optimizer,\n> which I stressed many times. No one rebutted yet that adding\n> brains to optimizer so that it can use appropriate join method\n> will require major rewrite. With links you get the best join\n> method as side effect - virtually for free. These joins\n> will never be too slow for an unknown reason. Think carefully.\n> I hope you will make wise decision.\n\nI believe Ingres does allow this, as it has tid's too. If you are\ncreating a temp table, you could use tids during your processing. In\nfact, it seems tids would be valid until a vacuum is performed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jul 1999 11:04:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Leon wrote:\n> C> In this proposal, in addition to carrying a primary key\n> C> for a referenced table, tuples in the referencing table\n> C> will also have a place to record the physical address\n> C> of each referenced tuple.\n> \n> I have read description carefully. I am afraid that MVCC\n> will break your scheme, because referencing tuple must have\n> a way to reach all versions of foreign updated tuple.\n> If you update the referencing field, all other versions of\n> foreign tuple are lost. \n> It seems the only way to satisfy\n> MVCC is to chain updated foreign tuples with subsequent\n> VACUUM. That's because there is no need of indices, as soon\n> as the need of them is only during VACUUM.\n\n(look of puzzlement) Where did I go wrong with what \nyou are proposing? I'm not trying to invent my\nown scheme... I'm trying to understand yours.\n\n;) Clark\n", "msg_date": "Tue, 06 Jul 1999 12:28:45 -0400", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Hello Clark,\n\nTuesday, July 06, 1999 you wrote:\n\nC> (look of puzzlement) Where did I go wrong with what\nC> you are proposing? I'm not trying to invent my\nC> own scheme... I'm trying to understand yours.\n\nOk. If American people wants to know the True Path, it\ncan be enlightened :))) (it's a joke)\n\nSo what's exactly proposed:\n\nIntroduction of what will seem a new data type in table\nstructure:\n\nCREATE TABLE atable (a int4)\nCREATE TABLE btable (b int4, c link (atable)) - \"link\" looks like\nnew data type.\n\nExample query with link:\n\nSELECT * FROM atable where btable.b < 5 AND btable.c = atable.tid\n(or here should go ctid - you can know better)\n\nType checking:\n\nCREATE TABLE ctable (d int4)\nSELECT * FROM ctable where btable.b < 5 AND btable.c = ctable.tid -\nit should produce an error because link isn't to ctable.\n\nNo additional constraint is placed. Tables can reference one\nanother in any combination, maybe the table should be able\nto reference itself.\n\nHow all that is implemented:\n\nAs we have seen, link is matched against tid in queries. It\nmeans that link internally can be of the same data type as tid.\n\nMVCC stuff: as Vadim pointed out, updated tuples are chained\nalready, so this feature can naturally be utilized. Referencing\ntuple is always pointing to the oldest version of foreign\nupdated tuple. If transaction needs the version of foreign\ntuple other than oldest, it follows the chain.\n\nVacuuming removes these chains thus packing the table and\nrewriting references to vacuumed table in other tables.\nVacuuming thus needs high priority, maybe lock on the table\nbeing vacuumed and all referencing tables.\n\nSince referencing fields are rewritten only during vacuum,\nthere is no need of indices on any field.\n\nBest regards, Leon\n\n\n", "msg_date": "Tue, 6 Jul 1999 22:36:09 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "On Mon, 5 Jul 1999, Tom Lane wrote:\n\n> I am not sure there's anything fundamentally wrong with his basic point;\n> if, say, we could find a way to construct OIDs so that a tuple could be\n> found very quickly from its OID, that wouldn't violate the relational\n> model AFAICS, and such OIDs would work fine as \"links\". But I don't see\n> any way to do that without either giving up UPDATE or introducing a huge\n> amount of baggage into all processes that can update tables (VACUUM\n> being the worst case, likely). Without doubt the best compromise would\n> look remarkably like an index on OID.\n\nSo is there anything wrong with that?\n\n> Ultimately, when you consider both the update costs and the access\n> costs, I doubt that this sort of thing could be a win, except maybe\n> in the case where the referenced table is hardly ever changed so that\n> the update costs are seldom incurred. But in that situation it's not\n> clear you want to store the referenced table in an RDBMS anyway ---\n> there are lots of lower-overhead ways to deal with fixed tables, such\n> as perfect-hash generators.\n\nWhile I read this thread I noticed that a lot of people are concerned\nabout their update speeds. I am primarily concerned about query speeds.\nConsider how often you update data vs. how often you query it. That's the\nwhole point of a database: to optimize information retrieval. Now I am not\nsure how big those update performance penalties would be but I am not\nconcerned really.\n\nMeanwhile I agree that hard-linking via record IDs sounds suspiciously\nlike a page from the OODB textbook where it is praised for exactly the\nsame reasons the person who started this discussion cited: no joins. But\nin order for that to work (if it works) the database software would have\nto be written from scratch in order for it to be marginally efficient.\n\nThe question I ask myself though is, are there any concrete plans for\nreferential integrity via foreign key clauses? 6.6, 7.0, never? To me,\nthat's really much more important than query speed or MVCC.\n\n-- \nPeter Eisentraut\nPathWay Computing, Inc.\n\n", "msg_date": "Tue, 6 Jul 1999 18:24:57 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links " }, { "msg_contents": "Leon wrote:\n> \n> Hello Vadim,\n> \n> Tuesday, July 06, 1999 you wrote:\n> \n> >> These joins\n> >> will never be too slow for an unknown reason. Think carefully.\n> >> I hope you will make wise decision.\n> \n> V> Optimizer requires major rewrite in any case, even\n> V> having links implemented.\n> \n> I am afraid that optimizer, even totally rewritten, can't choose\n> the best method always. That is simply because it is such a\n> complex animal :) Bacterium - simple links will always win\n> in the field where they live :)\n\nFrom what I have read from earlier posts about the optimizer, \nthere can be situations where using links would actually be slower\nthan going through the optimiser, similar to the case where scanning \nthe whole table using an index can be orders of magnitude slower than \ndoing a direct scan.\n\nThat is of course if used unwisely ;)\n\nAnother thing that has remained unclear to me is the way to actually \ninsert or update the links - you can't just put another record there,\nso that should be some kind of field (tid,oid,...) or some function\nlike last_touched('other_table_name').\n\nSo, what have you thought to put there ?\n\n------\nHannu\n", "msg_date": "Wed, 07 Jul 1999 02:18:10 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Vadim Mikheev\n> Sent: Tuesday, July 06, 1999 8:12 PM\n> To: Tom Lane\n> Cc: Thomas Lockhart; Leon; [email protected]\n> Subject: Re: [HACKERS] Fwd: Joins and links\n>\n>\n> Tom Lane wrote:\n> >\n> > I am not sure there's anything fundamentally wrong with his basic point;\n> > if, say, we could find a way to construct OIDs so that a tuple could be\n> > found very quickly from its OID, that wouldn't violate the relational\n> > model AFAICS, and such OIDs would work fine as \"links\". But I don't see\n> > any way to do that without either giving up UPDATE or introducing a huge\n> > amount of baggage into all processes that can update tables (VACUUM\n> > being the worst case, likely). Without doubt the best compromise would\n> > look remarkably like an index on OID.\n>\n> There are no problems with UPDATE: updated tuple points to newer\n> version, so we can avoid update of referencing tuples here.\n> VACUUM would have to update referencing tuples (via normal\n> heap_replace, nothing special) while removing old versions.\n> This may cause deadlocks but we could give vacuum higher priority\n> and abort others.\n>\n> So, vacuum is the worst case, as pointed by Tom.\n> No problems with MVCC and other things.\n>\n\nWhat about dump/reload ?\nAnd would vacuum be much more complicated than now ?\nI think vacuum is sufficiently complicated now.\n\nDidn't these kinds of annoying things let RDBMS exceed\nNDBMS in spite of its low performance ?\n\nIf \"link\" is necessary at any cost, how about the following story ?\n\n \"link\" = OID + TID\n\n If oid pointed by TID is different from holding OID, executor resets\n TID using OID indices (my story needs OID indices).\n\nBy this way we need not change vacuum/dump/reload etc.\nThe command to update TID-s to latest ones may be needed.\n\nComments ?\n\nHiroshi Inoue\[email protected]\n\n\n", "msg_date": "Wed, 7 Jul 1999 09:31:11 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Hannu Krosing wrote:\n\n> Another thing that has remained unclear to me is the way to actually\n> insert or update the links - you can't just put another record there,\n> so that should be some kind of field (tid,oid,...) or some function\n> like last_touched('other_table_name').\n> \n> So, what have you thought to put there ?\n> \n\nEarlier I proposed that links should be of type similar to tid,\nso inserts should be fed with values of tid. But this requires\nintermediate step, so there can be a function which takes primary\nkey and returns tid, or as you say a function \nlast_touched('other_table_name') - this seems the best choice.\n\n-- \nLeon.\n\n\n", "msg_date": "Wed, 07 Jul 1999 11:03:40 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" } ]
[ { "msg_contents": "\n> I am not sure there's anything fundamentally wrong with his basic point;\n> if, say, we could find a way to construct OIDs so that a tuple could be\n> found very quickly from its OID, that wouldn't violate the relational\n> model AFAICS, and such OIDs would work fine as \"links\". But I don't see\n> any way to do that without either giving up UPDATE or introducing a huge\n> amount of baggage into all processes that can update tables (VACUUM\n> being the worst case, likely). Without doubt the best compromise would\n> look remarkably like an index on OID.\n> \nI think the best compromise would be to have ctid in the where clause.\nThis would need to always be considered as best path by the optimizer.\nThen the situation is similar to a primary key foreign key scenario.\n\nThe referenced table does not need an index, since we know the physical \nposition of the row we want (where ctid='(5,255)').\n\nWhat we need second is an update trigger for the referenced table that\nupdates old.ctid to new.ctid in the referencing table. For this to be\nefficient\nyou would need to create an index on the column that stores the reference.\n\nI do not actually think that we would need extra syntax to allow this,\nonly the access method for a ctid where clause.\n\nAndreas\n", "msg_date": "Tue, 6 Jul 1999 09:48:36 +0200 ", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "AW: [HACKERS] Fwd: Joins and links " }, { "msg_contents": "Zeugswetter Andreas IZ5 wrote:\n> \n> I think the best compromise would be to have ctid in the where clause.\n\nAnd we told about this a days ago, but no one did it.\n\n> This would need to always be considered as best path by the optimizer.\n> Then the situation is similar to a primary key foreign key scenario.\n> \n> The referenced table does not need an index, since we know the physical\n> position of the row we want (where ctid='(5,255)').\n> \n> What we need second is an update trigger for the referenced table that\n> updates old.ctid to new.ctid in the referencing table. For this to be\n\nTriggers are not fired by vacuum...\n\nVadim\n", "msg_date": "Tue, 06 Jul 1999 19:26:18 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AW: [HACKERS] Fwd: Joins and links" } ]
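Andreas's proposal above, restated as the two query shapes it needs; the tid literal '(5,255)' is his own example, the ctid access method did not exist yet, and maintaining the stored references efficiently would additionally require index support for the tid type, so this is only a sketch with illustrative table names:

    -- direct fetch by physical position, no index needed:
    SELECT * FROM referenced WHERE ctid = '(5,255)';
    -- the bookkeeping his update trigger would perform, with concrete
    -- tid literals standing in for old.ctid and new.ctid:
    UPDATE referencing SET ref_ctid = '(7,3)' WHERE ref_ctid = '(5,255)';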
[ { "msg_contents": "There were a couple of typos that somehow crept in. I fixed them in cvs\njust after 6.5 was released.\n\n[snip]\n\n\njavac postgresql/Driver.java\npostgresql/Driver.java:107: Identifier expected.\n } catch(PSQLException(ex1) {\n ^\n\nThis should read:\n } catch(PSQLException ex1) {\n\npostgresql/Driver.java:111: 'catch' without 'try'.\n } catch(Exception ex2) {\n ^\n\nYou need to add another catch before that line:\n\n\t} catch(PSQLException pex) {\n\t throw pex;\n\t} catch(Exception ex2) {\n\nCVS has got these fixes in there, so as soon as you get it working\nagain, it should be ok.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n", "msg_date": "Tue, 6 Jul 1999 14:41:30 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] CVS, Java etc" }, { "msg_contents": "\nIf I have a multi-field unique index, it allows me to insert duplicates\nif one of the fields is null. Is this a bug or not?\n\n-- \nChris Bitmead\nmailto:[email protected]\n", "msg_date": "Sat, 10 Jul 1999 23:50:49 +1000", "msg_from": "Chris Bitmead <[email protected]>", "msg_from_op": false, "msg_subject": "Unique index problem" } ]
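Chris's unique-index observation is easy to reproduce, and it follows from SQL's three-valued logic: two NULLs are not considered equal, so the index never sees a duplicate key. A small test case (names illustrative):

    CREATE TABLE t (a int4, b int4);
    CREATE UNIQUE INDEX t_a_b_key ON t (a, b);
    INSERT INTO t VALUES (1, 2);
    INSERT INTO t VALUES (1, 2);     -- rejected, as expected
    INSERT INTO t VALUES (1, NULL);
    INSERT INTO t VALUES (1, NULL);  -- accepted: NULL does not compare equal to NULL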
[ { "msg_contents": "\nCan a serial type be added to an existing table using ALTER TABLE? If so\nwhy doesn't it work on hub? If not, I guess that explains why it doesn't\nwork, so I'm open to suggestions on how to add it easily...\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Tue, 06 Jul 1999 15:47:27 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "alter table" }, { "msg_contents": "Vince Vielhaber <[email protected]> writes:\n> Can a serial type be added to an existing table using ALTER TABLE? If so\n> why doesn't it work on hub? If not, I guess that explains why it doesn't\n> work, so I'm open to suggestions on how to add it easily...\n\nI'm not sure about this, but I seem to recall that ALTER TABLE ADD\nCOLUMN doesn't know anything about adding constraints or defaults.\nAnd, of course, the default clause for a serial column is what *really*\nmakes it go. Given this lack, the fact that ALTER TABLE is also\nunprepared to create the underlying sequence object for a SERIAL column\nis the least of your worries ;-)\n\nThere was some discussion a few months ago about redesigning the support\nfor SERIAL columns to make them less of an add-on kluge and more of a\nreal integrated feature. I'd be inclined to think that that should\nhappen before we try to teach ALTER TABLE about serial columns ---\notherwise it'll just be another kluge in need of replacement. Does\nanyone recall what happened with that discussion thread?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Jul 1999 17:12:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] alter table " }, { "msg_contents": "\nOn 06-Jul-99 Tom Lane wrote:\n> Vince Vielhaber <[email protected]> writes:\n>> Can a serial type be added to an existing table using ALTER TABLE? If so\n>> why doesn't it work on hub? If not, I guess that explains why it doesn't\n>> work, so I'm open to suggestions on how to add it easily...\n> \n> I'm not sure about this, but I seem to recall that ALTER TABLE ADD\n> COLUMN doesn't know anything about adding constraints or defaults.\n> And, of course, the default clause for a serial column is what *really*\n> makes it go. Given this lack, the fact that ALTER TABLE is also\n> unprepared to create the underlying sequence object for a SERIAL column\n> is the least of your worries ;-)\n> \n> There was some discussion a few months ago about redesigning the support\n> for SERIAL columns to make them less of an add-on kluge and more of a\n> real integrated feature. I'd be inclined to think that that should\n> happen before we try to teach ALTER TABLE about serial columns ---\n> otherwise it'll just be another kluge in need of replacement. Does\n> anyone recall what happened with that discussion thread?\n> \n> regards, tom lane\n\nNow that you mention it I vaguely recall that thread. I'm also thinking \nthere may be a problem with serial types on hub. I tried using an insert\nwith select and although it inserted all 106 records, it didn't have a \nclue as to what the serial column was. So I dropped the table, dropped\nthe sequence and created it again. It gives me this:\n\nERROR: attribute 'mirrorid' not found\n\nI dropped Marc a note about it since I'm about to run out the door. (Baseball\nplayoffs are here)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Tue, 06 Jul 1999 17:21:49 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] alter table" } ]
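Pending a SERIAL-aware ALTER TABLE, the pieces can be assembled by hand, roughly what CREATE TABLE does behind the scenes for a SERIAL column. The column name mirrorid comes from Vince's error message, the table name mirrors is assumed, and since ALTER TABLE of this era cannot attach the DEFAULT clause, later inserts must call nextval() explicitly:

    CREATE SEQUENCE mirrors_mirrorid_seq;
    ALTER TABLE mirrors ADD COLUMN mirrorid int4;
    UPDATE mirrors SET mirrorid = nextval('mirrors_mirrorid_seq');
    -- for each later insert, supply the column by hand, e.g.
    -- with nextval('mirrors_mirrorid_seq') in the VALUES list.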
[ { "msg_contents": "Hi all!\n\nThis was first posted in pgsql-admin and I got no answer.\n\nI have recently upgraded from Postgresql 6.4.2 to 6.5 on a Slackware 3.5\nwith kernel 2.0.30. The processor is Intel P5-133 and it has 32 MB RAM.\n\nAs a result the trigger functions defined in refint.c from the directory\n[SRC_DIR]contrib/spi/ (check_primary_key, check_secondary_key) and\nothers defined in the same manner are silently ignored. That means it is\ninserting tuples referring to nonexistent keys and I don't even get an\nerror message when I delete refint.so, restart the postmaster and try\nto do an action which is supposed to fire the triggers.\n\nWhen I'm trying to use them as normal functions (with SELECT\ncheck_primary_key(...), etc.) not as triggers I'm getting the expected\noutput or error messages (e.g. refint.so missing or working only for\ntriggers).\n\nHas anybody encountered this malfunction? Please tell me what is the\ntrail to follow in the source files in order to track this malfunction.\nI would like to know what functions are called and in which order, so I\ncan pinpoint where things get weird.\n\n\nThanks in advance,\n\nCamil\n", "msg_date": "Tue, 06 Jul 1999 12:59:29 -0700", "msg_from": "Camil Coaja <[email protected]>", "msg_from_op": true, "msg_subject": "Trigger functions written in C never get called" } ]
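One thing worth checking in a setup like Camil's is whether the triggers were actually re-created against the new 6.5 installation after the upgrade: if the CREATE TRIGGER step is missing, inserts simply proceed with no checking at all. The usual wiring, following contrib/spi/refint.example (the library path and the table/column names here are illustrative):

    CREATE FUNCTION check_primary_key() RETURNS opaque
        AS '/usr/local/pgsql/lib/refint.so' LANGUAGE 'c';
    CREATE TRIGGER b_check_a BEFORE INSERT OR UPDATE ON btable
        FOR EACH ROW EXECUTE PROCEDURE
        check_primary_key('aid', 'atable', 'id');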
[ { "msg_contents": "Hi.\nFor some time I've been staring at the installation procedure and shaking\nmy head. It looks to me like someone took the rule of least privilege to an\nextreme. Although I believe it is important to be able to install postgres\nif you do not have root access, I think this represents a minority of\nusers.\n\nI think most SA's would prefer to be able to make;make install and have\npostgres install itself and set the permission rather than su'ing to the\npostgres user and building/installing that way. Am I the only one who\nthinks that the install procedure is more complex than it needs to be?\n\nAlso, I've noted the permissions of the installed binaries as a potential\nsecurity risk. A small one, but still... Suppose a user found a buffer\noverrun in postgres (I don't think this would be too hard to do); they\ncould gain access to the postgres account and use that to trojan the\npostgres binaries. The solution would of course be to install the binaries\nowned by root. I normally do this manually, but I think it should be an\ninstall thing.\n\nIf people think these two ideas are good ones, I can easily come up with\npatches for the install.\n\n-Michael\n\n", "msg_date": "Tue, 6 Jul 1999 18:49:58 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Installation permissions" }, { "msg_contents": "Michael Richards <[email protected]> writes:\n> I think most SA's would prefer to be able to make;make install and have\n> postgres install itself and set the permission rather than su'ing to the\n> postgres user and building/installing that way. Am I the only one who\n> thinks that the install procedure is more complex than it needs to be?\n\nWe've been around on that a couple of times. I'm of the opinion that\nhaving the install procedure contain hardwired assumptions about how to\nset the ownership of the installed files will make life more complex,\nnot simpler. In particular I do not think it would be an improvement\nif the install *had* to be done as root; but on a lot of systems chown\nrequires root privs. Removing one \"su\" step for yourself is not worth\nmaking installation much more difficult for people who are not in the\nsame situation you are.\n\n> Also, I've noted the permissions of the installed binaries as a potential\n> security risk. A small one, but still... Suppose a user found a buffer\n> overrun in postgres (I don't think this would be too hard to do); they\n> could gain access to the postgres account and use that to trojan the\n> postgres binaries. The solution would of course be to install the binaries\n> owned by root. I normally do this manually, but I think it should be an\n> install thing.\n\nWaste of time, as long as postgres is an unprivileged user. What you're\nsaying is that once someone has broken into the postgres account, they\ncan hack the postgres binaries to do anything that postgres can do.\nBut they can *already* do anything that postgres can do.\n\nAnd, again, if install insists on doing things that way then it will\nfail when not run as root.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jul 1999 10:49:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Installation permissions " } ]
[ { "msg_contents": "I have updated the FAQ list, to make it easier to read and a little\nshorter.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jul 1999 17:59:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Updated FAQ" } ]
[ { "msg_contents": "> Vince Vielhaber <[email protected]> writes:\n> > Can a serial type be added to an existing table using ALTER TABLE? If so\n> > why doesn't it work on hub? If not, I guess that explains why it doesn't\n> > work, so I'm open to suggestions on how to add it easily...\n> \n> I'm not sure about this, but I seem to recall that ALTER TABLE ADD\n> COLUMN doesn't know anything about adding constraints or defaults.\n> And, of course, the default clause for a serial column is what *really*\n> makes it go. Given this lack, the fact that ALTER TABLE is also\n> unprepared to create the underlying sequence object for a SERIAL column\n> is the least of your worries ;-)\n> \n> There was some discussion a few months ago about redesigning the support\n> for SERIAL columns to make them less of an add-on kluge and more of a\n> real integrated feature. I'd be inclined to think that that should\n> happen before we try to teach ALTER TABLE about serial columns ---\n> otherwise it'll just be another kluge in need of replacement. Does\n> anyone recall what happened with that discussion thread?\n> \n> \t\t\tregards, tom lane\n\nYes, I was the one who wanted to redo SERIAL support as a type instead of a \nsequence and index. I hope to begin working on that later this week, or \nsometime next week. (Work has kept me busy lately :()\n\n-Ryan\n", "msg_date": "Tue, 6 Jul 1999 16:11:53 -0600 (MDT)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] alter table" } ]
[ { "msg_contents": "> In both datetime_trunc() and timespan_trunc() in dt.c,\n> the DTK_MICROSEC case is just like the DTK_MILLISEC case.\n> I think this is wrong and it ought to look like\n> fsec = rint(fsec * 1000000) / 1000000;\n> no?\n\nTom, I looked at this and your fix is the right thing. I am leaving\nfor a week of vacation, and don't have time to apply the fix. If you\nwould like to, be my guest :)\n\nTIA\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 07 Jul 1999 00:04:55 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A couple comments about datetime" }, { "msg_contents": "> > In both datetime_trunc() and timespan_trunc() in dt.c,\n> > the DTK_MICROSEC case is just like the DTK_MILLISEC case.\n> > I think this is wrong and it ought to look like\n> > fsec = rint(fsec * 1000000) / 1000000;\n> > no?\n> \n> Tom, I looked at this and your fix is the right thing. I am leaving\n> for a week of vacation, and don't have time to apply the fix. If you\n> would like to, be my guest :)\n> \n\nApplied.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 23:21:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: A couple comments about datetime" } ]
[ { "msg_contents": "I just had an idea.\n\nMy tracking of bugs is my e-mail box, and the TODO list.\n\nShould I make a cvs-checked-in mailbox of known bug reports?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Jul 1999 23:38:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Bug tracking" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I just had an idea.\n> My tracking of bugs is my e-mail box, and the TODO list.\n> Should I make a cvs-checked-in mailbox of known bug reports?\n\nWe could put BugZilla into use.\n\n;) Clark\n", "msg_date": "Wed, 07 Jul 1999 08:14:31 -0400", "msg_from": "Clark Evans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> My tracking of bugs is my e-mail box, and the TODO list.\n> Should I make a cvs-checked-in mailbox of known bug reports?\n\nIf we're not going to put up a full-fledged bug tracking system,\nthat would be a simple and useful improvement to the current\nstate of affairs... but a BTS would be better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jul 1999 10:58:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking " }, { "msg_contents": "\nThe question is...which one? I'm willing to install one, but it has to be\nsomething that everyone can use, and that means even those without GUI\ncapabilities.\n\nOn Wed, 7 Jul 1999, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n> > My tracking of bugs is my e-mail box, and the TODO list.\n> > Should I make a cvs-checked-in mailbox of known bug reports?\n> \n> If we're not going to put up a full-fledged bug tracking system,\n> that would be a simple and useful improvement to the current\n> state of affairs... but a BTS would be better.\n> \n> \t\t\tregards, tom lane\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 21 Jul 1999 09:15:05 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking " }, { "msg_contents": "> \n> The question is...which one? I'm willing to install one, but it has to be\n> something that everyone can use, and that means even those without GUI\n> capabilities.\n> \n\nI think we can assume a browser, no?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Jul 1999 11:10:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug tracking" }, { "msg_contents": "On Wed, 21 Jul 1999, Bruce Momjian wrote:\n\n> > \n> > The question is...which one? I'm willing to install one, but it has to be\n> > something that everyone can use, and that means even those without GUI\n> > capabilities.\n> > \n> \n> I think we can assume a browser, no?\n\nWhy? That assumes always using a Unix machine with X-Windows running, no?\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 21 Jul 1999 13:01:15 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking" }, { "msg_contents": "> On Wed, 21 Jul 1999, Bruce Momjian wrote:\n> \n> > > \n> > > The question is...which one? I'm willing to install one, but it has to be\n> > > something that everyone can use, and that means even those without GUI\n> > > capabilities.\n> > > \n> > \n> > I think we can assume a browser, no?\n> \n> Why? That assumes always using a Unix machine with X-Windows running, no?\n\nHow about lynx? Do they support that? Probably not.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 21 Jul 1999 12:06:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Bug tracking" }, { "msg_contents": "On Wed, 21 Jul 1999, The Hermit Hacker wrote:\n\n> On Wed, 21 Jul 1999, Bruce Momjian wrote:\n> \n> > > \n> > > The question is...which one? I'm willing to install one, but it has to be\n> > > something that everyone can use, and that means even those without GUI\n> > > capabilities.\n> > > \n> > \n> > I think we can assume a browser, no?\n> \n> Why? That assumes always using a Unix machine with X-Windows running, no?\n\nHuh? How do you figure?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 21 Jul 1999 12:25:36 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking" }, { "msg_contents": "What's wrong with Lynx? Keep it simple and text-based.\n\nCiao\n --Louis <[email protected]> \n\nLouis Bertrand http://www.bertrandtech.on.ca\nBertrand Technical Services, Bowmanville, ON, Canada \n\nOpenBSD: Secure by default. http://www.openbsd.org/\n\nOn Wed, 21 Jul 1999, The Hermit Hacker wrote:\n\n> On Wed, 21 Jul 1999, Bruce Momjian wrote:\n> \n> > > \n> > > The question is...which one? I'm willing to install one, but it has to be\n> > > something that everyone can use, and that means even those without GUI\n> > > capabilities.\n> > > \n> > \n> > I think we can assume a browser, no?\n> \n> Why? That assumes always using a Unix machine with X-Windows running, no?\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> \n> \n", "msg_date": "Wed, 21 Jul 1999 17:10:40 +0000 (GMT)", "msg_from": "Louis Bertrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Wed, 21 Jul 1999, Bruce Momjian wrote:\n>>>> The question is...which one? 
I'm willing to install one, but it has to be\n>>>> something that everyone can use, and that means even those without GUI\n>>>> capabilities.\n>> \n>> I think we can assume a browser, no?\n\n> Why? That assumes always using a Unix machine with X-Windows running, no?\n\nOr a PC with Windoze, or a Mac, or a character terminal with Lynx ...\n\nBut I agree with Marc: it'd be a good idea to ensure that someone with\nonly (say) email access can do at least the basic stuff of submitting\nbug reports and checking on the status of bugs.\n\nPossibly we could get away with a manual approach, at least to start\nwith. I suspect that the majority of users do have a web browser,\nso probably only a few bug reports would still need to come via the\npgsql-bugs mailing list. If someone would volunteer to manually\ntransfer said reports into the tracking system, that'd handle the\nemail-submission problem.\n\nNot sure what to do about the status tracking part of the problem.\nBut, since currently there is *no* way to track the status of bugs,\nit's not like using a Web-based tracking system is going to take\nanything away from email-only people that they have now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Jul 1999 13:28:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Bug tracking " } ]
[ { "msg_contents": "All,\n\nPeter Eisentraut wrote:\n>> The question I ask myself though is, are there any concrete plans for\n>> referential integrity via foreign key clauses? 6.6, 7.0, \n>> never? To me,\n>> that's really much more important than query speed or MVCC.\nI'd have to go along with this, because I've noticed that DRI is a LOT\nfaster than using triggers to implement RI. Although not on PG (on Oracle\nactually), I think that the results can be extrapolated closely enough. DRI\nreduces the implementation overhead dramatically.\n\nMikeA\n", "msg_date": "Wed, 7 Jul 1999 09:01:44 +0200 ", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Joins and links " }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> All,\n> \n> Peter Eisentraut wrote:\n> >> The question I ask myself though is, are there any concrete plans for\n> >> referential integrity via foreign key clauses? 6.6, 7.0, \n> >> never? To me,\n> >> that's really much more important than query speed or MVCC.\n> I'd have to go along with this, because I've noticed that DRI is a LOT\n> faster than using triggers to implement RI. Although not on PG (on Oracle\n> actually), I think that the results can be extrapolated closely enough. DRI\n> reduces the implementation overhead dramatically.\n\nThis is on our 6.6 short list, and someone has said he will work on it\n\"after 6.5\".\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 03:47:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: Joins and links" } ]
[ { "msg_contents": "\n============================================================================\n POSTGRESQL BUG REPORT TEMPLATE\n============================================================================\n\n\nYour name : Yves MARTIN\nYour email address : [email protected]\n\nCategory : runtime: back-end: SQL\nSeverity : non-critical\n\nSummary: No primary key possible with type reltime & timestamp\n\nSystem Configuration\n--------------------\n Operating System : Solaris 2.6\n\n PostgreSQL version : 6.5\n\n Compiler used : egcs-2.91.66\n\nHardware:\n---------\nSunOS 5.6 Generic_105181-12 sun4u sparc SUNW,Ultra-Enterprise\n\nVersions of other tools:\n------------------------\n\n\n--------------------------------------------------------------------------\n\nProblem Description:\n--------------------\nError message when trying to create a table\nwith a primary key on type reltime or timestamp\n\n--------------------------------------------------------------------------\n\nTest Case:\n----------\ncreate table periodes ( b reltime primary key ) ;\nERROR: Can't find a default operator class for type 703.\n\ncreate table periodes ( b timestamp primary key ) ;\nERROR: Can't find a default operator class for type 1296.\n\n\n--------------------------------------------------------------------------\n\nSolution:\n---------\n\n\n--------------------------------------------------------------------------\n\n", "msg_date": "Wed, 7 Jul 1999 04:20:11 -0400 (EDT)", "msg_from": "Unprivileged user <nobody>", "msg_from_op": true, "msg_subject": "Port Bug Report: No primary key possible with type reltime &\n timestamp" }, { "msg_contents": "\nUpdated TODO item:\n\n * Creating index of TIMESTAMP & RELTIME fails, rename to DATETIME(Thomas)\n\n\n> Problem Description:\n> --------------------\n> Error message when trying to create a table\n> with a primary key on type reltime or timestamp\n> \n> --------------------------------------------------------------------------\n> \n> Test Case:\n> ----------\n> create table periodes ( b reltime primary key ) ;\n> ERROR: Can't find a default operator class for type 703.\n> \n> create table periodes ( b timestamp primary key ) ;\n> ERROR: Can't find a default operator class for type 1296.\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 23:23:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Port Bug Report: No primary key possible with type\n reltime\n\t& timestamp" }, { "msg_contents": "> Problem Description:\n> --------------------\n> Error message when trying to create a table\n> with a primary key on type reltime or timestamp\n> Solution:\n> ---------\n\nUse timespan and datetime instead. They are better supported; perhaps\nin the next release \"reltime\" and \"timestamp\" will simply be aliases\nfor them...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 05:47:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Port Bug Report: No primary key possible with type\n\treltime & timestamp" }, { "msg_contents": "Porting:\n1) Seems like -O0/-O1 works best for this machine. 
It appears u get\nspin lock errors/timeouts if i optimize at -O3, and -O2\n\n2) in nabstime.h, the typedefs AbsoluteTime & RelativeTime ( was that\nAbsolutetime & Relativetime ) should be kept at a fixed ( for all ports\n) size like int32. Adjusting it to what ever size time_t becomes leads\nto problems with 'signed' words v. 'non-signed' extended longwords. For\ninstance the constant 0x80000001 is a negative 32bit integer, but as a\ntime_t it just a large positive number!.\n\n3) Having problems with sign extension also creates problems for '@ 3\nseconds ago'::reltime. see #2\n\n4) You dont store reltime or abstime as 64bit's into the db. Are there\nany plans to use 64bit alpha/linux timevalues as reltime & abstime ?\nmaybe reltime64 & abstime64? whats the sql world doing with 'seconds\nsince 1970' if the year is > 2038 ( which is the 32bit signed overflow )\n?\n\n5) having $(CC) -mieee all over just isn't good, particular if no float\noperations are done. It slows down everthing. Is there a way to limit\nthis in the makefile?\ngat\n\nBTW these are porting issues ( but as well hacking issues ).\n\n", "msg_date": "Wed, 14 Jul 1999 06:23:22 -0400", "msg_from": "Uncle George <[email protected]>", "msg_from_op": false, "msg_subject": "RedHat6.0 & Alpha" }, { "msg_contents": "> Porting:\n> 1) Seems like -O0/-O1 works best for this machine. It appears u get\n> spin lock errors/timeouts if i optimize at -O3, and -O2\n\nYes, the 6.5.1 code will use:\n\n\tCFLAGS:-O -mieee # optimization -O2 removed because of egcs problem\n\n> \n> 2) in nabstime.h, the typedefs AbsoluteTime & RelativeTime ( was that\n> Absolutetime & Relativetime ) should be kept at a fixed ( for all ports\n> ) size like int32. Adjusting it to what ever size time_t becomes leads\n> to problems with 'signed' words v. 'non-signed' extended longwords. For\n> instance the constant 0x80000001 is a negative 32bit integer, but as a\n> time_t it just a large positive number!.\n\nOK, the real problem is that we are using \"magic\" values to mark certain\nvalues, and this is not done portably. Can you suggestion good values?\nCan you send over a patch?\n\n> 3) Having problems with sign extension also creates problems for '@ 3\n> seconds ago'::reltime. see #2\n\nSame thing. We should not be using hard-coded values.\n\n> \n> 4) You dont store reltime or abstime as 64bit's into the db. Are there\n> any plans to use 64bit alpha/linux timevalues as reltime & abstime ?\n> maybe reltime64 & abstime64? whats the sql world doing with 'seconds\n> since 1970' if the year is > 2038 ( which is the 32bit signed overflow )\n> ?\n\nNot sure on this.\n\n> 5) having $(CC) -mieee all over just isn't good, particular if no float\n> operations are done. It slows down everthing. Is there a way to limit\n> this in the makefile?\n> gat\n\nWhat does that flag do, and where would it be needed or not needed?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 11:11:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "Well, a reply, anyway\n\n1) reltime & abstime values are stored in the DB as 4 byte values. The\ndefinitions for abstime&reltime are also stored in the DB ( this from empiracle\ndebugging ) . 
If you do not plan to embrace the notion of #of seconds >\n2^(32-1), and you dont want to alter the DB notion that storage is 4 bytes then\n\n typedef int32 Absolutetime;\n typedef int32 Relativetime;\n\n would appear to be most preferable & more stable (majic #'s work ) than\n\n typedef time_t Absolutetime;\n typedef time_t Relativetime;\n\n This is not a complete solution , as there are still some sign extension\nproblems as demonstratable by the regression tests.\n If you want to use 64bit Absolutetime & reltimes, then you should adjust (\nor make more abstract ) the concept of abstime&reltime. BUT\nTHIS IS NOT A PORTING ISSUE! I would just like to get the abstime*reltime to\nbehave much like the 32bit folks.\n\n2) Can u add HAS_LONG_LONG to $(CFLAGS)\n I dont have long long, but it turns on some code ( somewhere ) that fixes\nanother problem.\n\n3) -mieee informs the egcs compiler fot the alpha to inject 'trapb'\ninstructions at various places in a floating point computation. The trapb is a\npipeline stall forcing the processor to stop issueing instructions until all\ncurrent instructions in the pipeline have executed. This is done to capture a\npossible 'fault' at a resomable time so you can backtrack to the instruction\nthat faulted and take some corrective measure. There are also rules for\nbacktracing, and repairing. The usage of -mieee inserted these trapb's all over\nthe place. The current egcs compiler appears to do a better job at it For\npurely int operations, then a module need not be enhanced by the -mieee switch.\n\n4) I'd give u some patches, but still getting the regression tests to work.\nWhere do I get 6.5.1, so I can work with that as a base\n\n5) What is the floating point rounding set to ( up/down ). There seems to be an\nextra digit of precision in ur i386, where the alpha port appears to round up (\nand have 1 digit less :( )\n\ngat\n\nBruce Momjian wrote:\n\n> > Porting:\n> > 1) Seems like -O0/-O1 works best for this machine. It appears u get\n> > spin lock errors/timeouts if i optimize at -O3, and -O2\n>\n> Yes, the 6.5.1 code will use:\n>\n> CFLAGS:-O -mieee # optimization -O2 removed because of egcs problem\n>\n> >\n> > 2) in nabstime.h, the typedefs AbsoluteTime & RelativeTime ( was that\n> > Absolutetime & Relativetime ) should be kept at a fixed ( for all ports\n> > ) size like int32. Adjusting it to what ever size time_t becomes leads\n> > to problems with 'signed' words v. 'non-signed' extended longwords. For\n> > instance the constant 0x80000001 is a negative 32bit integer, but as a\n> > time_t it just a large positive number!.\n>\n> OK, the real problem is that we are using \"magic\" values to mark certain\n> values, and this is not done portably. Can you suggestion good values?\n> Can you send over a patch?\n>\n> > 3) Having problems with sign extension also creates problems for '@ 3\n> > seconds ago'::reltime. see #2\n>\n> Same thing. We should not be using hard-coded values.\n>\n> >\n> > 4) You dont store reltime or abstime as 64bit's into the db. Are there\n> > any plans to use 64bit alpha/linux timevalues as reltime & abstime ?\n> > maybe reltime64 & abstime64? whats the sql world doing with 'seconds\n> > since 1970' if the year is > 2038 ( which is the 32bit signed overflow )\n> > ?\n>\n> Not sure on this.\n>\n> > 5) having $(CC) -mieee all over just isn't good, particular if no float\n> > operations are done. It slows down everthing. 
Is there a way to limit\n> > this in the makefile?\n> > gat\n>\n> What does that flag do, and where would it be needed or not needed?\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Wed, 14 Jul 1999 20:39:24 -0400", "msg_from": "Uncle George <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "> Well, a reply, anyway\n> \n> 1) reltime & abstime values are stored in the DB as 4 byte values. The\n> definitions for abstime&reltime are also stored in the DB ( this from empiracle\n> debugging ) . If you do not plan to embrace the notion of #of seconds >\n> 2^(32-1), and you dont want to alter the DB notion that storage is 4 bytes then\n> \n> typedef int32 Absolutetime;\n> typedef int32 Relativetime;\n> \n> would appear to be most preferable & more stable (majic #'s work ) than\n> \n> typedef time_t Absolutetime;\n> typedef time_t Relativetime;\n> \n> This is not a complete solution , as there are still some sign extension\n> problems as demonstratable by the regression tests.\n> If you want to use 64bit Absolutetime & reltimes, then you should adjust (\n> or make more abstract ) the concept of abstime&reltime. BUT\n> THIS IS NOT A PORTING ISSUE! I would just like to get the abstime*reltime to\n> behave much like the 32bit folks.\n\nMakes sense. Using time_t does not make sense if we are forcing\neverything to 4 bytes.\n\n> \n> 2) Can u add HAS_LONG_LONG to $(CFLAGS)\n> I dont have long long, but it turns on some code ( somewhere ) that fixes\n> another problem.\n\nCheck configure. It runs a test to see if long long works, and sets that\nin include/config.h.\n\n> \n> 3) -mieee informs the egcs compiler fot the alpha to inject 'trapb'\n> instructions at various places in a floating point computation. The trapb is a\n> pipeline stall forcing the processor to stop issueing instructions until all\n> current instructions in the pipeline have executed. This is done to capture a\n> possible 'fault' at a resomable time so you can backtrack to the instruction\n> that faulted and take some corrective measure. There are also rules for\n> backtracing, and repairing. The usage of -mieee inserted these trapb's all over\n> the place. The current egcs compiler appears to do a better job at it For\n> purely int operations, then a module need not be enhanced by the -mieee switch.\n\nI am stumped on why we even need -mieee, but someone supplied a patch to\nadd it.\n\n> \n> 4) I'd give u some patches, but still getting the regression tests to work.\n> Where do I get 6.5.1, so I can work with that as a base\n\nGo to ftp.postgresql.org, and get the \"snapshot\". That will be 6.5.1 on\nJuly 19th.\n\n> 5) What is the floating point rounding set to ( up/down ). There seems to be an\n> extra digit of precision in ur i386, where the alpha port appears to round up (\n> and have 1 digit less :( )\n\nNot sure where that is set. Would be fpsetround() on BSD/OS, however, I\ndon't see us setting it anywhere, so my guess is that we are using the\nOS default for this.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 23:13:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "1) The reason why for -mieee is that if u care for some of the 'rare' floating point\nexceptions ( as defined by alpha floating point hardware ) then u want to handle\nthem - as per ieee specifications to give u the correct ieee result. When the\nprocessor cant handle the exceptions it (can ) traps to software assist routines\n( hidden in the kernel ). But in order for the kernel to fix the exception u have to\nstop the pipeline as close to the problem, so u can backtrace the user pc ( which is\nby now quite a few instructions ahead of where the exception occured ) to the point\nwhere it occured to see what register needs to have the correct value inserted.\n Without the -mieee, the compiler will not arrange the float operations so that\nit can be backstepped when a fault occures. The kernel then cannot fix the problem,\nand forces a floating point exception onto the program. Death usually follows.\n Therefor only do -mieee where u need to. ERGO can this flag be set individually\nas per each individual makefile, and not as per ./configure ?\n2) Then I want to report a bug - HAS_LONG_LONG in one of the 'c' files needs to be\nturned on - I think there is only one - also for RH6.0/alpha. I dont think that\nRH6.0/alpha has long long as a type and just uses long to define a 64bit quantity\n3) Then can I presume that Absolutetime/Relativetime in nabstime.h will be changed\nto int32?\n\nBruce Momjian wrote:\n\n> > Well, a reply, anyway\n> >\n> > 1) reltime & abstime values are stored in the DB as 4 byte values. The\n> > definitions for abstime&reltime are also stored in the DB ( this from empiracle\n> > debugging ) . If you do not plan to embrace the notion of #of seconds >\n> > 2^(32-1), and you dont want to alter the DB notion that storage is 4 bytes then\n> >\n> > typedef int32 Absolutetime;\n> > typedef int32 Relativetime;\n> >\n> > would appear to be most preferable & more stable (majic #'s work ) than\n> >\n> > typedef time_t Absolutetime;\n> > typedef time_t Relativetime;\n> >\n> > This is not a complete solution , as there are still some sign extension\n> > problems as demonstratable by the regression tests.\n> > If you want to use 64bit Absolutetime & reltimes, then you should adjust (\n> > or make more abstract ) the concept of abstime&reltime. BUT\n> > THIS IS NOT A PORTING ISSUE! I would just like to get the abstime*reltime to\n> > behave much like the 32bit folks.\n>\n> Makes sense. Using time_t does not make sense if we are forcing\n> everything to 4 bytes.\n>\n> >\n> > 2) Can u add HAS_LONG_LONG to $(CFLAGS)\n> > I dont have long long, but it turns on some code ( somewhere ) that fixes\n> > another problem.\n>\n> Check configure. It runs a test to see if long long works, and sets that\n> in include/config.h.\n>\n> >\n> > 3) -mieee informs the egcs compiler fot the alpha to inject 'trapb'\n> > instructions at various places in a floating point computation. The trapb is a\n> > pipeline stall forcing the processor to stop issueing instructions until all\n> > current instructions in the pipeline have executed. This is done to capture a\n> > possible 'fault' at a resomable time so you can backtrack to the instruction\n> > that faulted and take some corrective measure. There are also rules for\n> > backtracing, and repairing. 
The usage of -mieee inserted these trapb's all over\n> > the place. The current egcs compiler appears to do a better job at it For\n> > purely int operations, then a module need not be enhanced by the -mieee switch.\n>\n> I am stumped on why we even need -mieee, but someone supplied a patch to\n> add it.\n>\n> >\n> > 4) I'd give u some patches, but still getting the regression tests to work.\n> > Where do I get 6.5.1, so I can work with that as a base\n>\n> Go to ftp.postgresql.org, and get the \"snapshot\". That will be 6.5.1 on\n> July 19th.\n>\n> > 5) What is the floating point rounding set to ( up/down ). There seems to be an\n> > extra digit of precision in ur i386, where the alpha port appears to round up (\n> > and have 1 digit less :( )\n>\n> Not sure where that is set. Would be fpsetround() on BSD/OS, however, I\n> don't see us setting it anywhere, so my guess is that we are using the\n> OS default for this.\n>\n\n", "msg_date": "Thu, 15 Jul 1999 06:44:12 -0400", "msg_from": "Uncle George <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "> 1) The reason why for -mieee is that if u care for some of the 'rare' floating point\n> exceptions ( as defined by alpha floating point hardware ) then u want to handle\n> them - as per ieee specifications to give u the correct ieee result. When the\n> processor cant handle the exceptions it (can ) traps to software assist routines\n> ( hidden in the kernel ). But in order for the kernel to fix the exception u have to\n> stop the pipeline as close to the problem, so u can backtrace the user pc ( which is\n> by now quite a few instructions ahead of where the exception occured ) to the point\n> where it occured to see what register needs to have the correct value inserted.\n> Without the -mieee, the compiler will not arrange the float operations so that\n> it can be backstepped when a fault occures. The kernel then cannot fix the problem,\n> and forces a floating point exception onto the program. Death usually follows.\n> Therefor only do -mieee where u need to. ERGO can this flag be set individually\n> as per each individual makefile, and not as per ./configure ?\n\nRight now, it is hard to have makefile-specific flags.\n\n> 2) Then I want to report a bug - HAS_LONG_LONG in one of the 'c' files needs to be\n> turned on - I think there is only one - also for RH6.0/alpha. I dont think that\n> RH6.0/alpha has long long as a type and just uses long to define a 64bit quantity\n\nAdd 'set -x' to configure, and figure how how the test is working in\nconfigure. Look at the configure output. It shows how it is setting\nthose flags.\n\n> 3) Then can I presume that Absolutetime/Relativetime in nabstime.h will be changed\n> to int32?\n\nAdded to TODO:\n\n* Make Absolutetime/Relativetime int4 because time_t can be int8 on some ports\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jul 1999 09:42:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "In the regression test rules.sql there is this SQL command\n\n update rtest_v1 set a = rtest_t3.a + 20 where b = rtest_t3.b;\n\nWhich causes my alpha port to go core. 
The above line can be reduced to:\n\n update rtest_v1 set a = rtest_t3.a + 20 ;\n\nwhich also causes the same problem. It seems that the 64 bit address\n((Expr*)nodeptr)->oper gets truncated ( high 32 bits ) somewhere along the way.\n\nI was able to locate the errant code in rewriteManip.c:712. but There seems to be a\nbigger problem other than eraseing the upper 32bit address. It seems that\nFindMatchingNew() returns a node of type T_Expr, rather than the expected type of\nT_Var. Once u realize this then u can see why the now MISCAST \"(Var *)\n*nodePtr)->varlevelsup = this_varlevelsup\" will cause a problem. On my alpha this erases\na portion in the address in the T_Expr. On the redhat 5.2/i386 this code seems to be\nbenign, BUT YOU ARE ERASEING SOMETHING that doesn't belong to to T_Expr !\n\nSo what gives?\ngat\nMaybe an assert() will help in finding some of the miscast returned types? Wuddya think?\nsure would catch some of the boo-boo's hanging around\n\nrewriteManip.c:\n if (this_varno == info->new_varno &&\n this_varlevelsup == sublevels_up)\n {\n n = FindMatchingNew(targetlist,\n ((Var *) node)->varattno);\n if (n == NULL)\n {\n if (info->event == CMD_UPDATE)\n {\n *nodePtr = n = copyObject(node);\n ((Var *) n)->varno = info->current_varno;\n ((Var *) n)->varnoold = info->current_varno;\n }\n else\n *nodePtr = make_null(((Var *) node)->vartype);\n }\n else\n {\n *nodePtr = copyObject(n);\n ((Var *) *nodePtr)->varlevelsup = this_varlevelsup; /* This\nline zaps the address */\n }\n }\n\n\n\n\n\n", "msg_date": "Mon, 19 Jul 1999 20:21:10 -0400", "msg_from": "Uncle George <[email protected]>", "msg_from_op": false, "msg_subject": "RedHat6.0 & Alpha" }, { "msg_contents": "On Wed, 14 Jul 1999, Bruce Momjian wrote:\n\n> > 3) -mieee informs the egcs compiler fot the alpha to inject 'trapb'\n> > instructions at various places in a floating point computation. The trapb is a\n> > pipeline stall forcing the processor to stop issueing instructions until all\n> > current instructions in the pipeline have executed. This is done to capture a\n> > possible 'fault' at a resomable time so you can backtrack to the instruction\n> > that faulted and take some corrective measure. There are also rules for\n> > backtracing, and repairing. The usage of -mieee inserted these trapb's all over\n> > the place. The current egcs compiler appears to do a better job at it For\n> > purely int operations, then a module need not be enhanced by the -mieee switch.\n> \n> I am stumped on why we even need -mieee, but someone supplied a patch to\n> add it.\n\n\tThat someone would be me. :) I supplied a patch to add about a\nyear ago as that was the only way I could get some of the date/time code\nwork correctly. If it is needed anywhere anymore, then it is down in\nsrc/backend/util/adt, as that is where the datetime code is/was that were\ncausing FPEs to occur on regression testing. Without that flag, the\ndatetime code used to blow up all over the place. Might be worthwhile to\ntry removing it, recompiling, and running regression tests to see if it\nneeded anymore. That, and fixing the datetime code so it is not needed in\nthe first place (if it is still needed).\n\tThe biggest problem area for pgsql on Linux/Alpha at the moment is\nin the datetime code, including what reltime and abstime regression tests\nexercise.\n\tIf anyone wants me to test pgsql patches on Alpha, feel free to\nsend them my way, and I will give them a test on my XL366 Alpha running\nDebian 2.1. 
\n\tTTYL.\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n", "msg_date": "Mon, 19 Jul 1999 19:04:44 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "Thats NOT THE PROBLEM.\nAlthough u have localize the -mieee/float in the date stuff, u have needlessly\ninflicted the -mieee switch on ALL compiled modules.\nI would have prefered it to be makefile ( Certainly on a SUBSYS.o, and even better on\non a per module.o) compile under a makefile switch\nie: ( or something simular )\n\nif eq($(CPUID),alpha)\nmyfloat.o: myfloat.c\n $(CC) $(CFLAGS) -mieee myfloat.c -o myfloat.o\nfi\n\n\nRyan Kirkpatrick wrote:\n\n> On Wed, 14 Jul 1999, Bruce Momjian wrote:\n>\n> > > 3) -mieee informs the egcs compiler fot the alpha to inject 'trapb'\n> >\n> > I am stumped on why we even need -mieee, but someone supplied a patch to\n> > add it.\n>\n> That someone would be me. :) I supplied a patch to add about a\n> year ago as that was the only way I could get some of the date/time code\n> w\n\n", "msg_date": "Mon, 19 Jul 1999 21:25:58 -0400", "msg_from": "Uncle George <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "Uncle George <[email protected]> writes:\n> In the regression test rules.sql there is this SQL command\n> update rtest_v1 set a = rtest_t3.a + 20 where b = rtest_t3.b;\n> Which causes my alpha port to go core.\n\nYeah. This was reported by Pedro Lobo on 11 June, and we've been\npatiently waiting for Jan to decide what to do about it :-(\n\nYou could stop the coredump by putting a test into ResolveNew:\n\n {\n *nodePtr = copyObject(n);\n+ if (IsA(*nodePtr, Var))\n ((Var *) *nodePtr)->varlevelsup = this_varlevelsup;\n }\n\nbut what's not so clear is what's supposed to happen when the\nreplacement item *isn't* a Var. I tried to convince myself that nothing\nneeded to happen in that case, but wasn't successful. (Presumably the\nreplacement expression contains no instances of the variable being\nreplaced, so recursing into it with ResolveNew shouldn't be needed\n--- but maybe its varlevelsup values need adjusted?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Jul 1999 22:15:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RedHat6.0 & Alpha " }, { "msg_contents": "> Thats NOT THE PROBLEM.\n> Although u have localize the -mieee/float in the date stuff, u have needlessly\n> inflicted the -mieee switch on ALL compiled modules.\n> I would have prefered it to be makefile ( Certainly on a SUBSYS.o, and even better on\n> on a per module.o) compile under a makefile switch\n> ie: ( or something simular )\n> \n> if eq($(CPUID),alpha)\n> myfloat.o: myfloat.c\n> $(CC) $(CFLAGS) -mieee myfloat.c -o myfloat.o\n> fi\n> \n\nOK, I have added code in utils/adt/Makefile as:\n\n\t# seems to be required for some date/time stuff 07/19/1999 bjm\n\tifeq ($(CPU),alpha)\n\tCFLAGS+= -mieee\n\tendif\n\nThis is in the current tree, not 6.5.1. 
Please test and let me know if\nthis helps. I also added a Makefile-visible variable called CPU. Seems\nwe really don't have such a variable already available in the\nMakefile-scope.\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Jul 1999 22:46:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "> > if eq($(CPUID),alpha)\n> > myfloat.o: myfloat.c\n> > $(CC) $(CFLAGS) -mieee myfloat.c -o myfloat.o\n> > fi\n> # seems to be required for some date/time stuff 07/19/1999 bjm\n> ifeq ($(CPU),alpha)\n> CFLAGS+= -mieee\n> endif\n> This is in the current tree, not 6.5.1. Please test and let me know \n> if this helps. I also added a Makefile-visible variable called CPU. \n> Seems we really don't have such a variable already available in the\n> Makefile-scope.\n\nI imagine that this flag is specific to the compiler. It would\nprobably be best to leave it to patches until the alpha issues are\nsolved for every OS environment; sorry I don't have a platform myself\nto test on.\n\nbtw, RedHat is interested in doing a maintenance release of Postgres\nrpms, and would dearly love to have the Alpha port problems solved (or\nvica versa; they hate that their shipping rpms are broken or not\navailable on one of their three supported architectures).\n\nUncle G, could you tell us the actual port string configure generates\nfor your platform? At the moment, PORTNAME on my i686 box says\n\"linux\", and I don't see architecture info. But perhaps we can have\nconfigure deduce an ARCH parameter too? It already knows it when first\nidentifying the system...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 20 Jul 1999 05:10:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "Thomas Lockhart wrote:\n\n>\n>\n> btw, RedHat is interested in doing a maintenance release of Postgres\n> rpms, and would dearly love to have the Alpha port problems solved (or\n> vica versa; they hate that their shipping rpms are broken or not\n> available on one of their three supported architectures).\n\nWell, in order to do this properly for linux/alpha & the egcs compiler u\nneed to know more, or realize more on the dangers of casting. Please\nnote that I haven't said improperly, blithely, or arbitarily. Things\njust happen in the alpha if things are not properly casted. In the case\nof postgres this happens to be a (major) problem with function calls &\nfunction parameters. I have fixed just enough to get the regression\ntests to work.\nBTW I'd really love to have a redhat 6.0/alpha cd but not at the going\nprice of $79.00\n\n>\n>\n> Uncle G, could you tell us the actual port string configure generates\n> for your platform? At the moment, PORTNAME on my i686 box says\n> \"linux\", and I don't see architecture info. But perhaps we can have\n> configure deduce an ARCH parameter too? It already knows it when first\n> identifying the system...\n>\n\nWhat is PORTNAME. 
i ( as well as others ) use uname.\n\n", "msg_date": "Tue, 20 Jul 1999 10:58:39 -0400", "msg_from": "Uncle George <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "> > RedHat is interested in doing a maintenance release of Postgres\n> > rpms\n> I have fixed just enough to get the regression\n> tests to work.\n> BTW I'd really love to have a redhat 6.0/alpha cd but not at the going\n> price of $79.00\n\nI heard that Costco (a discounting volume retailer) has the grey-box\n(MacMillan?) version of RH6.0 for $25...\n\n> What is PORTNAME. i ( as well as others ) use uname.\n\nIt is defined in src/Makefile.global. We would need to be able to\ncheck for both OS (linux) and architecture (alpha); perhaps Bruce's\nrecent change to give a \"CPU\" variable is just what we need. I'll add\nthe PORTNAME check to the relevant Makefile.\n\nIf you can send patches for what you have changed, I can incorporate\nthem into an RPM for testing (built on a RH5.2-i686 box, but the\nsource rpm can be rebuilt on yours).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 20 Jul 1999 15:21:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "> I imagine that this flag is specific to the compiler. It would\n> probably be best to leave it to patches until the alpha issues are\n> solved for every OS environment; sorry I don't have a platform myself\n> to test on.\n> \n> btw, RedHat is interested in doing a maintenance release of Postgres\n> rpms, and would dearly love to have the Alpha port problems solved (or\n> vica versa; they hate that their shipping rpms are broken or not\n> available on one of their three supported architectures).\n> \n> Uncle G, could you tell us the actual port string configure generates\n> for your platform? At the moment, PORTNAME on my i686 box says\n> \"linux\", and I don't see architecture info. But perhaps we can have\n> configure deduce an ARCH parameter too? It already knows it when first\n> identifying the system...\n\nOK, I have made it:\n\t\n\tifeq ($(CPU),alpha)\n\tifeq ($(CC), gcc)\n\tCFLAGS+= -mieee\n\tendif\n\tifeq ($(CC), egcs)\n\tCFLAGS+= -mieee\n\tendif\n\tendif\n\nI can always rip it out later.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Jul 1999 12:47:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "> OK, I have made it:\n> \n> ifeq ($(CPU),alpha)\n> ifeq ($(CC), gcc)\n> CFLAGS+= -mieee\n> endif\n> ifeq ($(CC), egcs)\n> CFLAGS+= -mieee\n> endif\n> endif\n\nGreat. 
I think that is closer to what is needed...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 20 Jul 1999 16:59:19 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "On Mon, 19 Jul 1999, Uncle George wrote:\n\n> Although u have localize the -mieee/float in the date stuff, u have needlessly\n> inflicted the -mieee switch on ALL compiled modules.\n\n\tI did that originally to see if it would solve any other of the\nproblems that the regression tests were revealing at that time. Though, I\nwill admit it was a mistake to leave it as a global flag without more\nresearch into if it helped anywhere or not. Unfortuntely, I got busy with\nschool about then and never got back to it. :(\n\n----------------------------------------------------------------------------\n| \"For to me to live is Christ, and to die is gain.\" |\n| --- Philippians 1:21 (KJV) |\n----------------------------------------------------------------------------\n| Ryan Kirkpatrick | Boulder, Colorado | [email protected] |\n----------------------------------------------------------------------------\n| http://www-ugrad.cs.colorado.edu/~rkirkpat/ |\n----------------------------------------------------------------------------\n\n", "msg_date": "Wed, 21 Jul 1999 19:39:06 -0600 (MDT)", "msg_from": "Ryan Kirkpatrick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "Can anyone address the status of this bug?\n\n\n> In the regression test rules.sql there is this SQL command\n> \n> update rtest_v1 set a = rtest_t3.a + 20 where b = rtest_t3.b;\n> \n> Which causes my alpha port to go core. The above line can be reduced to:\n> \n> update rtest_v1 set a = rtest_t3.a + 20 ;\n> \n> which also causes the same problem. It seems that the 64 bit address\n> ((Expr*)nodeptr)->oper gets truncated ( high 32 bits ) somewhere along the way.\n> \n> I was able to locate the errant code in rewriteManip.c:712. but There seems to be a\n> bigger problem other than eraseing the upper 32bit address. It seems that\n> FindMatchingNew() returns a node of type T_Expr, rather than the expected type of\n> T_Var. Once u realize this then u can see why the now MISCAST \"(Var *)\n> *nodePtr)->varlevelsup = this_varlevelsup\" will cause a problem. On my alpha this erases\n> a portion in the address in the T_Expr. On the redhat 5.2/i386 this code seems to be\n> benign, BUT YOU ARE ERASEING SOMETHING that doesn't belong to to T_Expr !\n> \n> So what gives?\n> gat\n> Maybe an assert() will help in finding some of the miscast returned types? 
Wuddya think?\n> sure would catch some of the boo-boo's hanging around\n> \n> rewriteManip.c:\n> if (this_varno == info->new_varno &&\n> this_varlevelsup == sublevels_up)\n> {\n> n = FindMatchingNew(targetlist,\n> ((Var *) node)->varattno);\n> if (n == NULL)\n> {\n> if (info->event == CMD_UPDATE)\n> {\n> *nodePtr = n = copyObject(node);\n> ((Var *) n)->varno = info->current_varno;\n> ((Var *) n)->varnoold = info->current_varno;\n> }\n> else\n> *nodePtr = make_null(((Var *) node)->vartype);\n> }\n> else\n> {\n> *nodePtr = copyObject(n);\n> ((Var *) *nodePtr)->varlevelsup = this_varlevelsup; /* This\n> line zaps the address */\n> }\n> }\n> \n> \n> \n> \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Sep 1999 18:01:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "This seems to be the detail on the bug report.\n\n\n> Uncle George <[email protected]> writes:\n> > In the regression test rules.sql there is this SQL command\n> > update rtest_v1 set a = rtest_t3.a + 20 where b = rtest_t3.b;\n> > Which causes my alpha port to go core.\n> \n> Yeah. This was reported by Pedro Lobo on 11 June, and we've been\n> patiently waiting for Jan to decide what to do about it :-(\n> \n> You could stop the coredump by putting a test into ResolveNew:\n> \n> {\n> *nodePtr = copyObject(n);\n> + if (IsA(*nodePtr, Var))\n> ((Var *) *nodePtr)->varlevelsup = this_varlevelsup;\n> }\n> \n> but what's not so clear is what's supposed to happen when the\n> replacement item *isn't* a Var. I tried to convince myself that nothing\n> needed to happen in that case, but wasn't successful. (Presumably the\n> replacement expression contains no instances of the variable being\n> replaced, so recursing into it with ResolveNew shouldn't be needed\n> --- but maybe its varlevelsup values need adjusted?)\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Sep 1999 18:01:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Re: [HACKERS] RedHat6.0 & Alpha" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Can anyone address the status of this bug?\n[actual bug text snipped...]\n\nWow, Bruce. That's an old thread. 
I'll just say this -- 6.5.1 with\nUncle George and Ryan Kirkpatrick's patchset applied passes regression\nat RedHat on their alpha development machine (for the RPM distribution).\n\nWhether the current pre-6.6 tree passes regression or not, I can't say.\n\nThe author of the original message you replied to is gat -- AKA Uncle\nGeorge.\n\nLamar OWen\nWGCR Internet Radio\n", "msg_date": "Thu, 23 Sep 1999 18:39:06 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] RedHat6.0 & Alpha" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Can anyone address the status of this bug?\n\nAFAIK it hasn't changed since July --- we've been waiting for Jan\nto opine on the proper fix, but he's been busy...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Sep 1999 20:31:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] RedHat6.0 & Alpha " }, { "msg_contents": "\nAny ideas on this one?\n\n> Uncle George <[email protected]> writes:\n> > In the regression test rules.sql there is this SQL command\n> > update rtest_v1 set a = rtest_t3.a + 20 where b = rtest_t3.b;\n> > Which causes my alpha port to go core.\n> \n> Yeah. This was reported by Pedro Lobo on 11 June, and we've been\n> patiently waiting for Jan to decide what to do about it :-(\n> \n> You could stop the coredump by putting a test into ResolveNew:\n> \n> {\n> *nodePtr = copyObject(n);\n> + if (IsA(*nodePtr, Var))\n> ((Var *) *nodePtr)->varlevelsup = this_varlevelsup;\n> }\n> \n> but what's not so clear is what's supposed to happen when the\n> replacement item *isn't* a Var. I tried to convince myself that nothing\n> needed to happen in that case, but wasn't successful. (Presumably the\n> replacement expression contains no instances of the variable being\n> replaced, so recursing into it with ResolveNew shouldn't be needed\n> --- but maybe its varlevelsup values need adjusted?)\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 29 Nov 1999 17:46:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Re: [HACKERS] RedHat6.0 & Alpha" }, { "msg_contents": "I'm sorry, the resultant node as returned is not expected. The solution, as\nprovided, did stop the insidious erasure of a field in a structure it did not own.\nI'm content ( but i dont know any better )\nIf ur asking me what one is suppose to do at this point - I dunno.\ngat\n\nBruce Momjian wrote:\n\n> Any ideas on this one?\n>\n> > Uncle George <[email protected]> writes:\n> > > In the regression test rules.sql there is this SQL command\n> > > update rtest_v1 set a = rtest_t3.a + 20 where b = rtest_t3.b;\n> > > Which causes my alpha port to go core.\n> >\n> > Yeah. This was reported by Pedro Lobo on 11 June, and we've been\n> > patiently waiting for Jan to decide what to do about it :-(\n> >\n> > You could stop the coredump by putting a test into ResolveNew:\n> >\n> > {\n> > *nodePtr = copyObject(n);\n> > + if (IsA(*nodePtr, Var))\n> > ((Var *) *nodePtr)->varlevelsup = this_varlevelsup;\n> > }\n> >\n> > but what's not so clear is what's supposed to happen when the\n> > replacement item *isn't* a Var. 
I tried to convince myself that nothing\n> > needed to happen in that case, but wasn't successful. (Presumably the\n> > replacement expression contains no instances of the variable being\n> > replaced, so recursing into it with ResolveNew shouldn't be needed\n> > --- but maybe its varlevelsup values need adjusted?)\n> >\n> > regards, tom lane\n> >\n> >\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ************\n\n", "msg_date": "Mon, 29 Nov 1999 18:54:47 -0500", "msg_from": "Uncle George <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Re: [HACKERS] RedHat6.0 & Alpha" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Any ideas on this one?\n\n>> Uncle George <[email protected]> writes:\n>>>> In the regression test rules.sql there is this SQL command\n>>>> update rtest_v1 set a = rtest_t3.a + 20 where b = rtest_t3.b;\n>>>> Which causes my alpha port to go core.\n>> \n>> Yeah. This was reported by Pedro Lobo on 11 June, and we've been\n>> patiently waiting for Jan to decide what to do about it :-(\n>> \n>> You could stop the coredump by putting a test into ResolveNew:\n>> \n>> {\n>> *nodePtr = copyObject(n);\n>> + if (IsA(*nodePtr, Var))\n>> ((Var *) *nodePtr)->varlevelsup = this_varlevelsup;\n>> }\n>> \n>> but what's not so clear is what's supposed to happen when the\n>> replacement item *isn't* a Var. I tried to convince myself that nothing\n>> needed to happen in that case, but wasn't successful. (Presumably the\n>> replacement expression contains no instances of the variable being\n>> replaced, so recursing into it with ResolveNew shouldn't be needed\n>> --- but maybe its varlevelsup values need adjusted?)\n\n\nThat code currently reads like:\n\n /* Make a copy of the tlist item to return */\n n = copyObject(n);\n if (IsA(n, Var))\n {\n ((Var *) n)->varlevelsup = this_varlevelsup;\n }\n /* XXX what to do if tlist item is NOT a var?\n * Should we be using something like apply_RIR_adjust_sublevel?\n */\n return n;\n\nso it won't coredump when the tlist item is not a Var, but I'm not\nconvinced it does the right thing either. Jan is the only man who\nunderstands that code well enough to say what needs to be done about\nit...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 29 Nov 1999 21:20:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Re: [HACKERS] RedHat6.0 & Alpha " } ]
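Two of the points argued above are easy to see outside the backend. The sign-extension trap from point 2 (a "magic" 32-bit constant that is negative as an int32 but a large positive number once time_t is 64 bits wide, as on Linux/Alpha), together with the fixed-width typedefs proposed as the fix, can be demonstrated with a small stand-alone sketch (typedef names follow nabstime.h; the int32 alias is spelled out here only so the program is self-contained):

#include <stdio.h>
#include <time.h>

typedef int int32;			/* stand-in for the backend's own int32 */

/* Proposed declarations, fixed at 4 bytes regardless of sizeof(time_t): */
typedef int32 AbsoluteTime;
typedef int32 RelativeTime;

int main(void)
{
	AbsoluteTime magic = (AbsoluteTime) 0x80000001;	/* negative on every port */
	time_t	t = (time_t) 0x80000001U;	/* huge positive where time_t is 64 bits */

	printf("as int32:  %d\n", (int) magic);
	printf("as time_t: %ld\n", (long) t);
	printf("sizeof(time_t) = %d bytes\n", (int) sizeof(time_t));
	return 0;
}

On a 32-bit time_t the two values agree; on a 64-bit time_t they diverge, which is exactly why magic-value comparisons misbehave when AbsoluteTime is typedef'ed to time_t.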
[ { "msg_contents": "Hello all,\n\nI see the following defintion of query_planner() in\noptimizer/planner/planmain.c .\n\nPlan * query_planner(Query *root,\n int command_type,\n List *tlist,\n List *qual)\n\nDoes this mean that qual is already a List when \nquery_planner () is called ?\n\nBut I see the following code in query_planner()\n\tqual = cnfify((Expr *) qual, true);\n\nAre Expr and List compatible ?\nI could see such CAST in some places.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 7 Jul 1999 17:30:28 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "When(Where) does qual become a List ?" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I see the following defintion of query_planner() in\n> optimizer/planner/planmain.c .\n\n> Plan * query_planner(Query *root,\n> int command_type,\n> List *tlist,\n> List *qual)\n\n> Does this mean that qual is already a List when \n> query_planner () is called ?\n\nNo, the declaration is a misnomer.\n\n> But I see the following code in query_planner()\n> \tqual = cnfify((Expr *) qual, true);\n\n> Are Expr and List compatible ?\n\nThey're both pointers to \"Node\" objects, so the code works, ugly though\nit is. It would probably be better to have both query_planner's qual\nand cnfify's argument declared as \"Node *\", since they aren't\nnecessarily Expr nodes either (could be Var, Const, etc...)\n\nMost of the planner/optimizer was once Lisp code, where there is only\none data type (effectively Node*), and the translation to C code was a\nlittle sloppy about node types in many places. There are still a lot of\nroutines that declare their args to be of a specific type that really\nisn't the only kind of node they might be handed.\n\nBTW, I never much liked the fact that cnfify returns a list rather than\nan explicit \"AND\" expression...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jul 1999 11:28:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] When(Where) does qual become a List ? " } ]
[ { "msg_contents": "I am getting an crash on psql \\do when assert checking is enabled. The\nproblem is in cost_seqscan() where temp is now > 0.\n\nNot sure on the cause yet.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 04:58:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "psql and \\do" }, { "msg_contents": "> I am getting an crash on psql \\do when assert checking is enabled. The\n> problem is in cost_seqscan() where temp is not > 0.\n> \n> Not sure on the cause yet.\n\nI have cleaned up some areas, but now cost_index() is getting that\nproblem with temp as NaN because it has exceeded it's range or something\nstrange like that.\n\nDoes psql \\do work on stock 6.5?\n\nHere is the weird part. This is with no optimization, and of course the\nassert at the end fails on the test temp >= 0.\n\t\t\n\tBreakpoint 1, cost_index (indexid=17033, expected_indexpages=137251568, \n\t selec=0.00877192989, relpages=2, reltuples=114, indexpages=2, \n\t indextuples=114, is_injoin=1 '\\001') at costsize.c:132\n\t132 Cost temp = 0;\n\t(gdb) n\n\t134 if (!_enable_indexscan_ && !is_injoin)\n\t(gdb) \n\t142 if (expected_indexpages <= 0)\n\t(gdb) \n\t144 if (indextuples <= 0)\n\t(gdb) print temp\n\t$1 = 0\n\t(gdb) n\n\t148 temp += expected_indexpages;\n\t(gdb) print temp\n\t$2 = 0\n\t(gdb) n\n\t156 temp += ceil(((double) selec) * ((double) relpages));\n\t(gdb) print temp\n\t$3 = -NaN(0x400000)\n\t(gdb) print expected_indexpages\n\t$4 = 137251568\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 06:31:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql and \\do" }, { "msg_contents": "> I am getting an crash on psql \\do when assert checking is enabled. The\n> problem is in cost_seqscan() where temp is now > 0.\n> \n> Not sure on the cause yet.\n\nLet me ask specifically. With assert checking enabled, does \\do crash\npsql in 6.5?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 06:34:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql and \\do" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> I am getting an crash on psql \\do when assert checking is enabled. The\n>> problem is in cost_seqscan() where temp is now > 0.\n\n> Let me ask specifically. With assert checking enabled, does \\do crash\n> psql in 6.5?\n\nWorks for me, with sources from about 30 June. Someone's broken\nsomething since then, perhaps.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jul 1999 11:31:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psql and \\do " }, { "msg_contents": "> > I am getting an crash on psql \\do when assert checking is enabled. The\n> > problem is in cost_seqscan() where temp is now > 0.\n> > \n> > Not sure on the cause yet.\n> \n> Let me ask specifically. 
With assert checking enabled, does \\do crash\n> psql in 6.5?\n> \n\nOK, fixed. The call to ceil() in plancat.c did not have a prototype, so\nthe compiler thought ceil() returned an int, which caused an invalid float value\nthat was only caught later on in cost_index().\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 11:47:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql and \\do" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> I am getting a crash on psql \\do when assert checking is enabled. The\n> >> problem is in cost_seqscan() where temp is not > 0.\n> \n> > Let me ask specifically. With assert checking enabled, does \\do crash\n> > psql in 6.5?\n> \n> Works for me, with sources from about 30 June. Someone's broken\n> something since then, perhaps.\n\nI have fixed it. It was a call to ceil() without a prototype, causing an\ninvalid float value that was only caught later. Not sure why it was not\nshowing up earlier.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 11:47:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psql and \\do" } ]
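For anyone who has not been bitten by this class of bug before, here is a small, self-contained C illustration. It reuses the selectivity numbers from the gdb trace above but is otherwise hypothetical: with the prototype in scope the call is fine; without <math.h>, a pre-C99 compiler assumes ceil() returns int, the caller reads the wrong return register, and garbage such as a NaN appears far downstream.

```c
#include <stdio.h>
#include <math.h>   /* the fix: puts ceil()'s correct prototype in scope */

int
main(void)
{
    double selec    = 0.00877192989;   /* values from the gdb trace */
    double relpages = 2.0;

    /* Without the prototype above, a pre-C99 compiler assumes
     * "int ceil()", so the caller reads an integer register while
     * the real result sits in a floating-point register -- the
     * resulting garbage then trips Asserts like "temp >= 0" much
     * later, far from the actual mistake. */
    double temp = ceil(selec * relpages);

    printf("temp = %g\n", temp);       /* prints 1 */
    return 0;
}
```

(Link with -lm on most Unixes.) This is also why the failure looked so mysterious: the bad value was manufactured in plancat.c but only detected by the assertion in cost_index().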
[ { "msg_contents": "> Added to TODO:\n> \n> \t* Transaction log, so re-do log can be on a separate disk by\n> \t logging SQL queries, or before/after row images\n> \nI would drop the \"log SQL queries idea\". \nNo need to log before row images eighter, since this is the \ncurrent state of the row during rollforward.\n(For asserts a checksum of the before image would be sufficient, \nbut IMHO not necessary.)\n\nI suggest:\n\t* Transaction log that stores after row (or even only column)\nimages,\n\t which can be put on a separate disk to allow rollforward after\n\t a restore of a server.\n\nThe \"restore of a server\" is a main problem here, but I suggest the\nfollowing\nadditional backup tool, that could be used for a \"restore of a server\"\nwhich could then be used for a rollforward and would also be a lot faster \nthan a pg_dump:\n\n1. place a vacuum lock on db (we don't want vacuum during backup)\n2. backup pg_log using direct file access (something like dd bs=32k)\n3. backup the rest in any order (same as pg_log)\n4. release vacuum lock\n\nIf this was restored, this should lead to a consistent database, \nthat has all transactions after the start of backup rolled back.\n\nIs there a nono in this idea? I feel it should work.\nA problem is probably, that the first to touch a row with a committed update\nstores this info in that row. There would probably need to be an undo for\nthis \nafter restore of the physical files.\n\nAndreas\n\n", "msg_date": "Wed, 7 Jul 1999 11:13:15 +0200 ", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] RE: [GENERAL] Transaction logging" }, { "msg_contents": "Well, I'm thinking about WAL last two weeks. Hiroshi pointed me\nproblems in my approach to savepoints (when a tuple was marked\nfor update and updated after it) and solution would require\nnew tid field in header and both t_cmin/t_cmax => bigger header.\nI don't like it and so I switched my mind -:).\nI'm using \"Transaction Processing...\" book from Bruce - thanks\na lot, it's very helpful.\n\nI'll come with thoughts and feels in next few days...\n\nZeugswetter Andreas IZ5 wrote:\n> \n> > Added to TODO:\n> >\n> > * Transaction log, so re-do log can be on a separate disk by\n> > logging SQL queries, or before/after row images\n> >\n> I would drop the \"log SQL queries idea\".\n\nMe too.\n\n> No need to log before row images eighter, since this is the\n> current state of the row during rollforward.\n\nThis is true as long as we follow non-overwriting - may be\nchanged some day.\n\n> The \"restore of a server\" is a main problem here, but I suggest the\n> following\n> additional backup tool, that could be used for a \"restore of a server\"\n> which could then be used for a rollforward and would also be a lot faster\n> than a pg_dump:\n> \n> 1. place a vacuum lock on db (we don't want vacuum during backup)\n> 2. backup pg_log using direct file access (something like dd bs=32k)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> 3. backup the rest in any order (same as pg_log)\n> 4. 
release vacuum lock\n\nIt looks like log archiving, not backup.\nI believe that _full_ backup will do nearly the same\nthings as pg_dump now, but _incremental_ backup will\nfetch info about what changed after last _full_ backup\nfrom log.\n\nVadim\n", "msg_date": "Wed, 07 Jul 1999 19:00:03 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [GENERAL] Transaction logging" }, { "msg_contents": "\nUpdated TODO:\n\n* Transaction log, so re-do log can be on a separate disk,\n with after-row images\n\n\n> > Added to TODO:\n> > \n> > \t* Transaction log, so re-do log can be on a separate disk by\n> > \t logging SQL queries, or before/after row images\n> > \n> I would drop the \"log SQL queries\" idea. \n> No need to log before row images either, since this is the \n> current state of the row during rollforward.\n> (For asserts a checksum of the before image would be sufficient, \n> but IMHO not necessary.)\n> \n> I suggest:\n> \t* Transaction log that stores after row (or even only column)\n> images,\n> \t which can be put on a separate disk to allow rollforward after\n> \t a restore of a server.\n> \n> The \"restore of a server\" is a main problem here, but I suggest the\n> following\n> additional backup tool, that could be used for a \"restore of a server\"\n> which could then be used for a rollforward and would also be a lot faster \n> than a pg_dump:\n> \n> 1. place a vacuum lock on db (we don't want vacuum during backup)\n> 2. backup pg_log using direct file access (something like dd bs=32k)\n> 3. backup the rest in any order (same as pg_log)\n> 4. release vacuum lock\n> \n> If this was restored, this should lead to a consistent database, \n> that has all transactions after the start of backup rolled back.\n> \n> Is there a no-no in this idea? I feel it should work.\n> A problem is probably that the first to touch a row with a committed update\n> stores this info in that row. There would probably need to be an undo for\n> this \n> after restore of the physical files.\n> \n> Andreas\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 23:25:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [GENERAL] Transaction logging" }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> > The \"restore of a server\" is a main problem here, but I suggest the\n> > following\n> > additional backup tool, that could be used for a \"restore of a server\"\n> > which could then be used for a rollforward and would also be a lot faster\n> > than a pg_dump:\n> >\n> > 1. place a vacuum lock on db (we don't want vacuum during backup)\n> > 2. backup pg_log using direct file access (something like dd bs=32k)\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > 3. backup the rest in any order (same as pg_log)\n> > 4. release vacuum lock\n> \n> It looks like log archiving, not backup.\n> I believe that _full_ backup will do nearly the same\n> things as pg_dump now, but _incremental_ backup will\n> fetch info about what changed after last _full_ backup\n> from log.\n\nSorry, I was wrong. pg_dump is what's known as the Export utility\nin Oracle, and backup is quite a different thing. But I have\ncorrections for the full backup described above:\n\n1. 
no vacuum lock is needed: all vacuum ops will be logged\n in the normal way to roll back changes on failures;\n2. all datafiles have to be backed up _before_ the log backup\n due to WAL logic: changes must be written to the log before\n they're written to on-disk data pages.\n\nVadim\n", "msg_date": "Fri, 16 Jul 1999 15:35:50 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [GENERAL] Transaction logging" } ]
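Vadim's second correction -- log before data -- is the heart of write-ahead logging and is worth seeing in miniature. The POSIX sketch below is purely illustrative (file names, record format, and page size are all made up here); it demonstrates only the ordering rule: the log record describing a change is written and forced to disk before the data page it describes.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    const char rec[] = "xid=42 rel=mytable blk=0 after-image\n";
    char       page[8192];
    int        logfd  = open("wal.log",     O_WRONLY | O_CREAT | O_APPEND, 0600);
    int        datafd = open("mytable.dat", O_WRONLY | O_CREAT, 0600);

    if (logfd < 0 || datafd < 0)
    {
        perror("open");
        return 1;
    }

    memset(page, 0, sizeof(page));              /* the new page image   */

    (void) write(logfd, rec, sizeof(rec) - 1);  /* 1. log the change    */
    (void) fsync(logfd);                        /* 2. force log to disk */
    (void) write(datafd, page, sizeof(page));   /* 3. only now the page */

    close(logfd);
    close(datafd);
    return 0;
}
```

Given this ordering, backing up the data files before the log means any change a restored data file might be missing is still recoverable from the archived log.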
[ { "msg_contents": "\nAnyone have any idea when either 6.3.2 came out or when the man pages\nwere htmlized? I need the date for the news page.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 7 Jul 1999 11:28:18 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "History?" }, { "msg_contents": "> \n> Anyone have any idea when either 6.3.2 came out or when the man pages\n> were htmlized? I need the date for the news page.\n\n>From release.sgml:\n\t\n\tTue Apr 7 16:53:16 EDT 1998\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 12:06:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] History?" }, { "msg_contents": "On Wed, 7 Jul 1999, Bruce Momjian wrote:\n\n> > \n> > Anyone have any idea when either 6.3.2 came out or when the man pages\n> > were htmlized? I need the date for the news page.\n> \n> >From release.sgml:\n> \t\n> \tTue Apr 7 16:53:16 EDT 1998\n> \n> \n\nThanks!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 7 Jul 1999 12:12:39 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] History?" } ]
[ { "msg_contents": "Leon <[email protected]> wrote:\n> Earlier I proposed that links should be of type similar to tid,\n> so inserts should be fed with values of tid. But this requires\n> intermediate step, so there can be a function which takes primary\n> key and returns tid, or as you say a function \n> last_touched('other_table_name') - this seems the best choice.\n\nBeware of adding special purpose hard-links as a way to\nskip the run-time value comparisons. A link looks attractive\nbut it really only works for one-to-one relationships\n(any multi-way relationships would require a list of links\nto follow) and a link has all of the overhead that a\nforeign key requires.\n\nAs somone who has developed several commercial dbms systems,\nI would discourage doing a special \"link\" type. There are\nother ways to gain performance -- de-normalize your tables\nif you are doing mainly reads; carefully check your storage\nlayout; and, of course, buy more RAM ;-)\n\n--\nBob Devine [email protected]\n", "msg_date": "Wed, 07 Jul 1999 13:20:51 -0600", "msg_from": "Bob Devine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "> Leon <[email protected]> wrote:\n> > Earlier I proposed that links should be of type similar to tid,\n> > so inserts should be fed with values of tid. But this requires\n> > intermediate step, so there can be a function which takes primary\n> > key and returns tid, or as you say a function \n> > last_touched('other_table_name') - this seems the best choice.\n> \n> Beware of adding special purpose hard-links as a way to\n> skip the run-time value comparisons. A link looks attractive\n> but it really only works for one-to-one relationships\n> (any multi-way relationships would require a list of links\n> to follow) and a link has all of the overhead that a\n> foreign key requires.\n> \n> As somone who has developed several commercial dbms systems,\n> I would discourage doing a special \"link\" type. There are\n> other ways to gain performance -- de-normalize your tables\n> if you are doing mainly reads; carefully check your storage\n> layout; and, of course, buy more RAM ;-)\n\nGood to see you around Bob. This guy does know what he is talking\nabout.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 16:45:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Bruce Momjian wrote:\n\n> >\n> > As somone who has developed several commercial dbms systems,\n> > I would discourage doing a special \"link\" type. There are\n> > other ways to gain performance -- de-normalize your tables\n> > if you are doing mainly reads; carefully check your storage\n> > layout; and, of course, buy more RAM ;-)\n> \n> Good to see you around Bob. This guy does know what he is talking\n> about.\n\nBelieve me, I know what I say. Some day I spoke exactly like you,\nbut having seen an impementation of network DBMS, I suddenly\nrealized that SQL days are numbered. 
The sooner you understand that\nthe better.\n\n-- \nLeon.\n\n\n", "msg_date": "Thu, 08 Jul 1999 11:21:59 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Bob Devine wrote:\n\n> Beware of adding special purpose hard-links as a way to\n> skip the run-time value comparisons. A link looks attractive\n> but it really only works for one-to-one relationships\n> (any multi-way relationships would require a list of links\n> to follow) \n\nNot exactly. If you have a fixed set of links in a tuple, you\ndon't have to follow the list of them.\n\n> and a link has all of the overhead that a\n> foreign key requires.\n> \n\nWe looked at the matter carefully and found no overhead like\na foreign key's. Maybe you should read the thread more carefully\nonce again.\n\n> As someone who has developed several commercial dbms systems,\n> I would discourage doing a special \"link\" type. There are\n> other ways to gain performance -- de-normalize your tables\n> if you are doing mainly reads;\n\nIf I denormalize my tables, they will grow some five to ten \ntimes in size.\n\nBut simply think what you are proposing: you are proposing \nexactly to break RDBMS \"alphabet\" to gain performance! This\nmeans that even SQL warriors see RDBMS's ideology as improper \nand corrupt, because it hinders performance. \n\nYou are contradicting yourself! \n\n> carefully check your storage\n> layout; and, of course, buy more RAM ;-)\n\nAnd what will I do about the performance loss from bloated tables?\n\n-- \nLeon.\n\n", "msg_date": "Thu, 08 Jul 1999 11:37:48 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > As someone who has developed several commercial dbms systems,\n> > I would discourage doing a special \"link\" type. There are\n> > other ways to gain performance -- de-normalize your tables\n> > if you are doing mainly reads; carefully check your storage\n> > layout; and, of course, buy more RAM ;-)\n> \n> Good to see you around Bob. This guy does know what he is talking\n> about.\n> \n\nAfter thinking a bit, it became clear to me that we are flaming \nsenselessly here. So can anyone do a fast hack to test links\nfor speed? Especially with three or more tables being joined.\n\n-- \nLeon.\n\n\n", "msg_date": "Thu, 08 Jul 1999 17:16:04 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Bob Devine wrote:\n\n> As someone who has developed several commercial dbms systems,\n> I would discourage doing a special \"link\" type.\n\nOf course you tried to implement links and failed, didn't you?\nIn such a case I personally, and maybe others, want to hear what\ncan go wrong, in order to benefit from your mistakes and lessons. \n\n-- \nLeon.\n\n", "msg_date": "Thu, 08 Jul 1999 17:19:33 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> Bruce Momjian wrote:\n> \n> > > As someone who has developed several commercial dbms systems,\n> > > I would discourage doing a special \"link\" type. 
There are\n> > > other ways to gain performance -- de-normalize your tables\n> > > if you are doing mainly reads; carefully check your storage\n> > > layout; and, of course, buy more RAM ;-)\n> > \n> > Good to see you around Bob. This guy does know what he is talking\n> > about.\n\nNo. I wasn't flaming, just confirming that he has lots of experience.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jul 1999 12:19:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Leon wrote:\n> If I denormalize my tables, they will grow some five to ten\n> times in size.\n> \n> But simply think what you are proposing: you are proposing\n> exactly to break RDBMS \"alphabet\" to gain performance! This\n> means that even SQL warriors see RDBMS's ideology as improper\n> and corrupt, because it hinders performance.\n\nand he wrote:\n> After thinking a bit, it became clear to me that we are flaming \n> senselessly here. So can anyone do a fast hack to test links\n> for speed? Especially with three or more tables being joined.\n\nand in another message:\n> Of course you tried to implement links and failed, didn't you?\n> In such a case I personally, and maybe others, want to hear what\n> can go wrong, in order to benefit from your mistakes and lessons. \n\nIt's a good idea to test it out. My guess is that a hard link\nbetween tables would speed up the join a small amount.\n\nThe bigger drawbacks are:\n1) the application design is now encoded in the database structure.\nUsing a link forces your _one_ application's needs onto all other\nusers of that table. Each affected table would be bloated with\nat least one more column. All updates now affect multiple tables\nleading to more locking, paging, and synchronization overhead. Etc.\n\n2) adding performance tweaks for a version condemns you to always\nbe aware of it for future versions. I know of many cases where\npeople now hate the idea of a database performance \"improvement\"\nthat has prevented them from modifying the database schema.\nOne person's company is still using a database that everyone hates\nbecause one critical application prevents them from changing it.\nIndexes are about the only useful physical level hack that has\nsurvived the test of time. An index is not part of the relational\nmodel, but indexes are universally implemented because they yield\na huge payback.\n\n3) Be aware of hardware improvements. System performance is\nstill doubling every 18 months. If a software hack can't match\nthat rate, it is probably not worth doing.\n\n\nIn my experience, old style network and hierarchical databases\nare still faster than relational systems. Just like OO DBMSs\ncan be faster. However, the non-relational databases gain their\nspeed by optimizing for a single application instead of being a\nmore general purpose approach. Nearly every company that uses\ndatabases realizes that flexibility is more important than a bit\nmore speed unless that business has already maxed out its\ncomputer's performance and is desperate for that extra bit.\n\nIt is my many years of watching databases in use that suggest\nthat links are not worth the overhead. 
My gut feeling is that\nlinks would speed up a simple join by only 10% and there are\nmany other ways to speed up joins.\n\n--\nBob Devine [email protected]\n", "msg_date": "Thu, 08 Jul 1999 10:31:49 -0600", "msg_from": "Bob Devine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Bob Devine wrote:\n\n> The bigger drawbacks are:\n> 1) the application design is now encoded in the database structure.\n\nThis is true.\n\n> Using a link forces your _one_ application's needs onto all other\n> users of that table. Each affected table would be bloated with\n> at least one more column.\n\nIn fact a link is intended to replace the foreign key in a given table \nand not coexist with it. Given that it eliminates the need for an \nindex, there is even a small space gain.\n\n> All updates now affect multiple tables\n> leading to more locking, paging, and synchronization overhead. Etc.\n\nOh, no :) After a short discussion it became clear that there\nmust not be a link rewrite in a referencing table during an update. \nSo the update goes as usual, involving only one table. Instead we have \na chain of referenced tuples left after the update. VACUUM eliminates\nthese. \n\n> \n> 2) adding performance tweaks for a version condemns you to always\n> be aware of it for future versions.\n\nAbsolutely right. Since we have started talking about general matters, let me \nmake my position clear. \n\nEvery tool is suitable for its purpose. No one walks from city\nto city - they use a car instead. And no one takes a car to get to a\nneighbor's home for evening tea :) So. There are tasks of \ndifferent kinds. Some are flexible and often require redesigning of \nrelationships. But there are others, which are well known\nand well explored, and have a well known structure. Accounting is\none of them. There are a lot of others, without doubt. What is \nproposed is a tool to handle tasks of the second sort effectively,\nsince a general RDBMS is a tool for the other, flexible tasks. This is a \nmatter of design, and it is the designer's job to choose the right tool.\nIf the designer makes a wrong choice, it is his problem and his\nkicked ass. You should give the designer as many tools as possible \nand let him choose. They will love you for that :)\n\n\n> 3) Be aware of hardware improvements. System performance is\n> still doubling every 18 months. If a software hack can't match\n> that rate, it is probably not worth doing.\n\nOh, that argument again :) I'll tell you - sooner or later\nthis development will stop. There are purely physical obstacles\nthat prevent manufacturing of silicon chips with frequencies much\nhigher than 10 gigahertz.\n\n> It is my many years of watching databases in use that suggest\n> that links are not worth the overhead. My gut feeling is that\n> links would speed up a simple join by only 10% and there are\n> many other ways to speed up joins.\n\nLet's count. We have two tables, joined by a link. What is the\ncost of lookup? First there is an index scan, which is between\n2 and 5 iterations, and a link lookup, which is 1 iteration. Average\nis 4 iterations. And if we don't have a link, there are 6 iterations.\nMore than 10% already! We still didn't consider joining multiple\ntables and big tables. So the gain will be big anyway.\n\nThat is not to consider the optimizer (do I sound like a broken\nrecord? :) To be sincere, current Postgres optimizer sucks heavily\nand in most cases can't figure out the fastest way. 
Implementing\nlinks is a quick and cheap way to get a performance gain on \na wide range of tasks. I am obliged to repeat this again and again, \nbecause every day there appears a new developer who hasn't heard\nthat yet :)\n\n-- \nLeon.\n\n", "msg_date": "Thu, 08 Jul 1999 23:10:51 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Leon <[email protected]> writes:\n\n> > 3) Be aware of hardware improvements. System performance is\n> > still doubling every 18 months. If a software hack can't match\n> > that rate, it is probably not worth doing.\n> \n> Oh, that argument again :) I'll tell you - sooner or later\n> this development will stop. There are purely physical obstacles\n> that prevent manufacturing of silicon chips with frequencies much\n> higher than 10 gigahertz.\n\nFurthermore, the continuous availability of ever faster hardware at\nlow prices will slow down very soon, now that the MS Windows users\nfinally don't need to upgrade to twice as fast computers every 18\nmonths just to be able to run the latest version of MS bloatware, and\nwill spend their money on peripherals and fast net access instead.\n\n...but as for \"purely physical obstacles\", I don't buy it. We will\nalways find a way to make what we need. Count on it.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "09 Jul 1999 07:45:36 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Bob Devine <[email protected]> writes:\n\n> 3) Be aware of hardware improvements. System performance is\n> still doubling every 18 months. If a software hack can't match\n> that rate, it is probably not worth doing.\n\nI like Kernighan's and Pike's argument, presented in their recent\nbook, The Practice of Programming, that if a software improvement is\nexpected to save more accumulated user time than the programmer time\nspent making it, then it should be considered worthwhile.\n\nGreat book, by the way. Highly recommended.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "09 Jul 1999 09:32:24 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Leon wrote:\n> \n> Bob Devine wrote:\n> \n> > It is my many years of watching databases in use that suggest\n> > that links are not worth the overhead. My gut feeling is that\n> > links would speed up a simple join by only 10% and there are\n> > many other ways to speed up joins.\n> \n> Let's count. We have two tables, joined by a link. What is the\n> cost of lookup? First there is an index scan, which is between\n> 2 and 5 iterations, and a link lookup, which is 1 iteration. Average\n> is 4 iterations.\n\nThis is true for the case where you want to look up only one row.\n\nThe difference will quickly degrade as more rows are fetched in one \nquery and cache misses and disk head movement start rattling your \ndisks. The analogy being a man who needs 10 different items from a \nsupermarket and takes 10 full round trips from home to buy them.\n\n> And if we don't have a link, there are 6 iterations.\n> More than 10% already! 
We still didn't consider joining multiple\n> tables and big tables.\n\nI think that the two-tables-one-row lookup will gain the most, \nprobably even more than 10%.\n\n> So the gain will be big anyway.\n> \n> That is not to consider the optimizer (do I sound like a broken\n> record? :) To be sincere, current Postgres optimizer sucks heavily\n> and in most cases can't figure out the fastest way.\n\nAdding links does nothing to improve the optimizer, it's still free \nto choose sucky plans. 
It is possible that links are faster if used\n> in the right way, as they cut out the index lookup, but I suspect that\n> hard-coding link-is-always-faster into the optimiser would also produce\n> a lot of very bad plans.\n\nMethinks that hard-wiring link-is-always-faster into the optimizer will\nstill help it very much, because there are few situations where it\nis not true.\n\n> Fixing the optimizer would get a performance gain on a far wider\n> range of tasks, and is still needed for links.\n\nBut general fixing of the optimizer is a total rewrite of it, whereas\nthe link fix is almost a fast hack.\n\n> So I guess that if you want links in the foreseeable future, your best bet\n> would be to start coding, and to coordinate with whoever starts to\n> fix/rewrite\n> the optimizer (probably Vadim)\n>\n\nUnfortunately I already have a project to work on. There is too \nlittle of me for two projects.\n\n-- \nLeon.\n\n", "msg_date": "Fri, 09 Jul 1999 18:04:13 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Leon wrote:\n> \n> Hannu Krosing wrote:\n> \n> > The difference will quickly degrade as more rows are fetched in one\n> > query and cache misses and disk head movement start rattling your\n> > disks. The analogy being a man who needs 10 different items from a\n> > supermarket and takes 10 full round trips from home to buy them.\n> \n> Frankly, I didn't even consider fetching database from disk. This\n> slows queries immensely and I wonder if there exist someone who\n> doesn't keep their entire DB in RAM.\n\nWell, I personally don't even know how I could keep my entire PostgreSQL \nDB in RAM :)\n\nIt would be interesting to know what percentage of people do use \nPostgreSQL for databases that are small enough to fit in RAM - \nsurely not the ones who need splitting of tables >2GB.\n\nAnd I think that setting up PostgreSQL for maximum RAM usage would \nmake a nice topic for \"Optimizing PostgreSQL\". \n\nWhen my backends are mostly idle they usually use about 3-4MB of memory \n(hardly enough for any database :).\n\nIt is quite possible that some bigger tables end up in a disk-cache, \nbut you can expect to find all your data in that cache only if you do \nmany queries on the same tables in a row, and the machine is otherwise \nidle.\n\n> I think the gain will rise with the number of tables, because\n> the more tables - the more index lookups are saved.\n\nMy point is that sometimes even a sequential scan is faster than an index\nlookup,\nand not only due to the overhead of using the index, but due to better disk \nperformance of sequential reads vs. random reads.\n\nFor in-memory databases this of course does not apply.\n\nStill I'm quite sure that the main effort in PostgreSQL development has\nso \nfar gone to optimising queries where most of the data is fetched from\nthe \ndisk.\n\n> > Fixing the optimizer would get a performance gain on a far wider\n> > range of tasks, and is still needed for links.\n> \n> But general fixing of the optimizer is a total rewrite of it, whereas\n> the link fix is almost a fast hack.\n\nI'm not too sure about it. It certainly can be done without a _total_ \nrewrite, but getting all the new node types and access methods into the \nparser/planner/executor may not be trivial. 
\n\nOne idea would be a cross-table OID index for anything in memory.\nThen, assuming that everything is in-memory, using oids as links would\nbe \nonly trivially, if at all, slower (2-10 memory accesses and comparisons) \nthan \"straight\" link lookup, that could also be chasing linear chains of \nforward-id-links on frequently updated DBs. On infrequently updated DBs \nyou could just use triggers and/or cron jobs to keep your reports\nupdated,\nI quess that this is what most commercial OLAP systems do.\n\nActually I lived my first halfyear of using PostgreSQL under a delusion \nthat lookup by OID would be somewhat special (fast). \nProbably due to my misunderstanding of the (ever diminishing) O in\nORDBMS :) \nThere have been some attempts to get the object-orientedness better \nsupported by PGSQL, (possibly even some contrib funtions), but nobody\nseems \nto have needed it bad enough to\n1) implement it\n and \n2) shout long enough to get it included in standart distibution. \nMost (all?) of the developers seem to be die-hard RDBMS guys and thanks \nto that we have now a solid and reasonably fast Relational DBMS with \nsome OO rudiments\n\nSo I quess that unless you do (at least part of) links, no-one else will\n;(\n\n> Unfortunately I already have a project to work on. There is too\n> little of me for two projects.\n\nDarn! I almost hoped we would get one more PostgreSQL hacker as I'm sure \nthat after familiarising oneself with the code enougth to implement\nlinks \none would be quite capable of helping with most of PostgreSQL\ndevelopment\n<grin>\n\n----------------------\nHannu\n", "msg_date": "Fri, 09 Jul 1999 16:55:02 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "> Unfortunaltely there are far less _developers_ than letter-writers, and\n> it\n> is sometimes quite hard to make them even commit good and useful patches \n> that are ready.\n> \n> So I quess thet if you want links in foreseeable future, your best bet \n> would be to start coding, and to coordinate with whoever starts to\n> fix/rewrite\n> the optimizer (probably Vadim)\n\nAre people complaining about the 6.5 optimizer, or the pre-6.5\noptimizer? 6.5 has a much improved optimizer.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 12:39:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Hello Hannu,\n\nFriday, July 09, 1999 you wrote:\n\n\nH> Still I'm quite sure that the main effort in PostgreSQL development has\nH> so\nH> far gone to optimising queries where most of the data is fetched from\nH> the\nH> disk.\n\nOh, I see. This is appropriate for some not time critical\nand dumb applications, such as web DB. But this is out of the\nway of speed server tasks. Maybe Postgres has been designed with\nsuch plan in mind - to use big DBs from disc? That is not\ngood news for me either. Almost everyone has suggested me\nto use more RAM to speed up queries, and now it turned out to\nbe not in Postgres's mainstream. Maybe there is something\nwrong with this ideology, since RAM is bigger and cheaper\nevery day?\n\nH> forward-id-links on frequently updated DBs. 
On infrequently updated DBs\nH> you could just use triggers and/or cron jobs to keep your reports\nH> updated,\nH> I guess that this is what most commercial OLAP systems do.\n\nIt seems that a trigger will be the last resort.\n\nBest regards, Leon\n\n\n", "msg_date": "Fri, 9 Jul 1999 22:07:36 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re[2]: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "\n\nHannu Krosing wrote: \n> Leon wrote:\n\n> > Frankly, I didn't even consider fetching database from disk. This\n> > slows queries immensely and I wonder if there exist someone who\n> > doesn't keep their entire DB in RAM.\n> \n> Well, I personally don't even know how I could keep my entire PostgreSQL\n> DB in RAM :)\n\nI thought about doing this once on a Linux box. What I was thinking about was\ncreating a large RAM disk, and using that disk together with a physical drive in\na mirror setup. However, I was never able to create a large enough RAM disk back then\n(must have been like Linux 2.0), and also the RAID mirror code wasn't able to\nsupport such a mix of devices (i.e. RAM disk + physical disk). The situation might\nhave changed by now.\n\nMaarten\n\n-- \n\nMaarten Boekhold, [email protected]\nTIBCO Finance Technology Inc.\nThe Atrium\nStrawinskylaan 3051\n1077 ZX Amsterdam, The Netherlands\ntel: +31 20 3012158, fax: +31 20 3012358\nhttp://www.tibco.com\n\n", "msg_date": "Mon, 12 Jul 1999 09:56:54 +0200", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> Hannu Krosing wrote: \n> > Leon wrote:\n> > > Frankly, I didn't even consider fetching database from disk. This\n> > > slows queries immensely and I wonder if there exist someone who\n> > > doesn't keep their entire DB in RAM.\n> > \n> > Well, I personally don't even know how I could keep my entire PostgreSQL\n> > DB in RAM :)\n> \n> I thought about doing this once on a Linux box. What I was thinking about was\n> creating a large RAM disk, and using that disk together with a physical drive in\n> a mirror setup. However, I was never able to create a large enough RAM disk back then\n> (must have been like Linux 2.0), and also the RAID mirror code wasn't able to\n> support such a mix of devices (i.e. RAM disk + physical disk). The situation might\n> have changed by now.\n\nMaarten, PostgreSQL keeps its data in the filesystem, rather than on\nraw disks. Due to the nature of *nix, all you need to do to keep your\nentire DB in memory is have enough memory. The buffer cache will do\nthe rest for you. Of course, you still need to start it up with -F\nto avoid fsync's. This is also somewhat OS dependent, as you may have\nto do some tuning to allow full memory utilization in this manner.\n\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. 
|\n=====================================================================", "msg_date": "12 Jul 1999 09:11:51 -0400", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" }, { "msg_contents": "\n> > > Well, I personally don't even know how I could keep my entire PostgreSQL\n> > > DB in RAM :)\n> >\n> > I thought about doing this once on a Linux box. What I was thinking about was\n> > creating a large RAM disk, and using that disk together with a physical drive in\n> > a mirror setup. However, I was never able to create a large enough RAM disk back then\n[...] \n> Maarten, PostgreSQL keeps its data in the filesystem, rather than on\n> raw disks. Due to the nature of *nix, all you need to do to keep your\n> entire DB in memory is have enough memory. The buffer cache will do\n> the rest for you. Of course, you still need to start it up with -F\n\nI know, but there's no *guarantee* that the complete database is going to be in RAM.\nThat's what I was trying to solve. Putting the thing on a RAM disk would guarantee that\nit is.\n\nMaarten\n\n-- \n\nMaarten Boekhold, [email protected]\nTIBCO Finance Technology Inc.\nThe Atrium\nStrawinskylaan 3051\n1077 ZX Amsterdam, The Netherlands\ntel: +31 20 3012158, fax: +31 20 3012358\nhttp://www.tibco.com\n", "msg_date": "Mon, 12 Jul 1999 15:34:39 +0200", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Fwd: Joins and links" } ]
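The iteration-count argument traded back and forth in the thread above can be made concrete with a toy model. The C program below is entirely hypothetical -- it models in-memory tables only, exactly the assumption Leon makes, and says nothing about updates, locking, or disk behaviour -- and simply compares a binary-search "index lookup" against chasing a stored pointer "link".

```c
#include <stdio.h>

/* Toy illustration of the iteration-count argument: chasing a stored
 * "link" (a direct pointer, resolved once at insert time) reaches the
 * referenced row in one step, while an index lookup costs O(log n)
 * comparisons per probe. */
typedef struct Row
{
    int         key;
    struct Row *link;   /* hard link to the referenced row */
} Row;

static int
index_lookup(const Row *table, int n, int key, int *steps)
{
    int lo = 0, hi = n - 1;

    *steps = 0;
    while (lo <= hi)
    {
        int mid = (lo + hi) / 2;

        (*steps)++;
        if (table[mid].key == key)
            return mid;
        if (table[mid].key < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}

int
main(void)
{
    Row keytab[64];
    Row reftab;
    int i, steps;

    for (i = 0; i < 64; i++)
    {
        keytab[i].key = i;
        keytab[i].link = NULL;
    }
    reftab.key = 37;
    reftab.link = &keytab[37];      /* link resolved once, at insert */

    index_lookup(keytab, 64, 37, &steps);
    printf("index lookup: %d comparisons\n", steps);
    printf("link chase:   1 dereference -> key %d\n", reftab.link->key);
    return 0;
}
```

For 64 keys the index lookup costs several comparisons against a single dereference for the link, which is Leon's arithmetic; Hannu's and Bob's counterpoints are about everything this toy deliberately leaves out.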
[ { "msg_contents": "Sorry to have to blast all this e-mail around. That is really the only\nway to tell bug reporters personally that their bugs are fixed, and make\nsure no unfixed bugs are lost.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 18:06:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "All my e-mail" } ]
[ { "msg_contents": "If you have a default value for a column, say\n\n\tcreate table t (a int4, b int4 default 12345);\n\nand you write a command that depends on the default, say\n\n\tinsert into t (a) values (1);\n\nthe way that the default is currently accounted for is that the parser\nrewrites the command into\n\n\tinsert into t (a,b) values (1, 12345);\n\n(this happens in transformInsertStmt()). It strikes me that doing this\nin the parser is too early, and it needs to be done later, like after\nthe rewriter. Why? Because the rule mechanism stores rules as\nparsetrees. If the above INSERT is part of a rule, then the stored form\nof the rule will look like the rewritten command, with the default\nalready attached. This is bad: if I later alter the default for t.b,\nthe rule won't get updated.\n\n(I can't currently change the default with ALTER TABLE, I think, but\nsooner or later ALTER TABLE will be fixed. I *can* alter t.b's default\nby dumping the database, changing the CREATE TABLE command for t, and\nreloading --- but the rule still won't be updated, because what's dumped\nout for it will look like \"insert into t values (1, 12345);\" ! Try it\nand see...)\n\nI am inclined to think that attachment of default values should happen\nin the planner, at the same time that the targetlist is reordered to\nmatch the physical column order and dummy NULLs are inserted for missing\ncolumns (ie, expand_targetlist()). Certainly it must happen after the\nrule mechanism. Unless I hear objections, I will do that while I am\ncleaning up INSERT processing for the INSERT ... SELECT ... GROUP BY bug.\n\n\nMore generally, I wonder whether it is such a good idea for rules to be\nstored as parsetrees. For example, I can't drop and recreate a table\nmentioned in a rule attached to a different table, because the compiled\nrule includes the OIDs of the tables it references. So the compiled\nrule will start failing if I do that. (Right now, this causes a core\ndump :-( ... apparently someone is assuming that the OID in an RTE will\nnever be bad ...)\n\nWith rules stored as parsetrees, we need to be very careful about how\nmuch semantic knowledge gets factored into the parsetree before it is\nfrozen as a rule. (This is another reason for pushing \"optimization\"\ntransformations out of the parser and into modules downstream of the\nrule rewriter, BTW.)\n\nComments? Storing rules as plain text would be too slow, perhaps,\nbut would it help any to store rules as \"raw\" parsetrees that haven't\nyet gone through analyze.c?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Jul 1999 19:49:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Delaying insertion of default values" }, { "msg_contents": "> More generally, I wonder whether it is such a good idea for rules to be\n> stored as parsetrees. For example, I can't drop and recreate a table\n> mentioned in a rule attached to a different table, because the compiled\n> rule includes the OIDs of the tables it references. So the compiled\n> rule will start failing if I do that. (Right now, this causes a core\n> dump :-( ... apparently someone is assuming that the OID in an RTE will\n> never be bad ...)\n> \n> With rules stored as parsetrees, we need to be very careful about how\n> much semantic knowledge gets factored into the parsetree before it is\n> frozen as a rule. 
(This is another reason for pushing \"optimization\"\n> transformations out of the parser and into modules downstream of the\n> rule rewriter, BTW.)\n> \n> Comments? Storing rules as plain text would be too slow, perhaps,\n> but would it help any to store rules as \"raw\" parsetrees that haven't\n> yet gone through analyze.c?\n\nAll this sounds good, though we have so many TODO items, it seems a\nbit of a reach to be going after this. Seems like a good thing to do\nas you add that extra phase of query processing.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 20:59:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Delaying insertion of default values" }, { "msg_contents": "Tom Lane wrote:\n> \n> (this happens in transformInsertStmt()). It strikes me that doing this\n> in the parser is too early, and it needs to be done later, like after\n> the rewriter. Why? Because the rule mechanism stores rules as\n> parsetrees. If the above INSERT is part of a rule, then the stored form\n> of the rule will look like the rewritten command, with the default\n> already attached. This is bad: if I later alter the default for t.b,\n> the rule won't get updated.\n> \n> (I can't currently change the default with ALTER TABLE, I think, but\n> sooner or later ALTER TABLE will be fixed. I *can* alter t.b's default\n\nALTER TABLE could (or should?) re-compile a table's rules...\n\n> by dumping the database, changing the CREATE TABLE command for t, and\n> reloading --- but the rule still won't be updated, because what's dumped\n> out for it will look like \"insert into t values (1, 12345);\" ! Try it\n> and see...)\n> \n> I am inclined to think that attachment of default values should happen\n> in the planner, at the same time that the targetlist is reordered to\n> match the physical column order and dummy NULLs are inserted for missing\n> columns (ie, expand_targetlist()). Certainly it must happen after the\n\nWhy not? Not a bad way, imho.\n\n> rule mechanism. Unless I hear objections, I will do that while I am\n> cleaning up INSERT processing for the INSERT ... SELECT ... GROUP BY bug.\n\nNo objections -:).\n\nVadim\n", "msg_date": "Thu, 08 Jul 1999 09:24:20 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Delaying insertion of default values" }, { "msg_contents": "Vadim wrote:\n\n> ALTER TABLE could (or should?) re-compile a table's rules...\n\n Rules should be recompilable for various reasons. DROP/CREATE\n of objects (relations, functions etc.) referenced in rules\n changes their OID and needs recompilation too.\n\n Thus we need to store the original rule text and a cross\n reference listing all the objects used in the rules' actions.\n That's two new system catalogs for me.\n\n Another problem with rules coming up every so often is the\n 'rule plan string too big' error. I'm actually thinking about\n arbitrary tuple sizes and will open another discussion thread\n on that, but I'm not sure how far we'll get this for v6.6 and\n if the solution would be good enough to handle system\n catalogs and syscache entries as well. 
To get rules out of\n the way here and be free to add this technique to user\n tables only, I'll go ahead then and implement rule qual and\n action splitting handled by the rule system itself anyway.\n\n> > rule mechanism. Unless I hear objections, I will do that while I am\n> > cleaning up INSERT processing for the INSERT ... SELECT ... GROUP BY bug.\n>\n> No objections -:).\n\n This would be obsolete once the above recompilation is\n implemented. I'll add a support function that takes an OID\n which should be called at any DROP\n TABLE/VIEW/FUNCTION/OPERATOR etc., and which will cause rule\n recompilation on the next usage of the relation.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 10:47:13 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Delaying insertion of default values" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Vadim wrote:\n> \n> > ALTER TABLE could (or should?) re-compile a table's rules...\n> \n> Rules should be recompilable for various reasons. DROP/CREATE\n> of objects (relations, functions etc.) referenced in rules\n> changes their OID and needs recompilation too.\n\nYes. And the same is true for stored procedures when we\nget them.\n\n> > > rule mechanism. Unless I hear objections, I will do that while I am\n> > > cleaning up INSERT processing for the INSERT ... SELECT ... GROUP BY bug.\n> >\n> > No objections -:).\n> \n> This would be obsolete once the above recompilation is\n> implemented. I'll add a support function that takes an OID\n> which should be called at any DROP\n> TABLE/VIEW/FUNCTION/OPERATOR etc., and which will cause rule\n> recompilation on the next usage of the relation.\n\nAgreed. I didn't object, but of course I would prefer a general solution\n- a way to invalidate stored rules/procedures/etc and re-compile\nthem when needed.\n\nBTW, what's your plan for RI constraints, Jan?\nDid you see my letter about statement level triggers?\nIf I get WAL implemented then it could be used for RI.\nIn any case I believe that statement level triggers\nare a very nice thing and they are better for RI than\nrules.\n\nVadim\n", "msg_date": "Thu, 08 Jul 1999 18:27:49 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Delaying insertion of default values" }, { "msg_contents": "Vadim wrote:\n\n>\n> Jan Wieck wrote:\n> >\n> > Vadim wrote:\n> >\n> > > ALTER TABLE could (or should?) re-compile a table's rules...\n> >\n> > Rules should be recompilable for various reasons. DROP/CREATE\n> > of objects (relations, functions etc.) referenced in rules\n> > changes their OID and needs recompilation too.\n>\n> Yes. And the same is true for stored procedures when we\n> get them.\n\n Don't we have some kind of them already with the PL\n functions? 
They get compiled on each first use per backend,\n and I think that for a database under development (usually\n not a live system) it isn't too bad to need a reconnect after\n schema changes.\n\n> BTW, what's your plan for RI constraints, Jan?\n> Did you see my letter about statement level triggers?\n> If I'll get WAL implemented then it could be used for RI.\n> In any case I believe that statement level triggers\n> are very nice thing and they are better for RI than\n> rules.\n\n What's WAL?\n\n Let's think about a foreign key constraint that must be\n checked at transaction commit (deferred constraint), so\n someone can do\n\n BEGIN;\n SET CONSTRAINT reftab_check_refkey DEFERRED;\n UPDATE reftab SET refkey = 4711,\n prodname = 'New product'\n WHERE prodname = 'Temp product';\n INSERT INTO keytab (keyval, prodname)\n VALUES (4711, 'New product');\n COMMIT;\n\n The statement level trigger should not check all 25 million\n rows of reftab against keytab. It should only check the 10\n rows that got updated because they matched. How does the\n statement level trigger get access to the qualification of\n the query that fired it? And how does it find out which of\n them the WHERE meant, because it will not be able to find them\n again with the same qual.\n\n Currently rules cannot do this job either. I planned to change\n the handling of snapshot as discussed and to implement a\n deferred querytree list run at appropriate times (like\n COMMIT). Plus a new RAISE command that's internally most of a\n SELECT but throwing an elog if it finds some rows. Such a CI\n rule would then look like:\n\n CREATE RULE reftab_check_refkey AS ON UPDATE TO reftab DO\n RAISE 'foreign key % not present', new.refkey\n WHERE NOT EXISTS\n (SELECT keyval FROM keytab WHERE keyval = new.refkey);\n\n This rule will get expanded by the rewriter to do a scan with\n the snapshot when the UPDATE ran against reftab and with the\n qual expanded to match the updated old tuples only, but the\n subselect will have the snapshot at commit time which will\n find the newly inserted keytab row. I don't see how statement\n level triggers can do it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 13:06:13 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Delaying insertion of default values" }, { "msg_contents": "Jan Wieck wrote:\n> \n> > BTW, what's your plan for RI constraints, Jan?\n> > Did you see my letter about statement level triggers?\n> > If I'll get WAL implemented then it could be used for RI.\n> > In any case I believe that statement level triggers\n> > are very nice thing and they are better for RI than\n> > rules.\n> \n> What's WAL?\n\nWrite Ahead Log. We could backward scan WAL to get tid of\nchanged primary/unique/foreign table rows and check constraints.\nMore of that, we could write to WAL RI infos only for rows with\nupdated _keys_ to avoid check for cases when there was no key\nupdate.\n\n...\n> Currently rules cannot do this job either. I planned to change\n> the handling of snapshot as discussed and to implement a\n> deferred querytree list run at appropriate times (like\n> COMMIT). Plus a new RAISE command that's internally most of a\n> SELECT but throwing an elog if it finds some rows.
Such a CI\n> rule would then look like:\n> \n> CREATE RULE reftab_check_refkey AS ON UPDATE TO reftab DO\n> RAISE 'foreign key % not present', new.refkey\n> WHERE NOT EXISTS\n> (SELECT keyval FROM keytab WHERE keyval = new.refkey);\n> \n> This rule will get expanded by the rewriter to do a scan with\n> the snapshot when the UPDATE ran against reftab and with the\n> qual expanded to match the updated old tuples only, but the\n> subselect will have the snapshot at commit time which will\n> find the newly inserted keytab row. I don't see how statement\n> level triggers can do it.\n\nAs far as I understand what is statement level trigger (SLT),\none is able to use NEW/OLD in queries of SLT just like as\nNEW/OLD are used in rules. I would say that SLT-s are\nrules powered by PL, and nothing more. You would just rewrite\neach query of SLT with NEW/OLD in normal fashion. Using power\nof PL _ANY_ constraints (not just simple RI ones) could be\nimplemented. \n\nComments?\n\nVadim\n", "msg_date": "Thu, 08 Jul 1999 19:45:47 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Delaying insertion of default values" }, { "msg_contents": "Vadim wrote:\n\n> Jan Wieck wrote:\n> >\n> > What's WAL?\n>\n> Write Ahead Log. We could backward scan WAL to get tid of\n> changed primary/unique/foreign table rows and check constraints.\n> More of that, we could write to WAL RI infos only for rows with\n> updated _keys_ to avoid check for cases when there was no key\n> update.\n\n Sounds reasonable.\n\n>\n> As far as I understand what is statement level trigger (SLT),\n> one is able to use NEW/OLD in queries of SLT just like as\n> NEW/OLD are used in rules. I would say that SLT-s are\n> rules powered by PL, and nothing more. You would just rewrite\n> each query of SLT with NEW/OLD in normal fashion. Using power\n> of PL _ANY_ constraints (not just simple RI ones) could be\n> implemented.\n\n Ah - in contrast to what I thought SLT's would be. I thought\n an SLT would only be called once per statement, not once per\n tuple (... FOR EACH STATEMENT EXECUTE PROCEDURE ...).\n\n In my understanding an SLT couldn't have worked for something\n like\n\n UPDATE t1 SET b = t2.b WHERE t1.a = t2.a;\n\n Isn't this all still an AFTER trigger on ROW level that could\n be executed deferred?\n\n I like the aproach to give constraints the power of PL.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 14:39:41 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Delaying insertion of default values" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n>>>> rule mechanism. Unless I hear objections, I will do that while I am\n>>>> cleaning up INSERT processing for the INSERT ... SELECT ... GROUP BY bug.\n>> \n>> No objections -:).\n\n> This would be obsolete when having the above recompilation\n> implemented.\n\nI plan to do it anyway, since I need to restructure analyze.c's handling\nof INSERT and I believe that it would be cleaner to do the\ndefault-adding work in expand_targetlist. 
But recompiling rules is\nneeded to solve other problems, so that has to happen too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jul 1999 10:44:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Delaying insertion of default values " }, { "msg_contents": "> Vadim wrote:\n> \n> > ALTER TABLE could (or should?) re-compile table' rules...\n> \n> Rules should be recompilable for various reasons. DROP/CREATE\n> of objects (relations, functions etc.) referenced in rules\n> changes their OID and needs recompilation too.\n\nAdded to TODO:\n\n\t* Allow RULE recompilation\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jul 1999 11:47:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Delaying insertion of default values" }, { "msg_contents": "Jan Wieck wrote:\n> \n> >\n> > Write Ahead Log. We could backward scan WAL to get tid of\n> > changed primary/unique/foreign table rows and check constraints.\n> > More of that, we could write to WAL RI infos only for rows with\n> > updated _keys_ to avoid check for cases when there was no key\n> > update.\n> \n> Sounds reasonable.\n\nUnfortunately, additional WAL reads are not good for overall\nsystem performance, but it's a way.\n\n> > As far as I understand what is statement level trigger (SLT),\n> > one is able to use NEW/OLD in queries of SLT just like as\n> > NEW/OLD are used in rules. I would say that SLT-s are\n> > rules powered by PL, and nothing more. You would just rewrite\n> > each query of SLT with NEW/OLD in normal fashion. Using power\n> > of PL _ANY_ constraints (not just simple RI ones) could be\n> > implemented.\n> \n> Ah - in contrast to what I thought SLT's would be. I thought\n> an SLT would only be called once per statement, not once per\n> tuple (... FOR EACH STATEMENT EXECUTE PROCEDURE ...).\n\nYes, SLT is called once per statement, but queries in SLT are\nable to see _all_ old/new tuples affected by statement, just\nlike rule action queries are able to do it.\n\nFor the case of checking existence of a primary key, a trigger\nover the referencing table could execute\n\nSELECT count(*) FROM new\nWHERE NOT EXISTS\n(SELECT keyval FROM keytab WHERE keyval = new.refkey);\n\nand abort if the count returned is > 0.\nThe query above will be just rewritten by the rewrite system.\nSLTs are rule actions + all these nice IF, FOR etc\nPL statements -:)\n\nActually, the query above must be modified to deal with\nconcurrent updates. Some other xaction can delete keyval\nand the query will not notice this. To see a concurrent update/delete\nthe query must be able to read dirty data and wait for other\nxactions. It's not easy to do. I need more time to\nthink about this issue.\n\nVadim\n", "msg_date": "Fri, 09 Jul 1999 11:34:00 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Delaying insertion of default values" } ]
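To make the statement-level check Vadim sketches above concrete, here is roughly what such a trigger could look like. This is only an illustrative sketch by the editor, not code from this thread: it reuses Jan's reftab/keytab example, the function and trigger names are hypothetical, and the REFERENCING clause shown is the transition-table syntax that PostgreSQL only gained much later (in version 10).

    -- Statement-level trigger function: it can see every row changed by
    -- the statement through the "new_rows" transition table, and aborts
    -- if any of those rows references a key missing from keytab.
    CREATE FUNCTION reftab_check_refkey() RETURNS trigger AS $$
    DECLARE
        bad integer;
    BEGIN
        SELECT n.refkey INTO bad
          FROM new_rows n
         WHERE NOT EXISTS (SELECT 1 FROM keytab k
                            WHERE k.keyval = n.refkey)
         LIMIT 1;
        IF FOUND THEN
            RAISE EXCEPTION 'foreign key % not present in keytab', bad;
        END IF;
        RETURN NULL;  -- the return value is ignored for statement triggers
    END;
    $$ LANGUAGE plpgsql;

    -- Fired once per statement, not once per tuple; an equivalent
    -- AFTER INSERT trigger would be declared separately.
    CREATE TRIGGER reftab_check_refkey_upd
        AFTER UPDATE ON reftab
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT
        EXECUTE PROCEDURE reftab_check_refkey();

As Vadim notes, a check written this way still has to cope with concurrent transactions deleting the key between the check and commit, so by itself it is not a complete referential-integrity implementation.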
[ { "msg_contents": "Attached is the current TODO list. It has all the current things I\nhave.\n\nThis is also updated in the cvs tree every time I make a change.\n\n---------------------------------------------------------------------------\n\nTODO list for PostgreSQL\n========================\nLast updated:\t\tWed Jul 7 23:33:17 EDT 1999\n\nCurrent maintainer:\tBruce Momjian ([email protected])\n\nThe most recent version of this document can be viewed at\nthe PostgreSQL web site, http://www.postgreSQL.org.\n\nA dash(-) marks changes that will appear in the next release.\n\n\nRELIABILITY\n-----------\n\nRESOURCES\n\n* Elog() does not free all its memory(Jan)\n* spinlock stuck problem when elog(FATAL) and elog(ERROR) inside bufmgr\n* Recover or force failure when disk space is exhausted\n\nPARSER\n\n* Disallow inherited columns with the same name as new columns\n* INSERT INTO ... SELECT with AS columns matching result columns problem\n* SELECT pg_class FROM pg_class generates strange error\n* Alter TABLE ADD COLUMN does not honor DEFAULT, add CONSTRAINT\n* Do not allow bpchar column creation without length\n* Select a[1] FROM test fails, it needs test.a[1]\n* Array index references without table name cause problems\n* Update table SET table.value = 3 fails\n* Creating index of TIMESTAMP & RELTIME fails, rename to DATETIME(Thomas)\n* SELECT foo UNION SELECT foo is incorrectly simplified to SELECT foo\n* INSERT ... SELECT ... GROUP BY groups by target columns not source columns\n* CREATE TABLE test (a char(5) DEFAULT text '', b int4) fails on INSERT\n* UNION with LIMIT fails\n\nVIEWS\n\n* Views containing aggregates sometimes fail(Jan)\n* Views with spaces in view name fail when referenced\n\nMISC\n\n* User who can create databases can modify pg_database table\n* Plpgsql does not handle quoted mixed-case identifiers\n\nENHANCEMENTS\n------------\n\nURGENT\n\n* Add referential integrity\n* Add OUTER joins, left and right(Thomas)\n* Allow long tuples by chaining or auto-storing outside db (chaining,large objs)\n* Eliminate limits on query length\n* Fix memory leak for expressions?, aggregates?\n\nEXOTIC FEATURES\n\n* Add sql3 recursive unions\n* Add the concept of dataspaces\n* Add replication of distributed databases\n* Allow queries across multiple databases\n\nADMIN\n\n* Better interface for adding to pg_group\n* More access control over who can create tables and access the database\n* Add syslog functionality\n* Allow elog() to return error codes, not just messages\n* Allow international error message support and add error codes\n* Generate postmaster pid file and remove flock/fcntl lock code\n* Add ability to specifiy location of lock/socket files\n\nTYPES\n\n* Add BIT, BIT VARYING\n* Nchar (as distinguished from ordinary varchar),\n* Domain capability\n* Add STDDEV/VARIANCE() function for standard deviation computation/variance\n* Allow compression of large fields or a compressed field type\n* Large objects\n\to Fix large object mapping scheme, own typeid or reltype(Peter)\n\to Allow large text type to use large objects(Peter)\n\to Not to stuff everything as files in a single directory, hash dirs\n\to Allow large object vacuuming\n* Allow pg_descriptions when creating types, tables, columns, and functions\n* Add IPv6 capability to INET/CIDR types\n* Make a separate SERIAL type?\n* Store binary-compatible type information in the system\n* Allow user to define char1 column\n* Add support for & operator\n* Allow LOCALE on a per-column basis, default to ASCII\n* Allow array on int8[]\n* 
Remove Money type, add money formatting for decimal type\n* Declare typein/out functions in pg_proc with a special \"C string\" data type\n* Add non-large-object binary field\n* Add index on NUMERIC type\n\nVIEWS\n\n* Allow DISTINCT on views\n* Allow views of aggregate columns\n* Allow views with subselects\n\nINDEXES\n\n* Allow CREATE INDEX zman_index ON test (date_trunc( 'day', zman ) datetime_ops)\n fails index can't store constant parameters\n* Allow creation of functional indexes to use default types\n* Permissions on indexes - prevent them?\n* Allow SQL function indexes\n* Add FILLFACTOR to index creation\n* Allow indexing of LIKE with locale character sets\n* Allow indexing of more than eight columns\n\nCOMMANDS\n\n* ALTER TABLE ADD COLUMN to inherited table puts column in wrong place\n* Add ALTER TABLE DROP/ALTER COLUMN feature\n* Allow CLUSTER on all tables at once, and improve CLUSTER\n* Generate error on CREATE OPERATOR of ~~, ~ and ~*\n* Add SIMILAR TO to allow character classes, 'pg_[a-c]%'\n* Auto-destroy sequence on DROP of table with SERIAL(Ryan)\n* Allow LOCK TABLE tab1, tab2, tab3 so all tables locked in unison\n* Allow INSERT/UPDATE of system-generated oid value for a row\n* Allow ESCAPE '\\' at the end of LIKE for ANSI compliance\n* Rewrite the LIKE handling by rewriting the user string with the \n supplied ESCAPE\n* Move LIKE index optimization handling to the optimizer\n \nCLIENTS\n\n* Make NULL's come out at the beginning or end depending on the \n ORDER BY direction\n* Allow flag to control COPY input/output of NULLs\n* Update reltuples from COPY command\n* Allow psql \\copy to specify delimiters\n* Add a function to return the last inserted oid, for use in psql scripts\n* Allow psql to print nulls as distinct from \"\"(?)\n* PQrequestCancel() be able to terminate backend waiting for lock\n\nMISC\n\n* Increase identifier length(NAMEDATALEN) if small performance hit\n* Allow row re-use without vacuum(Vadim)\n* Create a background process for each database that runs while\n database is idle, finding superseded rows, gathering stats and vacuuming\n* Add UNIQUE capability to non-btree indexes\n* Certain indexes will not shrink, i.e.
oid indexes with many inserts\n* Restore unused oid's on backend exit if no one else has gotten oids\n* Have UPDATE/DELETE clean out indexes\n* Allow WHERE restriction on ctid\n* Allow cursors to be DECLAREd/OPENed/CLOSEed outside transactions\n* Allow PQrequestCancel() to terminate when in waiting-for-lock state\n* Transaction log, so re-do log can be on a separate disk,\n with after-row images\n* Populate backend status area and write program to dump status data\n* Make oid use unsigned int more reliably, pg_atoi()\n* Allow subqueries in target list\n* Put sort files, large objects in their own directory\n* Do autocommit so always in a transaction block\n* Show location of syntax error in query\n* Redesign the function call interface to handle NULLs better(Jan)\n* Document/trigger/rule so changes to pg_shadow create pg_pwd\n* Missing optimizer selectivities for date, r-tree, etc.\n* Overhaul mdmgr/smgr to fix double unlinking and double opens, cleanup\n* Overhaul bufmgr/lockmgr/transaction manager\n* Tables that start with xinv are confused with large objects\n* Add PL/Perl(Mark Hollomon)\n\n\nPERFORMANCE\n-----------\n\nFSYNC\n\n* Allow transaction commits with rollback with no-fsync performance\n* Prevent fsync in SELECT-only queries\n\nINDEXES\n\n* Use indexes in ORDER BY for restrictive data sets, min(), max()\n* Pull requested data directly from indexes, bypassing heap data\n* Use index to restrict rows returned by multi-key index when used with\n non-consecutive keys or OR clauses, so fewer heap accesses\n* Convert function(constant) into a constant for index use\n* Allow LIMIT ability on single-table queries that have no ORDER BY to use\n a matching index\n* Improve LIMIT processing by using index to limit rows processed\n* Have optimizer take LIMIT into account when considering index scans\n* Make index creation use psort code, because it is now faster(Vadim)\n* Allow creation of sort temp tables > 1 Gig\n* Create more system table indexes for faster cache lookups\n* fix indexscan() so it does not leak memory by not requiring caller to free\n* Improve _bt_binsrch() to handle equal keys better, remove _bt_firsteq()(Tom)\n\nCACHE\n\n* Cache most recent query plan(s?)\n* Shared catalog cache, reduce lseek()'s by caching table size in shared area\n* elog() flushes cache, try invalidating just entries from current xact,\n perhaps using invalidation cache\n\n\nMISC\n\n* Allow compression of log and meta data\n* Update pg_statistic table to remove operator column\n* Allow char() not to use variable-sized header to reduce disk size\n* Do async I/O to do better read-ahead of data\n* Fix memory exhaustion when using many OR's\n* Get faster regex() code from Henry Spencer <[email protected]>\n when it is available\n* Use mmap() rather than SYSV shared memory(?)\n* Process const = const parts of OR clause in separate pass\n* Make oid use oidin/oidout not int4in/int4out in pg_type.h\n* Improve Subplan list handling\n* Allow Subplans to use efficient joins(hash, merge) with upper variable\n* use fmgr_info()/fmgr_faddr() instead of fmgr() calls in high-traffic\n places, like GROUP BY, UNIQUE, index processing, etc.\n* improve dynamic memory allocation by introducing tuple-context memory\n allocation\n* fix memory leak in cache code when non-existent table is referenced\n* In WHERE x=3 AND x=y, add y=3\n* pass atttypmod through parser in more cases(Bruce)\n* remove duplicate type in/out functions for disk and net\n\nSOURCE CODE\n-----------\n* Add use of 'const' for variables in source tree\n* Fix C
optimizer problem where fmgr_ptr calls return different types\n\n\n---------------------------------------------------------------------------\n\n\nDevelopers who have claimed items are:\n--------------------------------------\n\t* Billy is Billy G. Allie <[email protected]>\n\t* Brook is Brook Milligan <[email protected]>\n\t* Bruce is Bruce Momjian<[email protected]>\n\t* Bryan is Bryan Henderson<[email protected]>\n\t* D'Arcy is D'Arcy J.M. Cain <[email protected]>\n\t* David is David Hartwig <[email protected]>\n\t* Edmund is Edmund Mergl <[email protected]>\n\t* Goran is Goran Thyni <[email protected]>\n\t* Hiroshi is Hiroshi Inoue<[email protected]>\n\t* Jan is Jan Wieck <[email protected]>\n \t* Marc is Marc Fournier <[email protected]>\n\t* Massimo Dal Zotto <[email protected]>\n\t* Michael is Michael Meskes <[email protected]>\n\t* Oleg is Oleg Bartunov <[email protected]>\n\t* Peter is Peter T Mount <[email protected]>\n\t* Ryan is Ryan Bradetich <[email protected]>\n\t* Stefan Simkovics <[email protected]>\n\t* Tatsuo is Tatsuo Ishii <[email protected]>\n\t* Tom is Tom Lane <[email protected]>\n\t* Thomas is Thomas Lockhart <[email protected]>\n\t* TomH is Tom I Helbekkmo <[email protected]>\n\t* Vadim is \"Vadim B. Mikheev\" <[email protected]>\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Jul 1999 23:34:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Updated TODO list" }, { "msg_contents": "> ADMIN\n> \n> * Better interface for adding to pg_group\n> * More access control over who can create tables and access the database\n> * Add syslog functionality\n> * Allow elog() to return error codes, not just messages\n> * Allow international error message support and add error codes\n> * Generate postmaster pid file and remove flock/fcntl lock code\n> * Add ability to specifiy location of lock/socket files\n\nHow about:\n* Not storing passwords in plain text\n\n", "msg_date": "Thu, 8 Jul 1999 10:37:02 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > ADMIN\n> > \n> > * Better interface for adding to pg_group\n> > * More access control over who can create tables and access the database\n> > * Add syslog functionality\n> > * Allow elog() to return error codes, not just messages\n> > * Allow international error message support and add error codes\n> > * Generate postmaster pid file and remove flock/fcntl lock code\n> > * Add ability to specifiy location of lock/socket files\n> \n> How about:\n> * Not storing passwords in plain text\n\nBut we don't, do we? I thought they were hashed.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup.
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jul 1999 11:28:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > ADMIN\n> > \n> > * Better interface for adding to pg_group\n> > * More access control over who can create tables and access the database\n> > * Add syslog functionality\n> > * Allow elog() to return error codes, not just messages\n> > * Allow international error message support and add error codes\n> > * Generate postmaster pid file and remove flock/fcntl lock code\n> > * Add ability to specifiy location of lock/socket files\n> \n> How about:\n> * Not storing passwords in plain text\n\nI have just added this item:\n\n\t* Make postgres user have a password by default\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jul 1999 11:38:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > > ADMIN\n> > >\n> > > * Better interface for adding to pg_group\n> > > * More access control over who can create tables and access the database\n> > > * Add syslog functionality\n> > > * Allow elog() to return error codes, not just messages\n> > > * Allow international error message support and add error codes\n> > > * Generate postmaster pid file and remove flock/fcntl lock code\n> > > * Add ability to specifiy location of lock/socket files\n> >\n> > How about:\n> > * Not storing passwords in plain text\n> \n> But we don't, do we? I thougth they were hashed.\n\ndo\n select * from pg_shadow;\n\nI think that it was agreed that it is better when they can't be snatched\nfrom \nnetwork than to have them hashed in db. \nUsing currently known technologies we must either know the\noriginal password \nand use challenge-response on net, or else use plaintext (or equivalent)\non the wire.\n\n-------------------\nHannu\n", "msg_date": "Fri, 09 Jul 1999 10:58:52 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "From: Bruce Momjian <[email protected]>\n> > > ADMIN\n> > >\n> > How about:\n> > * Not storing passwords in plain text\n>\n> But we don't, do we?
I thougth they were hashed.\n>\n> do\n> select * from pg_shadow;\n>\n> I think that it was agreed that it is better when they can't bw snatched\n> from\n> network than to have them hashed in db.\n> Using currently known technologies we must either either know the\n> original password\n> and use challenge-response on net, or else use plaintext (or equivalent)\n> on the wire.\n\nI would be happier even with storing passwords at the server as a reversible\nhash. For example, xor all user passwords with some value (for example\n\"PostgreSQL\") and store base64(xor) strings instead of plain text.\n\nChallenge-response authentication based on MD5 or SHA hashing would be\nbetter, of course. A scheme like this would be reasonably secure:\n\n1. Client initiates connection.\n2. Server generates a long (16 byte) random value and passes it to the\nclient.\n3. Client generates a one way hash of the user ID, SHA(password), and the\nrandom number:\nhash := SHA(uid [+] SHA(password) [+] randomval)\nand sends openly uid and the hash back to the server\n4. Server reconstructs the hash using stored SHA(password) and compares it\nwith the received hash.\n\nEven more secure: don't store SHA(password) at the server but store\nSHA(password) XOR <mastervalue>.\n\nGene Sokolov.\n\n\n\n", "msg_date": "Fri, 9 Jul 1999 16:11:24 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Hashing passwords (was Updated TODO list)" }, { "msg_contents": "It would be nice if the password scheme you finally settle on can be\noptionally replaced (compile-time) by the password hash available native\non the OS. In the case of OpenBSD, the Blowfish-based replacement for the\nDES or MD5 based crypt(3) is better suited to resisting dictionary and\nother offline attacks by fast processors.\n\nThis suggestion is useful in case the shadow password file is compromised.\nIt is independent of any challenge-response protocol you apply upstream.\n\nCiao\n --Louis <[email protected]> \n\nLouis Bertrand http://www.bertrandtech.on.ca\nBertrand Technical Services, Bowmanville, ON, Canada \n\nOpenBSD: Because security matters. http://www.openbsd.org/\n\nOn Fri, 9 Jul 1999, Gene Sokolov wrote:\n\n> I would be happier even with storing passwords at the server as a reversible\n> hash. For example, xor all user passwords with some value (for example\n> \"PostgreSQL\") and store base64(xor) strings instead of plain text.\n> \n> Challenge-response authentication based on MD5 or SHA hashing would be\n> better, of course. A scheme like this would be reasonably secure:\n> \n> 1. Client initiates connection.\n> 2. Server generates a long (16 byte) random value and passes it to the\n> client.\n> 3. Client generates a one way hash of the user ID, SHA(password), and the\n> random number:\n> hash := SHA(uid [+] SHA(password) [+] randomval)\n> and sends openly uid and the hash back to the server\n> 4. 
Server reconstructs the hash using stored SHA(password) and compares it\n> with the received hash.\n> \n> Even more secure: don't store SHA(password) at the server but store\n> SHA(password) XOR <mastervalue>.\n> \n> Gene Sokolov.\n> \n> \n> \n> \n> \n> \n\n\n", "msg_date": "Fri, 9 Jul 1999 13:36:45 +0000 (GMT)", "msg_from": "Louis Bertrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hashing passwords (was Updated TODO list)" }, { "msg_contents": "Gene Sokolov wrote:\n> \n> From: Hannu Krosing <[email protected]>\n> >\n> > I think that it was agreed that it is better when they can't bw snatched\n> > from\n> > network than to have them hashed in db.\n> > Using currently known technologies we must either either know the\n> > original password\n> > and use challenge-response on net, or else use plaintext (or equivalent)\n> > on the wire.\n\nThis seems correct to me.\n\n> [snip] \n> Challenge-response authentication based on MD5 or SHA hashing would be\n> better, of course. A scheme like this would be reasonably secure:\n> \n> 1. Client initiates connection.\n> 2. Server generates a long (16 byte) random value and passes it to the\n> client.\n> 3. Client generates a one way hash of the user ID, SHA(password), and the\n> random number:\n> hash := SHA(uid [+] SHA(password) [+] randomval)\n> and sends openly uid and the hash back to the server\n> 4. Server reconstructs the hash using stored SHA(password) and compares it\n> with the received hash.\n\nBut Hannu's point was that you can guard against network sniffing or you\ncan guard\nagainst the evil 'select * pg_shadow', but not both.\n\nWhile you're scheme _does_ secure against packet sniffing, it doesn't do\nanything\nagainst the select. So, you might as well just store 'password' and pass\nback\n\nhash := SHA(uid [+] password [+] randomval).\n\nBut I fully admit that cryptography is not my strong suit.\n\n\n> \n> Even more secure: don't store SHA(password) at the server but store\n> SHA(password) XOR <mastervalue>.\n\nI don't see how. I certainly know _my_ password. So I can compute\nSHA(password). So,\nif I can see what the database is storing, figuring out <mastervalue> is\na no brainer.\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Fri, 09 Jul 1999 09:44:24 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hashing passwords (was Updated TODO list)" }, { "msg_contents": "From: Mark Hollomon <[email protected]>\n> > Challenge-response authentication based on MD5 or SHA hashing would be\n> > better, of course. A scheme like this would be reasonably secure:\n> >\n> > 1. Client initiates connection.\n> > 2. Server generates a long (16 byte) random value and passes it to the\n> > client.\n> > 3. Client generates a one way hash of the user ID, SHA(password), and\nthe\n> > random number:\n> > hash := SHA(uid [+] SHA(password) [+] randomval)\n> > and sends openly uid and the hash back to the server\n> > 4. Server reconstructs the hash using stored SHA(password) and compares\nit\n> > with the received hash.\n\n[snip]\n\n> While you're scheme _does_ secure against packet sniffing, it doesn't do\n> anything\n> against the select. So, you might as well just store 'password' and pass\n> back\n>\n> hash := SHA(uid [+] password [+] randomval).\n>\n> But I fully admit that cryptography is not my strong suit.\n\nI cannot fully agree:\n1. 
Select in this case would give you a value, which you have to use from a\n*custom* modified client. Any standard client would not be able to use it.\n2. Many people use the same or similar passwords for all logins. Just\nobfuscating the passwords would be beneficial for them.\n3. See below.\n\n> > Even more secure: don't store SHA(password) at the server but store\n> > SHA(password) XOR <mastervalue>.\n>\n> I don't see how. I certainly know _my_ password. So I can compute\n> SHA(password). So,\n> if I can see what the database is storing, figuring out <mastervalue> is\n> a no brainer.\n\nOk, make it SHA(pass) XOR SHA(mastervalue [+] uid). This way you can't get a\nuseful info without knowing the master value.\n\n\n\n", "msg_date": "Fri, 9 Jul 1999 18:08:12 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hashing passwords (was Updated TODO list)" }, { "msg_contents": "> It would be nice if the password scheme you finally settle on can be\n> optionally replaced (compile-time) by the password hash available native\n> on the OS. In the case of OpenBSD, the Blowfish-based replacement for the\n> DES or MD5 based crypt(3) is better suited to resisting dictionary and\n> other offline attacks by fast processors.\n\nOnce you say \"strong encryption\", you also say \"export controls\", \"wasenaar\"\nand \"avoid it if you can\". It means PgSQL team would have to maintain two\ndistributions - one for the US and one for the rest of the world. It's not\nlike it cannot be done. I just see no benefit in using encryption instead of\nhashing. There is no need for DES or Blowfish to justify the pain.\n\nGene Sokolov.\n\n\n", "msg_date": "Fri, 9 Jul 1999 18:21:57 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hashing passwords (was Updated TODO list)" }, { "msg_contents": "Gene Sokolov wrote:\n> \n> From: Mark Hollomon <[email protected]>\n> > While you're scheme _does_ secure against packet sniffing, it doesn't do\n> > anything\n> > against the select. So, you might as well just store 'password' and pass\n> > back\n> >\n> > hash := SHA(uid [+] password [+] randomval).\n> >\n> > But I fully admit that cryptography is not my strong suit.\n> \n> I cannot fully agree:\n> 1. Select in this case would give you a value, which you have to use from a\n> *custom* modified client. Any standard client would not be able to use it.\n> 2. Many people use the same or similar passwords for all logins. Just\n> obfuscating the passwords would be beneficial for them.\n> 3. See below.\n\nOkay, so only the 'casual cracker' is involved. I don't have a problem\nas long\nas long as that is stated.\n\n> \n> > > Even more secure: don't store SHA(password) at the server but store\n> > > SHA(password) XOR <mastervalue>.\n> >\n> > I don't see how. I certainly know _my_ password. So I can compute\n> > SHA(password). So,\n> > if I can see what the database is storing, figuring out <mastervalue> is\n> > a no brainer.\n> \n> Ok, make it SHA(pass) XOR SHA(mastervalue [+] uid). This way you can't get a\n> useful info without knowing the master value.\n\nThat is better. But, under the 'casual cracker' senario, I don't think\nthis is necessary.\n\nAlso, the unspoken assumption is that mastervalue is site specific, say\nrandomly\ngenerated at initdb time. But ouch, that sure makes upgrades dicey. 
And\nwe would\nhave to think carefully about where to store it.\n\n-- \n\nMark Hollomon\[email protected]\nESN 451-9008 (302)454-9008\n", "msg_date": "Fri, 09 Jul 1999 10:46:51 -0400", "msg_from": "\"Mark Hollomon\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hashing passwords (was Updated TODO list)" }, { "msg_contents": "No, I am not suggesting PostgreSQL bundle any strong crypto: simply take\nadvantage of what's available native on the host OS.\n\nBTW, the US export controls only apply to code written in the US. The\nWassenaar Arrangement specifically excludes free/open software.\n\nCiao\n --Louis <[email protected]> \n\nLouis Bertrand http://www.bertrandtech.on.ca\nBertrand Technical Services, Bowmanville, ON, Canada \nTel: +1.905.623.8925 Fax: +1.905.623.3852\n\nOpenBSD: Secure by default. http://www.openbsd.org/\n\nOn Fri, 9 Jul 1999, Gene Sokolov wrote:\n\n> Once you say \"strong encryption\", you also say \"export controls\", \"wasenaar\"\n> and \"avoid it if you can\". It means PgSQL team would have to maintain two\n> distributions - one for the US and one for the rest of the world. It's not\n> like it cannot be done. I just see no benefit in using encryption instead of\n> hashing. There is no need for DES or Blowfish to justify the pain.\n> \n> Gene Sokolov.\n> \n> \n> \n> \n> \n\n\n", "msg_date": "Fri, 9 Jul 1999 15:14:41 +0000 (GMT)", "msg_from": "Louis Bertrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hashing passwords (was Updated TODO list)" }, { "msg_contents": "> > But we don't, do we? I thougth they were hashed.\n> \n> do\n> select * from pg_shadow;\n> \n> I think that it was agreed that it is better when they can't bw snatched\n> from \n> network than to have them hashed in db. \n> Using currently known technologies we must either either know the\n> original password \n> and use challenge-response on net, or else use plaintext (or equivalent)\n> on the wire.\n\nYes, I remember now, we hash them with random salt before sending them\nto the client, and they are only visible to the postgres user.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 12:40:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> From: Bruce Momjian <[email protected]>\n> > > > ADMIN\n> > > >\n> > > How about:\n> > > * Not storing passwords in plain text\n> >\n> > But we don't, do we? I thougth they were hashed.\n> \n> maybe I miss something but it does not look so to me:\n> \n> [PostgreSQL 6.5.0 on i386-unknown-freebsd3.2, compiled by gcc 2.7.2.1]\n> \n> test1=> select * from pg_shadow;\n> usename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd|valuntil\n> --------+--------+-----------+--------+--------+---------+------+-----------\n> -----------------\n> postgres| 2000|t |t |t |t | |Sat Jan 31\n> 09:00:00 2037 MSK\n> afmmgr | 2001|f |t |f |t |mgrpwd|\n> afmusr | 2002|f |t |f |t |usrpwd|\n> (3 rows)\n\nYes, I remember now. We keep them in clear, because we send random\nsalt-encrypted versions over the wire. 
Only Postgresql can read this\ntable.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 12:46:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hashing passwords (was Updated TODO list)" }, { "msg_contents": "Why should anyone be able to read cleartext passwords, or even need to?\nPeople have a habit of reusing the same password for logins elsewhere.\nHash the password as it's entered and compare hashes. This way, even if\nthe password file (PostgreSQL's or the system's) is compromised, the\nattacker gains no extra information.\n\nCiao\n --Louis <[email protected]> \n\nLouis Bertrand http://www.bertrandtech.on.ca\nBertrand Technical Services, Bowmanville, ON, Canada \nTel: +1.905.623.8925 Fax: +1.905.623.3852\n\nOpenBSD: Secure by default. http://www.openbsd.org/\n\nOn Fri, 9 Jul 1999, Bruce Momjian wrote:\n\n> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > From: Bruce Momjian <[email protected]>\n> > > > > ADMIN\n> > > > >\n> > > > How about:\n> > > > * Not storing passwords in plain text\n> > >\n> > > But we don't, do we? I thougth they were hashed.\n> > \n> > maybe I miss something but it does not look so to me:\n> > \n> > [PostgreSQL 6.5.0 on i386-unknown-freebsd3.2, compiled by gcc 2.7.2.1]\n> > \n> > test1=> select * from pg_shadow;\n> > usename |usesysid|usecreatedb|usetrace|usesuper|usecatupd|passwd|valuntil\n> > --------+--------+-----------+--------+--------+---------+------+-----------\n> > -----------------\n> > postgres| 2000|t |t |t |t | |Sat Jan 31\n> > 09:00:00 2037 MSK\n> > afmmgr | 2001|f |t |f |t |mgrpwd|\n> > afmusr | 2002|f |t |f |t |usrpwd|\n> > (3 rows)\n> \n> Yes, I remember now. We keep them in clear, because we send random\n> salt-encrypted versions over the wire. Only Postgresql can read this\n> table.\n> \n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n> \n\n\n", "msg_date": "Fri, 9 Jul 1999 21:34:23 +0000 (GMT)", "msg_from": "Louis Bertrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Hashing passwords (was Updated TODO list)" }, { "msg_contents": "From: Louis Bertrand <[email protected]>\n> No, I am not suggesting PostgreSQL bundle any strong crypto: simply take\n> advantage of what's available native on the host OS.\n>\n> BTW, the US export controls only apply to code written in the US. The\n> Wassenaar Arrangement specifically excludes free/open software.\n\nNot every OS has strong encryption built-in. And that means PgSQL would have\nto maintain a crypto package for the OSes without the strong crypto. Thus\nthe benefits of using the OS encryption would be less significant.\n\nGene Sokolov.\n\n\n", "msg_date": "Mon, 12 Jul 1999 10:27:30 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hashing passwords (was Updated TODO list)" }, { "msg_contents": "I completely agree with Louis. It's not just the hacker: there is no need\nfor sysadmin to know passwords as well. 
I believe the security scheme where\nsysadmin or anyone has to take action in order *not* to see passwords is\nflawed.\n\nI think the following solution would be satisfactory:\nStore SHA(password) XOR SHA(mastervalue [+] uid). In case it's difficult to\nalter the wire protocol, store password XOR SHA(mastervalue [+] uid). Either\nway no one can get useful info without knowing the master value. Even simple\npassword XOR <mastervalue> would be helpful.\n\nGene Sokolov.\n\nFrom: Louis Bertrand <[email protected]>\n> Why should anyone be able to read cleartext passwords, or even need to?\n> People have a habit of reusing the same password for logins elsewhere.\n> Hash the password as it's entered and compare hashes. This way, even if\n> the password file (PostgreSQL's or the system's) is compromised, the\n> attacker gains no extra information.\n>\n> > > From: Bruce Momjian <[email protected]>\n> > Yes, I remember now. We keep them in clear, because we send random\n> > salt-encrypted versions over the wire. Only Postgresql can read this\n> > table.\n\n\n", "msg_date": "Mon, 12 Jul 1999 10:37:47 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Hashing passwords (was Updated TODO list)" }, { "msg_contents": "I can \"select * from pgshadow\" as the database owner.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Bruce Momjian\n> Sent: 09 July 1999 17:41\n> To: Hannu Krosing\n> Cc: Gene Sokolov; PostgreSQL-development\n> Subject: Re: [HACKERS] Updated TODO list\n> \n> \n> > > But we don't, do we? I thougth they were hashed.\n> > \n> > do\n> > select * from pg_shadow;\n> > \n> > I think that it was agreed that it is better when they \n> can't bw snatched\n> > from \n> > network than to have them hashed in db. \n> > Using currently known technologies we must either either know the\n> > original password \n> > and use challenge-response on net, or else use plaintext \n> (or equivalent)\n> > on the wire.\n> \n> Yes, I remember now, we hash them with random salt before sending them\n> to the client, and they are only visible to the postgres user.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, \n> Pennsylvania 19026\n> \n> \n", "msg_date": "Mon, 12 Jul 1999 10:09:51 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Updated TODO list" }, { "msg_contents": "I found this at freshmeat.net:\n------------------------------\nSecure Remote Password (SRP) is a password-based authentication and\n key exchange mechanism where no information about the password is\n leaked during the authentication process. It does not require any\npublic\n key cryptography, yet even if one were to eavesdrop on the\n authentication process, no information which would aid in guessing\nthe\n password can be obtained (in theory). There are some reworked Telnet\n and FTP clients and servers available already.\nhttp://srp.stanford.edu/srp/\n\nIt stores encrypted passwords on the server (not simple XOR), sends\ndifferent\ndata over the wire every time, it's is impossible to listen on the wire\nand\ncompute the password (even with the simplest passwords).\n\nsee http://srp.stanford.edu/srp/design.html\n\n/* m */\n\nGene Sokolov wrote:\n> \n> I completely agree with Louis. 
It's not just the hacker: there is no need\n> for sysadmin to know passwords as well. I believe the security scheme where\n> sysadmin or anyone has to take action in order *not* to see passwords is\n> flawed.\n> \n> I think the following solution would be satisfactory:\n> Store SHA(password) XOR SHA(mastervalue [+] uid). In case it's difficult to\n> alter the wire protocol, store password XOR SHA(mastervalue [+] uid). Either\n> way no one can get useful info without knowing the master value. Even simple\n> password XOR <mastervalue> would be helpful.\n> \n> Gene Sokolov.\n> \n> From: Louis Bertrand <[email protected]>\n> > Why should anyone be able to read cleartext passwords, or even need to?\n> > People have a habit of reusing the same password for logins elsewhere.\n> > Hash the password as it's entered and compare hashes. This way, even if\n> > the password file (PostgreSQL's or the system's) is compromised, the\n> > attacker gains no extra information.\n> >\n> > > > From: Bruce Momjian <[email protected]>\n> > > Yes, I remember now. We keep them in clear, because we send random\n> > > salt-encrypted versions over the wire. Only Postgresql can read this\n> > > table.\n", "msg_date": "Mon, 12 Jul 1999 13:33:03 +0200", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Hashing passwords (was Updated TODO list)" }, { "msg_contents": "Another nice thing with SRP is that it is a mutual authentication. A\nthird party cannot say \"hey i'm the server, please connect to me. Sure,\nyour password is correct, start sending queries... INSERT? ok, sure,\nINSERT 1 1782136. go on...\" and steal a lot of data... the SRP client\nalways knows if it is talking to the real thing. No more third party\nattacks...\nhttp://srp.stanford.edu/srp/others.html\n\n/* m */\n\n\nGene Sokolov wrote:\n> \n> I completely agree with Louis. It's not just the hacker: there is no need\n> for sysadmin to know passwords as well. I believe the security scheme where\n> sysadmin or anyone has to take action in order *not* to see passwords is\n> flawed.\n> \n> I think the following solution would be satisfactory:\n> Store SHA(password) XOR SHA(mastervalue [+] uid). In case it's difficult to\n> alter the wire protocol, store password XOR SHA(mastervalue [+] uid). Either\n> way no one can get useful info without knowing the master value. Even simple\n> password XOR <mastervalue> would be helpful.\n> \n> Gene Sokolov.\n> \n> From: Louis Bertrand <[email protected]>\n> > Why should anyone be able to read cleartext passwords, or even need to?\n> > People have a habit of reusing the same password for logins elsewhere.\n> > Hash the password as it's entered and compare hashes. This way, even if\n> > the password file (PostgreSQL's or the system's) is compromised, the\n> > attacker gains no extra information.\n> >\n> > > > From: Bruce Momjian <[email protected]>\n> > > Yes, I remember now. We keep them in clear, because we send random\n> > > salt-encrypted versions over the wire. Only Postgresql can read this\n> > > table.\n", "msg_date": "Mon, 12 Jul 1999 13:50:49 +0200", "msg_from": "Mattias Kregert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Hashing passwords (was Updated TODO list)" }, { "msg_contents": "> I can \"select * from pgshadow\" as the database owner.\n\nAre you saying you can do this as a database owner, not the postgres\nuser? 
I just tried it, and was not able to see the table contents:\n\n\txx=> select * from pg_shadow;\n\tERROR: pg_shadow: Permission denied.\n\nYes, only the installation owner can do that. No way to do password stuff\nunless the 'postgres' user can access the passwords, right? Is that a\nproblem?\n\n\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Bruce Momjian\n> > Sent: 09 July 1999 17:41\n> > To: Hannu Krosing\n> > Cc: Gene Sokolov; PostgreSQL-development\n> > Subject: Re: [HACKERS] Updated TODO list\n> > \n> > \n> > > > But we don't, do we? I thougth they were hashed.\n> > > \n> > > do\n> > > select * from pg_shadow;\n> > > \n> > > I think that it was agreed that it is better when they \n> > can't bw snatched\n> > > from \n> > > network than to have them hashed in db. \n> > > Using currently known technologies we must either either know the\n> > > original password \n> > > and use challenge-response on net, or else use plaintext \n> > (or equivalent)\n> > > on the wire.\n> > \n> > Yes, I remember now, we hash them with random salt before sending them\n> > to the client, and they are only visible to the postgres user.\n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, \n> > Pennsylvania 19026\n> > \n> > \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jul 1999 09:23:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "I looked it up.\nOne problem with this protocol imho is extensive use of modular\nexponentiation. This operation is heavy. The login procedure would be\ncpu-intensive.\nSecond - the protocol covers secure authentication. Data is sent unencrypted\nanyway. I think it is not wise to spend a lot of effort on secure login\nwithout securing the data channel. \"Building secure PgSQL\" would be an\ninteresting subject of discussion though.\n\nGene Sokolov.\n\nFrom: Mattias Kregert <[email protected]>\n> Another nice thing with SRP is that it is a mutual authentication. A\n> third party cannot say \"hey i'm the server, please connect to me. Sure,\n> your password is correct, start sending queries... INSERT? ok, sure,\n> INSERT 1 1782136. go on...\" and steal a lot of data... the SRP client\n> always knows if it is talking to the real thing. No more third party\n> attacks...\n> http://srp.stanford.edu/srp/others.html\n>\n> /* m */\n\n\n", "msg_date": "Mon, 12 Jul 1999 17:32:46 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Hashing passwords (was Updated TODO list)" }, { "msg_contents": "On Fri, 9 Jul 1999, Louis Bertrand wrote:\n\n> It would be nice if the password scheme you finally settle on can be\n> optionally replaced (compile-time) by the password hash available native\n> on the OS.
In the case of OpenBSD, the Blowfish-based replacement for the\n> DES or MD5 based crypt(3) is better suited to resisting dictionary and\n> other offline attacks by fast processors.\n> \n> This suggestion is useful in case the shadow password file is compromised.\n> It is independent of any challenge-response protocol you apply upstream.\n\nPerhaps one could also allow the use of PAM where available. That would\nmake things infinitely easier for administrators.\n\n-- \nPeter Eisentraut\nPathWay Computing, Inc.\n\n", "msg_date": "Mon, 12 Jul 1999 09:34:55 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hashing passwords (was Updated TODO list)" }, { "msg_contents": "\nI'm using 6.4 so you may want to ignore everything I say.\nI created a new user with create db and create user permission.\nWith said new user I \"select * from pg_shadow\". Is that right?\n\nJohn.\n\n> \n> > I can \"select * from pgshadow\" as the database owner.\n> \n> Are you saying you can do this as a database owner, not the postgres\n> user? I just tried it, and was not able to see the table contents:\n> \n> \txx=> select * from pg_shadow;\n> \tERROR: pg_shadow: Permission denied.\n> \n> Yes, only the installation owner can do that. No way to do \n> password stuff\n> unless the 'postgres' user can access the passwords, righ? Is that a\n> problem?\n> \n> \n> > > -----Original Message-----\n> > > From: [email protected]\n> > > [mailto:[email protected]]On Behalf Of \n> Bruce Momjian\n> > > Sent: 09 July 1999 17:41\n> > > To: Hannu Krosing\n> > > Cc: Gene Sokolov; PostgreSQL-development\n> > > Subject: Re: [HACKERS] Updated TODO list\n> > > \n> > > \n> > > > > But we don't, do we? I thougth they were hashed.\n> > > > \n> > > > do\n> > > > select * from pg_shadow;\n> > > > \n> > > > I think that it was agreed that it is better when they \n> > > can't bw snatched\n> > > > from \n> > > > network than to have them hashed in db. \n> > > > Using currently known technologies we must either \n> either know the\n> > > > original password \n> > > > and use challenge-response on net, or else use plaintext \n> > > (or equivalent)\n> > > > on the wire.\n> > > \n> > > Yes, I remember now, we hash them with random salt before \n> sending them\n> > > to the client, and they are only visible to the postgres user.\n> > > \n> > > -- \n> > > Bruce Momjian | \nhttp://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, \n> > Pennsylvania 19026\n> > \n> > \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n", "msg_date": "Mon, 12 Jul 1999 14:45:02 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Updated TODO list" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> I'm using 6.4 so you may want to ignore everything I say.\n> I created a new user with create db and create user permission.\n> With said new user I \"select * from pg_shadow\". Is that right?\n\n6.4 and 6.5 should be the same in this area. If you say the user is a\n'super-user' in createdb, he will be able to access pg_shadow. 
If not,\nno access.\n\nThe ability to create databases is unrelated to that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jul 1999 10:48:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": ">\n> I can \"select * from pgshadow\" as the database owner.\n>\n\n You must be a database superuser or a superuser must have\n granted SELECT right for pg_shadow to you.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 12 Jul 1999 17:40:58 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "From: Jan Wieck <[email protected]>\n> >\n> > I can \"select * from pgshadow\" as the database owner.\n> >\n>\n> You must be a database superuser or a superuser must have\n> granted SELECT right for pg_shadow to you.\n>\n>\n> Jan\n\nDB admin has no business knowing other's passwords. The current security\nscheme is seriously flawed.\n\nGene Sokolov.\n\n\n", "msg_date": "Tue, 13 Jul 1999 10:34:35 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "[Charset koi8-r unsupported, filtering to ASCII...]\n> From: Jan Wieck <[email protected]>\n> > >\n> > > I can \"select * from pgshadow\" as the database owner.\n> > >\n> >\n> > You must be a database superuser or a superuser must have\n> > granted SELECT right for pg_shadow to you.\n> >\n> >\n> > Jan\n> \n> DB admin has no business knowing other's passwords. The current security\n> scheme is seriously flawed.\n> \n\nBut it is the db passwords, not the Unix passwords. How are we supposed\nto make this work if the db doesn't know the passwords, AND use random\nsalt over the wire?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jul 1999 12:55:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> DB admin has no business knowing other's passwords. 
The current security\n>> scheme is seriously flawed.\n\n> But it is the db passwords, not the Unix passwords.\n\nI think the original point was that some people use the same or related\npasswords for psql as for their login password.\n\nNonetheless, since we have no equivalent of \"passwd\" that would let a\ndb user change his db password for himself, it's a little silly to\ntalk about hiding db passwords from the admin who puts them in.\n\nIf this is a concern, we'd need to add both encrypted storage of\npasswords and a remote-password-change feature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jul 1999 13:20:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> DB admin has no business knowing others' passwords. The current security\n> >> scheme is seriously flawed.\n> \n> > But it is the db passwords, not the Unix passwords.\n> \n> I think the original point was that some people use the same or related\n> passwords for psql as for their login password.\n> \n> Nonetheless, since we have no equivalent of \"passwd\" that would let a\n> db user change his db password for himself, it's a little silly to\n> talk about hiding db passwords from the admin who puts them in.\n> \n> If this is a concern, we'd need to add both encrypted storage of\n> passwords and a remote-password-change feature.\n\nDoing the random salt over the wire would still be a problem.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jul 1999 14:05:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": ">\n> > Bruce Momjian <[email protected]> writes:\n> > >> DB admin has no business knowing others' passwords. The current security\n> > >> scheme is seriously flawed.\n> >\n> > > But it is the db passwords, not the Unix passwords.\n> >\n> > I think the original point was that some people use the same or related\n> > passwords for psql as for their login password.\n> >\n> > Nonetheless, since we have no equivalent of \"passwd\" that would let a\n> > db user change his db password for himself, it's a little silly to\n> > talk about hiding db passwords from the admin who puts them in.\n> >\n> > If this is a concern, we'd need to add both encrypted storage of\n> > passwords and a remote-password-change feature.\n>\n> Doing the random salt over the wire would still be a problem.\n\n And I don't like passwords at all. Well, up to now the bare\n PostgreSQL doesn't need anything else. But would it really\n hurt to use ssl in the case someone needs security? I don't\n know exactly, but the authorized keys might reside in a new\n system catalog. So such a secure installation can live with a\n wide open hba.conf and who can be who is controlled by\n pg_authorizedkeys then.\n\n As a side effect, all communication between the backend and\n the client would be crypted, so no wire listener could see\n anything :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 14 Jul 1999 00:21:54 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "From: Bruce Momjian <[email protected]>\n> > Bruce Momjian <[email protected]> writes:\n> > >> DB admin has no business knowing others' passwords. The current\nsecurity\n> > >> scheme is seriously flawed.\n> >\n> > > But it is the db passwords, not the Unix passwords.\n> >\n> > I think the original point was that some people use the same or related\n> > passwords for psql as for their login password.\n> >\n> > Nonetheless, since we have no equivalent of \"passwd\" that would let a\n> > db user change his db password for himself, it's a little silly to\n> > talk about hiding db passwords from the admin who puts them in.\n> >\n> > If this is a concern, we'd need to add both encrypted storage of\n> > passwords and a remote-password-change feature.\n>\n> Doing the random salt over the wire would still be a problem.\n\nThere is absolutely no technical problem with storing hashed passwords and\nstill sending salted hash over the wire. It was recently discussed in detail\nin \"Hashing passwords\" thread in pgsql-hackers list.\n\nGene Sokolov\n\n", "msg_date": "Wed, 14 Jul 1999 11:32:57 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "> I think the original point was that some people use the same or related\n> passwords for psql as for their login password.\n\nThis may sound cold, but isn't that their own problem? I can remember\nbeing told the first time I needed a passwd \"don't reuse this\".\nThere should come a time when people take their own security a little\nmore into their own hands, but hey that's just me :)\n\njeff\n\n======================================================\nJeff MacDonald\n\[email protected]\twebpage: http://hub.org/~jeff\n\[email protected]\tirc: bignose on EFnet\n======================================================\n\n", "msg_date": "Wed, 14 Jul 1999 09:12:05 -0300 (ADT)", "msg_from": "Jeff MacDonald <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list " }, { "msg_contents": "\nI like MS SQL Server on NT, it has integrated security. ;-)\nAsking ordinary users to remember extra passwords\nis not going to work. Either they won't do it or they\nwill use lots of easy ones like 'mouse', child's name, etc.\n\nJohn.\n\n> > I think the original point was that some people use the \n> same or related\n> > passwords for psql as for their login password.\n> \n> This may sound cold, but isn't that their own problem? 
I can remember\n> being told the first time I needed a passwd \"don't reuse this\".\n> There should come a time when people take their own security a little\n> more into their own hands, but hey that's just me :)\n> \n> jeff\n> \n> ======================================================\n> Jeff MacDonald\n> \[email protected]\twebpage: http://hub.org/~jeff\n> \[email protected]\tirc: bignose on EFnet\n> ======================================================\n> \n> \n> \n", "msg_date": "Wed, 14 Jul 1999 13:55:21 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Updated TODO list " }, { "msg_contents": "> I think the original point was that some people use the same or related\n> passwords for psql as for their login password.\n\nWell, you can't expect the pedestrians out here to remember two different\npasswords. The fact that pgsql passwords are all lowercase makes this kind\nof tough though. So, then you have the option of storing passwords in\nplain text, readable to the db admin, which is unacceptable, or storing no\npassword at all which leaves you with ident.\n\nAlso, when you use things like PHP or run scripts/programs from cron, you\ncan't really have people enter a password. Hardcoding passwords seems to\nbe suggested by a lot of people, but that's ridiculous.\n\nI think what many people discussed about separating the authentication\nmethod into a compile-time option would be a good idea. Then the admin can\ndecide whether to use the current system, SSL, ssh(?), PAM, whatever.\nPerhaps that would also take some load off the developers who would\nprobably much rather develop a DBMS than authentication systems.\n\nI've posted this a while ago on one of the general lists, about whether\nthere is a PAM-enabling patch available, but evidently I got the answer\nhere. :(\n\n-- \nPeter Eisentraut\nPathWay Computing, Inc.\n\n", "msg_date": "Wed, 14 Jul 1999 10:22:13 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "> > Doing the random salt over the wire would still be a problem.\n> \n> There is absolutely no technical problem with storing hashed passwords and\n> still sending salted hash over the wire. It was recently discussed in detail\n> in \"Hashing passwords\" thread in pgsql-hackers list.\n\nBut you are hashing it with a secret known by the database administrator,\nand someone who knows any password, like their own, can guess the secret by\nlooking at the hashed version, no?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 11:01:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "> > I think the original point was that some people use the same or related\n> > passwords for psql as for their login password.\n> \n> This may sound cold, but isn't that their own problem? 
If we decided the postgres user has to be able\nto know the password, we are stuck requiring people to use a different\npassword for the database if the postgres user is not trusted as much as\nthe system owner.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 11:14:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > Doing the random salt over the wire would still be a problem.\n> >\n> > There is absolutely no technical problem with storing hashed passwords and\n> > still sending salted hash over the wire. It was recently discussed in detail\n> > in \"Hashing passwords\" thread in pgsql-hackers list.\n> \n> But you are hashing it with a secret known by the database adminstrator,\n> and someone knows any password, like their own, can guess the secret by\n> looking at the hashed version, no?\n\nIf the algorithm used for Unix passwords is used, it shouldn't be a\nproblem, as the \"secret\" is the password itself, which the client uses\nto encrypt a 64 bit block of zeros, then it encrypts the resulting\n64-bits of cyphertext with the password -- this process is repeated\nuntil the 64-bit block of zeros has been encrypted (usually with DES) 25\ntimes, using the salt to randomize things inside the DES algorithm\nitself -- DES hardware chips that don't know about salt can't deal with\nit. The encrypted zeros are then sent down the wire and compared to the\nstored \"hash\". \n\nUsing this algorithm, the only thing that goes over the wire is the\nsalted 64-bit cyphertext -- whose plaintext is a 64-bit block of zeros.\n\nSo, the only commonality between the stored \"passwords\" is the 64-bit\nblock of zeros -- which is why dictionary attacks and brute force\nmethods have to be used. Of course, you could always use another crypt\nalgorithm instead of DES. However, the standard unix crypt() call is\nexportable -- it's only 56-bits. MD5 and RSA have been used by various\nvendors to do the same job. \nAs an administrator, while I know the plaintext of the hash (64-bit\nblock of zeros), and I have the salted cyphertext of the hash (13\ncharacters, the first two of which are the salt), and I have the\nalgorithm -- but none of that helps me find out the secret used to\nencrypt the plaintext.\n\nThis makes it as secure as the Unix password system itself -- which,\nAFAIK, no one has ever been able to DIRECTLY extract the password for\nthe cyphertext (brute forcing the password doesn't count).\n\nHowever, there is a security risk with doing the crypt() in the client\n-- if I know someone else's hashed password, I can bypass the client's\ncrypt() call (by hacking the client), and get authentication without\never knowing the user's password. 
To solve this, you don't transmit the\nfinal hash or the plaintext password from the client to the server --\nyou wrap the password transmittal in SSL (session keys and all that),\nand the backend does the crypt() and compare.\n\nTo summarize -- to use hashed passwords in a client-server system in a\nrelatively secure fashion:\n1.)\tUse a one-way algorithm, like Unix crypt(), where a known plaintext\nis heavily encrypted by the password;\n2.)\tThe password is never sent from the client to the server in\nplaintext form;\n3.)\tAn SSL-like mechanism is used to encrypt the transmission of the\npassword to the backend, which does the crypt() and compares it against\nthe stored hash -- this prevents us from using a known hash attack with\na hacked client.\n\nThere are two layers to this security cake -- network, and database.\n\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Wed, 14 Jul 1999 12:02:02 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Password redux (was:Re: [HACKERS] Updated TODO list)" }, { "msg_contents": "From: Bruce Momjian <[email protected]>\n> > > Doing the random salt over the wire would still be a problem.\n> >\n> > There is absolutely no technical problem with storing hashed passwords\nand\n> > still sending salted hash over the wire. It was recently discussed in\ndetail\n> > in \"Hashing passwords\" thread in pgsql-hackers list.\n>\n> But you are hashing it with a secret known by the database adminstrator,\nYes, DB admin can gain useable info.\n\n> and someone knows any password, like their own, can guess the secret by\n> looking at the hashed version, no?\n\nNo. Not any password, <master value> only. SHA or MD5 hash is one-way. There\nare many schemes. What I proposed is just one solution. Others may propose\nsomething better.\n\nHere are my thoughts:\n1. Yes, database admin can compromise security of the whole installation, no\nmatter what security scheme is selected.\n2. Even if database admin can compromise security, I would rather opt for a\nbetter security scheme, rather then give up completely.\n3. When you enter your password at any login prompt, the password either\nappears as *** or does not appear at all. Why do you think it is done this\nway? Same applies to select * from pg_shadow.\n4. Storing hashes instead of plain text passwords would divert all casual\nand \"peek over the shoulder\" hackers. It's two really different tasks -\nmemorizing a password or memorizing 24 random-looking bytes of a base64 hash\npresentation.\n6. People tend to reuse passwords. Getting one password helps to get other\npasswords too.\n7. I do not understand why it's so important to keep passwords in plain\ntext. Just a simple hash would help a lot.\n\nGene Sokolov.\n\n", "msg_date": "Thu, 15 Jul 1999 11:06:04 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "From: Bruce Momjian <[email protected]>\n> > > I think the original point was that some people use the same or\nrelated\n> > > passwords for psql as for their login password.\n> >\n> > This may sound cold, but isn't that their own problem. I can remmeber\n> > being told the first time i needed a passwd \"don't reuse this\" .\n> > There should come a tiem when people take their own security a little\n> > more into their own hands, but hey that's just me :)\n>\n> This may be the issue. 
If we decided the postgres user has to be able\n> to know the password, we are stuck requiring people to use a different\n> password for the database if the postgres user is not trusted as much as\n> the system owner.\n\nAssuming that people have limited memory, they really have only two\nchoices - reuse passwords, possibly with some modifications, or write\npasswords down. I think the first choice is the lesser evil.\n There are perfect solutions to the authentication problem. It's just a\nmatter of accepting one of these solutions.\n\nGene Sokolov\n\n", "msg_date": "Thu, 15 Jul 1999 11:16:46 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "I agree with Gene. Following this discussion, I sense a reluctance to\nimplement stronger password schemes because of the extra development\nhassles (compatibility, crypto prohibitions, maintenance).\n\n1) Divide and conquer: the developers are concerned about both \"over the\nwire\" and server passwords. I suggest you focus on the server side and\nleave the over the wire security to the DB admin/sys.admin as an\ninstallation issue. If they choose to use SSL, SSH, IPsec or a home-grown\nauthentication handshake, that's of no concern to pgsql. Just think of it\nas a telnet session into the server.\n\n2) On the server side, use the native crypt(3) by default (or the NT\nequivalent) and store the password hash. The strength of the crypt will\nvary depending on the installation, but that's really up to the choice of\nOS and installation. If someone wants to patch for PAM, Kerberos or\nwhatever, that's fine too, as long as you can always revert back to the\nplain old crypt(3).\n\nCiao\n --Louis <[email protected]> \n\nLouis Bertrand http://www.bertrandtech.on.ca\nBertrand Technical Services, Bowmanville, ON, Canada \n\nOpenBSD: Secure by default. http://www.openbsd.org/\n\nOn Thu, 15 Jul 1999, Gene Sokolov wrote:\n\n> From: Bruce Momjian <[email protected]>\n> > > > Doing the random salt over the wire would still be a problem.\n> > >\n> > > There is absolutely no technical problem with storing hashed passwords\n> and\n> > > still sending salted hash over the wire. It was recently discussed in\n> detail\n> > > in \"Hashing passwords\" thread in pgsql-hackers list.\n> >\n> > But you are hashing it with a secret known by the database adminstrator,\n> Yes, DB admin can gain useable info.\n> \n> > and someone knows any password, like their own, can guess the secret by\n> > looking at the hashed version, no?\n> \n> No. Not any password, <master value> only. SHA or MD5 hash is one-way. There\n> are many schemes. What I proposed is just one solution. Others may propose\n> something better.\n> \n> Here are my thoughts:\n> 1. Yes, database admin can compromise security of the whole installation, no\n> matter what security scheme is selected.\n> 2. Even if database admin can compromise security, I would rather opt for a\n> better security scheme, rather then give up completely.\n> 3. When you enter your password at any login prompt, the password either\n> appears as *** or does not appear at all. Why do you think it is done this\n> way? Same applies to select * from pg_shadow.\n> 4. Storing hashes instead of plain text passwords would divert all casual\n> and \"peek over the shoulder\" hackers. It's two really different tasks -\n> memorizing a password or memorizing 24 random-looking bytes of a base64 hash\n> presentation.\n> 6. People tend to reuse passwords. 
Getting one password helps to get other\n> passwords too.\n> 7. I do not understand why it's so important to keep passwords in plain\n> text. Just a simple hash would help a lot.\n> \n> Gene Sokolov.\n> \n> \n> \n> \n\n\n", "msg_date": "Thu, 15 Jul 1999 14:10:11 +0000 (GMT)", "msg_from": "Louis Bertrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "> I agree with Gene. Following this discussion, I sense a reluctance to\n> implement stronger password schemes because of the extra development\n> hassles (compatibility, crypto prohibitions, maintenance).\n> \n> 1) Divide and conquer: the developers are concerned about both \"over the\n> wire\" and server passwords. I suggest you focus on the server side and\n> leave the over the wire security to the DB admin/sys.admin as an\n> installation issue. If they choose to use SSL, SSH, IPsec or a home-grown\n> authentication handshake, that's of no concern to pgsql. Just think of it\n> as a telnet session into the server.\n> \n> 2) On the server side, use the native crypt(3) by default (or the NT\n> equivalent) and store the password hash. The strength of the crypt will\n> vary depending on the installation, but that's really up to the choice of\n> OS and installation. If someone wants to patch for PAM, Kerberos or\n> whatever, that's fine too, as long as you can always revert back to the\n> plain old crypt(3).\n> \n\nI disagree. Over the wire seems more important than protecting the\npasswords from the eyes of the database administrator, which in _most_\ncases is the system owner anyway.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jul 1999 11:23:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "On Thu, 15 Jul 1999, Bruce Momjian wrote:\n\n> > 1) Divide and conquer: the developers are concerned about both \"over the\n> > wire\" and server passwords. I suggest you focus on the server side and\n> > leave the over the wire security to the DB admin/sys.admin as an\n> > installation issue. If they choose to use SSL, SSH, IPsec or a home-grown\n> > authentication handshake, that's of no concern to pgsql. Just think of it\n> > as a telnet session into the server.\n> > \n> > 2) On the server side, use the native crypt(3) by default (or the NT\n> > equivalent) and store the password hash. The strength of the crypt will\n> > vary depending on the installation, but that's really up to the choice of\n> > OS and installation. If someone wants to patch for PAM, Kerberos or\n> > whatever, that's fine too, as long as you can always revert back to the\n> > plain old crypt(3).\n> > \n> \n> I disagree. Over the wire seems more important than protecting the\n> passwords from the eyes of the database administrator, which in _most_\n> cases is the system owner anyway.\n\nAnd when it's not? People have a tendency to use passwords in more than\none place so they won't forget what they used (they can keep it narrowed\ndown to a couple passwords). Why would you want to make it visible to\nanyone? 
\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 15 Jul 1999 11:48:07 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" }, { "msg_contents": "Agreed: over the wire is _very_ important. The question remains: does the\npgsql development team want to take on the responsibility of implementing\nsuch a scheme at the application layer? There are plenty of very good\nlink/network/transport layer schemes in existence already. You'd just \nbe reinventing.\n\nIn a benign environment you don't worry about securing the wire (as with\nprotocols like telnet/rlogin, ftp and POP/IMAP). In a more cautious\nenvironment, you apply well-known and understood encryption/authentication\nschemes at a layer below the application (IPsec, SSH, SSL, Java security,\nKerberos, and there are plenty in the Microsoft world too).\n\nBut above all: do not store passwords in cleartext. It makes it\nridiculously easy for an attacker to take over user accounts. Let's say\nthey're kiddies who cracked root just by using a widely available exploit,\nthen, for free, they get the rest of the passwords in clear. Bonus!\n\nCiao\n --Louis <[email protected]> \n\nLouis Bertrand http://www.bertrandtech.on.ca\nBertrand Technical Services, Bowmanville, ON, Canada \nTel: +1.905.623.8925 Fax: +1.905.623.3852\n\nOpenBSD: Secure by default. http://www.openbsd.org/\n\nOn Thu, 15 Jul 1999, Bruce Momjian wrote:\n\n> > I agree with Gene. Following this discussion, I sense a reluctance to\n> > implement stronger password schemes because of the extra development\n> > hassles (compatibility, crypto prohibitions, maintenance).\n> > \n> > 1) Divide and conquer: the developers are concerned about both \"over the\n> > wire\" and server passwords. I suggest you focus on the server side and\n> > leave the over the wire security to the DB admin/sys.admin as an\n> > installation issue. If they choose to use SSL, SSH, IPsec or a home-grown\n> > authentication handshake, that's of no concern to pgsql. Just think of it\n> > as a telnet session into the server.\n> > \n> > 2) On the server side, use the native crypt(3) by default (or the NT\n> > equivalent) and store the password hash. The strength of the crypt will\n> > vary depending on the installation, but that's really up to the choice of\n> > OS and installation. If someone wants to patch for PAM, Kerberos or\n> > whatever, that's fine too, as long as you can always revert back to the\n> > plain old crypt(3).\n> > \n> \n> I disagree. Over the wire seems more important than protecting the\n> passwords from the eyes of the database administrator, which in _most_\n> cases is the system owner anyway.\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> \n> \n\n\n", "msg_date": "Thu, 15 Jul 1999 17:45:50 +0000 (GMT)", "msg_from": "Louis Bertrand <[email protected]>", "msg_from_op": false, "msg_subject": "Password thread (was: Re: [HACKERS] Updated TODO list)" }, { "msg_contents": "At 10:45 AM -0700 7/15/99, Louis Bertrand wrote:\n>Agreed: over the wire is _very_ important. The question remains: does the\n\n>But above all: do not store passwords in cleartext. It makes it\n>ridiculously easy for an attacker to take over user accounts. Let's say\n\nThere is a fundamental conflict here: If you want to encyrpt the stored\npasswords then they have to go over the wire in the clear. If you want the\npasswords encrypted over the wire then they need to be stored in the clear\non the machine. If you encrypt the channel (so you can encrypt the stored\npasswords and still protect the wire) then the conflict applies to how you\nset up the channel.\n\nI walked in in the middle of this discussion, but if we are creating a\nPG-unique authentication scheme I would hope that the PG passwords are not\nthose of the other unix user accounts.\n\nCurrently PG has a real grab-bag of authentication methods. This is nice,\nbut many of them are not very secure. If we can tie into something like\nSSH, IPsec, or SSL then that is definitely to be prefered to doing it all\nourselves.\n\nI wish I could recommend kerberos (which we already claim to support), but\nthe implementations I've seen seem buggy. NetBSD and Solaris both have it\nbuilt in, but there are subroutine name conflicts between the kerberos\nlibraries and some standard libraries on both platforms (different\nconflicts). I think it's an example of good US technology being destroyed\nby the ITAR restrictions. The overseas NetBSD developers, and a large\nfraction of the US ones, don't touch the kerberos stuff, so it suffers\nbitrot. Excuse the rant.\n\nSignature failed Preliminary Design Review.\nFeasibility of a new signature is currently being evaluated.\[email protected], or [email protected]\n", "msg_date": "Thu, 15 Jul 1999 16:34:25 -0700", "msg_from": "\"Henry B. Hotz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Password thread (was: Re: [HACKERS] Updated TODO list)" }, { "msg_contents": "> I wish I could recommend kerberos (which we already claim to support), but\n> the implementations I've seen seem buggy. NetBSD and Solaris both have it\n> built in, but there are subroutine name conflicts between the kerberos\n> libraries and some standard libraries on both platforms (different\n> conflicts). I think it's an example of good US technology being destroyed\n> by the ITAR restrictions. The overseas NetBSD developers, and a large\n> fraction of the US ones, don't touch the kerberos stuff, so it suffers\n> bitrot. Excuse the rant.\n> \n\nTotally agree kerberos is a great way to go, if we can.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jul 1999 22:22:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Password thread (was: Re: [HACKERS] Updated TODO list)" }, { "msg_contents": "From: Henry B. Hotz <[email protected]>\n> >Agreed: over the wire is _very_ important. The question remains: does the\n>\n> >But above all: do not store passwords in cleartext. 
It makes it\n> >ridiculously easy for an attacker to take over user accounts. Let's say\n>\n> There is a fundamental conflict here: If you want to encyrpt the stored\n> passwords then they have to go over the wire in the clear. If you want\nthe\n\nI have repeated it several times already: there is NO conflict. The conflict\nis due to the present security scheme only. It's purely technical, nothing\nmore.\n\nYes, in any security scheme (short of full blown RSA) you still have to\nstore something at the server which can be used to gain access to the\ndatabase if stolen. But that does not have to be the cleartext password\nitself.\n\nGene Sokolov.\n\n\n\n", "msg_date": "Fri, 16 Jul 1999 12:10:36 +0400", "msg_from": "\"Gene Sokolov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Password thread (was: Re: [HACKERS] Updated TODO list)" }, { "msg_contents": "Bruce Momjian wrote:\n\n> I disagree. Over the wire seems more important than protecting the\n> passwords from the eyes of the database administrator, which in _most_\n> cases is the system owner anyway.\n\nNo,\n\n both are equally important. There is a good reason why even\n root cannot see cleartext unix passwords. And there's a good\n reason for doing something different over the net (why do we\n use ssh when accessing hub.org?).\n\n Well, the sysadmin could run some password cracker against\n shadow files. But if I ever notice that Marc uses a brute\n force method to crack my ones, I'll take a trip and break his\n neck (after breaking every single finger, one by one, hour by\n hour - you'll hear him over there).\n\n Hosts I consider trusted ones are hosts where I trust the OS\n and the admin. It's O.K. if an admin takes a look into some\n files. And if he then finds some of my private xxx pics, so\n be it - as long as he doesn't pin them onto the blackboard\n under \"Jan's private pics\". But it's not O.K. if that look\n means he'll see cleartext passwords without having to take\n extra cracking steps.\n\n To store really crypted passwords in the database, I think\n it's required to send cleartext over the wire. So we have to\n protect that at least until the authentication is done -\n optionally until disconnect.\n\n I haven't found much documentation yet how to use OpenSSL,\n and I even don't know if it really is what we need. But it\n has an Apache like license (free for private and commercial\n use).\n\n If it is what I think so far, it should be possible to enable\n ssl during configure and then tell in the hba.conf if\n password auth has to be ssl protected. Then we could easily\n send cleartext passwords over a protected channel. Thus,\n local traffic could be high speed while net traffic is\n securely crypted. But the admin decides what \"local\" means,\n so traffic on the backbone net (web-server->db-server) might\n be considered secure.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 16 Jul 1999 17:57:29 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" } ]
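To make the scheme Gene keeps calling technically unproblematic concrete, here is a minimal C sketch of a challenge-response exchange built on the portable crypt(3) Louis recommends: the server stores only a one-way hash, sends a fresh challenge salt at connect time, and the cleartext password never crosses the wire or sits in pg_shadow. Everything below -- the function names, the fixed per-user salt, the buffer sizes -- is illustrative rather than PostgreSQL code, and since crypt() folds in only the first eight key bytes, it shows the shape of the protocol, not production-grade cryptography.

/*
 * Sketch only: store crypt(password, user_salt); verify by re-crypting
 * the stored hash under a one-time challenge salt.  Link with -lcrypt
 * on most Unixes; some systems declare crypt() in <crypt.h> instead.
 */
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* At CREATE USER time: hash once, store the result, forget the password. */
static void
enroll(const char *password, const char *user_salt, char *stored, size_t n)
{
    strncpy(stored, crypt(password, user_salt), n - 1);
    stored[n - 1] = '\0';
}

/* Client side: rebuild the stored hash locally, bind it to the challenge. */
static void
respond(const char *password, const char *user_salt,
        const char *challenge, char *out, size_t n)
{
    char local[64];

    enroll(password, user_salt, local, sizeof(local));
    strncpy(out, crypt(local, challenge), n - 1);
    out[n - 1] = '\0';
}

/* Server side: it holds only the stored hash, never the password. */
static int
verify(const char *stored, const char *challenge, const char *response)
{
    return strcmp(crypt(stored, challenge), response) == 0;
}

int
main(void)
{
    char stored[64], answer[64];

    enroll("sekret", "ab", stored, sizeof(stored));         /* server side */
    respond("sekret", "ab", "Qz", answer, sizeof(answer));  /* client side */
    printf("%s\n", verify(stored, "Qz", answer) ? "ok" : "denied");
    return 0;
}

The caveat Lamar raises still applies: whoever steals the stored hash can compute the response without ever learning the password, so the hash is password-equivalent and an SSL-style protected channel underneath remains worthwhile.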
[ { "msg_contents": "ADMIN\n\n* Allow international error message support and add error codes\n\nThe JDBC driver already has the international error message support. All\nit needs are some translations of the errors.properties file.\n\nIt would be easy to add error codes to it.\n\nTYPES\n\n* Large objects\n\to Fix large object mapping scheme, own typeid or reltype(Peter)\n\to Allow large text type to use large objects(Peter)\n\nI hope to get these two done for 6.6\n\n\to Not to stuff everything as files in a single directory, hash\ndirs\n\to Allow large object vacuuming\n\nDo you mean vacuuming within a large objects table, or removing orphaned\nones?\nA solution for orphaning is in contrib, although it still needs some\nwork.\n\nOn the JDBC front, I'm planning for 6.6:\n\n* Array support\n* Large Object Stream support\n* More JDBC2 API implemented\n\nBut the big one will be to get the driver to use the current protocol.\nUp until now, it's been using the 6.3 backend protocol. Now that the\ncurrent driver supports JDBC2, I want to get it up to date.\n\nThis will mean that the 6.6 driver will not be backward compatible with\nearlier backends (I can't remember which one started using the current\nprotocol). The 6.5 driver does support 6.3 and 6.4.x backends.\n\nPeter\n\n-- \nPeter Mount\nEnterprise Support\nMaidstone Borough Council\nAny views stated are my own, and not those of Maidstone Borough Council.\n\n", "msg_date": "Thu, 8 Jul 1999 09:30:24 +0100 ", "msg_from": "Peter Mount <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Updated TODO list" }, { "msg_contents": "> ADMIN\n> \n> * Allow international error message support and add error codes\n> \n> The JDBC driver already has the international error message support. All\n> it needs are some translations of the errors.properties file.\n> \n> It would be easy to add error codes to it.\n\nGood. I think Vadim wants codes from the backend for scripts and\nstuff.\n\n> \n> TYPES\n> \n> * Large objects\n> \to Fix large object mapping scheme, own typeid or reltype(Peter)\n> \to Allow large text type to use large objects(Peter)\n> \n> I hope to get these two done for 6.6\n\nGood. I can help too.\n\n> \n> \to Not to stuff everything as files in a single directory, hash\n> dirs\n> \to Allow large object vacuuming\n\n> Do you mean vacuuming within a large objects table, or removing orphaned\n> ones?\n\nTom Lane says you can get multiple versions of a large object in the\nfile, and there is no way to remove them.\n\n> A solution for orphaning is in contrib, although it still needs some\n> work.\n> \n> On the JDBC front, I'm planning for 6.6:\n> \n> * Array support\n> * Large Object Stream support\n> * More JDBC2 API implemented\n> \n> But the big one will be to get the driver to use the current protocol.\n> Up until now, it's been using the 6.3 backend protocol. Now that the\n> current driver supports JDBC2, I want to get it up to date.\n\nCool.\n\n> This will mean that the 6.6 driver will not be backward compatible with\n> earlier backends (I can't remember which one started using the current\n> protocol). The 6.5 driver does support 6.3 and 6.4.x backends.\n\nThat is not a problem. We have given them enough releases that will\nwork with the new driver.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jul 1999 11:45:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO list" } ]
[ { "msg_contents": "Well,\n\n doing arbitrary tuple size should be as generic as possible.\n Thus I think the best place to do it is down in the heapam\n routines (heap_fetch(), heap_getnext(), heap_insert(), ...).\n I'm not 100% sure but nothing should access a heap relation\n going around them. Anyway, if there are places, then it's\n time to clean them up.\n\n What about adding one more ItemPointerData to the tuple\n header which holds the ctid of a DATA continuation tuple. If\n a tuple doesn't fit into one block, this will tell where to\n get the next chunk of tuple data building a chain until an\n invalid ctid is found. The continuation tuples can have a\n negative t_natts to be easily identified and ignored by\n scanning routines.\n\n By doing it this way we could also squeeze out some currently\n wasted space. All tuples that get inserted/updated are added\n to the end of the relation. If a tuple currently doesn't fit\n into the freespace of the actual last block, that freespace\n is wasted and the tuple is placed into a new allocated block\n at the end. So if there is 5K freespace and another 5.5K\n tuple is added, the relation grows effectively by 10.5K!\n\n I'm not sure how to handle this with vacuum, but I believe\n Vadim is able to put some well placed goto's that make it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 12:11:51 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Arbitrary tuple size" }, { "msg_contents": "Jan Wieck wrote:\n> \n> What about adding one more ItemPointerData to the tuple\n> header which holds the ctid of a DATA continuation tuple. If\n\nOh no. Fortunately we don't need this: we can just add a new flag\nto t_infomask and add the continuation tid at the end of the tuple chunk.\nOk?\n\n> a tuple doesn't fit into one block, this will tell where to\n> get the next chunk of tuple data building a chain until an\n> invalid ctid is found.\n...\n> \n> I'm not sure how to handle this with vacuum, but I believe\n> Vadim is able to put some well placed goto's that make it.\n\n-:)))\nOk, ok - I have a great number of goto-s in my pocket -:)\n\nVadim\n", "msg_date": "Thu, 08 Jul 1999 19:16:51 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" },
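To make Vadim's variant concrete before Bruce's reply: nothing below exists in the source tree -- HEAP_CHUNKED and the trailing-ctid convention are invented names purely to illustrate the proposed layout. A chunk that has a successor sets a spare t_infomask bit and carries the next chunk's ctid in its last bytes, so tuples that fit in one block pay nothing.

#include "postgres.h"
#include "access/htup.h"
#include "storage/itemptr.h"
#include <string.h>

#define HEAP_CHUNKED 0x4000     /* hypothetical spare t_infomask bit */

/* Does this chunk continue in another block? */
static bool
chunk_has_next(HeapTupleHeader hdr)
{
    return (hdr->t_infomask & HEAP_CHUNKED) != 0;
}

/* Per Vadim's suggestion the link hides in the tail of the chunk data,
 * so no header field is added and unchained tuples keep their layout. */
static void
chunk_next_tid(HeapTupleHeader hdr, uint32 chunk_len, ItemPointer next)
{
    memcpy(next,
           (char *) hdr + chunk_len - sizeof(ItemPointerData),
           sizeof(ItemPointerData));
}

A reader would then walk the chain with one fetch per ctid until the flag is clear, and the continuation chunks themselves would carry their own marker so sequential scans skip them -- the property Jan wanted from negative t_natts.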
{ "msg_contents": "> Well,\n> \n> doing arbitrary tuple size should be as generic as possible.\n> Thus I think the best place to do it is down in the heapam\n> routines (heap_fetch(), heap_getnext(), heap_insert(), ...).\n> I'm not 100% sure but nothing should access a heap relation\n> going around them. Anyway, if there are places, then it's\n> time to clean them up.\n> \n> What about adding one more ItemPointerData to the tuple\n> header which holds the ctid of a DATA continuation tuple. If\n> a tuple doesn't fit into one block, this will tell where to\n> get the next chunk of tuple data building a chain until an\n> invalid ctid is found. The continuation tuples can have a\n> negative t_natts to be easily identified and ignored by\n> scanning routines.\n> \n> By doing it this way we could also squeeze out some currently\n> wasted space. All tuples that get inserted/updated are added\n> to the end of the relation. If a tuple currently doesn't fit\n> into the freespace of the actual last block, that freespace\n> is wasted and the tuple is placed into a new allocated block\n> at the end. So if there is 5K freespace and another 5.5K\n> tuple is added, the relation grows effectively by 10.5K!\n> \n> I'm not sure how to handle this with vacuum, but I believe\n> Vadim is able to put some well placed goto's that make it.\n\nI agree this is the way to go. There is nothing I can think of that is\nlimited to how large a tuple can be. It is just accessing it from the\nheap routines that is the problem. If the tuple is alloc'ed to be used,\nwe can paste together the parts on disk and return one tuple. If they\nare accessing the buffer copy directly, we would have to be smart about\ngoing off the end of the disk copy and moving to the next segment.\n\nThe code is very clear now about accessing tuples or tuple copies.\nHopefully locking will not be an issue because you only need to lock the\nmain tuple. No one is going to see the secondary part of the tuple.\n\nIf Vadim can do MVCC, he certainly can handle this, with the help of\ngoto. :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jul 1999 11:53:58 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "> Jan Wieck wrote:\n> > \n> > What about adding one more ItemPointerData to the tuple\n> > header which holds the ctid of a DATA continuation tuple. If\n> \n> Oh no. Fortunately we don't need this: we can just add a new flag\n> to t_infomask and add the continuation tid at the end of the tuple chunk.\n> Ok?\n\nSounds good. I would rather not add stuff to the tuple header if we can\nprevent it.\n\n> > I'm not sure how to handle this with vacuum, but I believe\n> > Vadim is able to put some well placed goto's that make it.\n> \n> -:)))\n> Ok, ok - I have a great number of goto-s in my pocket -:)\n\nI can send you more.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jul 1999 12:18:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > -:)))\n> > Ok, ok - I have a great number of goto-s in my pocket -:)\n>\n> I can send you more.\n\n I have some cheap, spare longjmp()'s over here - anyone need\n them? :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 18:38:03 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > What about adding one more ItemPointerData to the tuple\n> > header which holds the ctid of a DATA continuation tuple. If\n> > a tuple doesn't fit into one block, this will tell where to\n> > get the next chunk of tuple data building a chain until an\n> > invalid ctid is found. The continuation tuples can have a\n> > negative t_natts to be easily identified and ignored by\n> > scanning routines.\n\n Yes, Vadim, putting a flag into the bits already there to\n tell it is much better. The information that a particular\n tuple is an extension tuple should also go there instead of\n misusing t_natts.\n\n>\n> I agree this is the way to go. There is nothing I can think of that is\n> limited to how large a tuple can be. It is just accessing it from the\n> heap routines that is the problem. If the tuple is alloc'ed to be used,\n> we can paste together the parts on disk and return one tuple. If they\n> are accessing the buffer copy directly, we would have to be smart about\n> going off the end of the disk copy and moving to the next segment.\n\n Who's accessing tuple attributes directly inside the buffer\n copy (not only the header which will still be unsplit at\n the top of the chain)?\n\n Aren't these situations where it is done restricted to system\n catalogs? I think we can live with the restriction that the\n tuple split will not be available for system relations\n because the only place where the limit hits us is pg_rewrite\n and that can be handled by redesigning the storage of rules\n which is already required by the rule recompilation TODO.\n\n I can't think that anywhere in the code a buffer from a user\n relation (except for sequences and that's another story) is\n accessed that clumsily.\n\n>\n> The code is very clear now about accessing tuples or tuple copies.\n> Hopefully locking will not be an issue because you only need to lock the\n> main tuple. No one is going to see the secondary part of the tuple.\n\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 19:07:32 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "Bruce Momjian wrote:\n\n> I agree this is the way to go. There is nothing I can think of that is\n> limited to how large a tuple can be.\n\n Ouch - I can.\n\n Having an index on a varlen field that now doesn't fit any\n more into an index block. Wouldn't this cause problems? Well\n it's bad database design to index fields that will receive\n such long data because indexing them will blow up the\n database but it must work anyway.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 19:17:17 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Bruce Momjian wrote:\n>> I agree this is the way to go. There is nothing I can think of that is\n>> limited to how large a tuple can be.\n\n> Ouch - I can.\n\n> Having an index on a varlen field that now doesn't fit any\n> more into an index block. Wouldn't this cause problems?\n\nAren't index tuples still tuples? Can't they be split just like\nregular tuples?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jul 1999 13:33:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size " }, { "msg_contents": "Tom Lane wrote:\n\n>\n> [email protected] (Jan Wieck) writes:\n> > Bruce Momjian wrote:\n> >> I agree this is the way to go. There is nothing I can think of that is\n> >> limited to how large a tuple can be.\n>\n> > Ouch - I can.\n>\n> > Having an index on a varlen field that now doesn't fit any\n> > more into an index block. Wouldn't this cause problems?\n>\n> Aren't index tuples still tuples? Can't they be split just like\n> regular tuples?\n\n Don't know, maybe.\n\n While looking for some places where tuple data might be\n accessed directly inside of the buffers I've searched for\n WriteBuffer() and friends. These are mostly used in the index\n access methods and some other places where I expected them,\n so index AM's have at least to be carefully visited when\n implementing tuple split.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 20:14:02 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "I wrote:\n\n>\n> Tom Lane wrote:\n> >\n> > Aren't index tuples still tuples? Can't they be split just like\n> > regular tuples?\n>\n> Don't know, maybe.\n\n Actually we have some problems with indices on text\n attributes when the content exceeds HALF of the blocksize:\n\n FATAL 1: btree: failed to add item to the page\n\n It crashes the backend AND seems to corrupt the index! Looks\n to me that at least the btree code needs to be able to store\n at minimum two items into one block and painfully fails if it\n can't.\n\n And just another one:\n\n pgsql=> create table t1 (a int4, b char(4000));\n CREATE\n pgsql=> create index t1_b on t1 (b);\n CREATE\n pgsql=> insert into t1 values (1, 'a');\n\n TRAP: Failed Assertion(\"!(( itid)->lp_flags & 0x01):\",\n File: \"nbtinsert.c\", Line: 361)\n\n Bruce: One more TODO item!\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 21:01:32 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "Going toward >8k tuples would be really good, but I suspect we may\nhave some difficulties with LO stuff once we implement it. Also it seems\nthat it's not worth it to adapt LOs to newly designed tuples. I think\nthe design of current LOs is so broken that we need to redesign them.\n\no it's slow: accessing a LO needs an open() that is not cheap. creating\nmany LOs makes data/base/DBNAME/ directory fat.\n\no it consumes lots of i-nodes\n\no it breaks the tuple abstraction: this makes it difficult to maintain\nthe code.\n\nI would propose the following for the new version of LO:\n\no create a new data type that represents the LO\n\no when defining the LO data type in a table, it actually points to a\nLO \"body\" in another place where it is physically stored.\n\no the storage for LO bodies would be a hidden table that contains\nseveral LOs, not a single one.\n\no we can have several tables for the LO bodies. Probably a LO body\ntable for each corresponding table (where LO data type is defined) is\nappropriate. \n\no it would be nice to place a LO table on a separate\ndirectory/partition from the original table where LO data type is\ndefined, since a LO body table could become huge.\n\nComments? Opinions?\n---\nTatsuo Ishii\n", "msg_date": "Fri, 09 Jul 1999 10:12:20 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size " }, { "msg_contents": "Jan Wieck wrote:\n> \n> Bruce Momjian wrote:\n> \n> > I agree this is the way to go. There is nothing I can think of that is\n> > limited to how large a tuple can be.\n> \n> Ouch - I can.\n> \n> Having an index on a varlen field that now doesn't fit any\n> more into an index block. Wouldn't this cause problems? Well\n> it's bad database design to index fields that will receive\n> such long data because indexing them will blow up the\n> database but it must work anyway.\n\nSeems that in other DBMSes len of index tuple is more restricted\nthan len of heap one. So I think we shouldn't worry about this case.\n\nVadim\n", "msg_date": "Fri, 09 Jul 1999 09:41:31 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "At 10:12 9/07/99 +0900, Tatsuo Ishii wrote:\n>\n>o create a new data type that represents the LO\n>\n\n\n\n>o when defining the LO data type in a table, it actually points to a\n>LO \"body\" in another place where it is physically stored.\n\nMuch as the purist in me hates the concept of hard links (as in Leon's suggestions), this *may* be a good application for them. Certainly that's how Dec(Oracle)/Rdb does it. Since most blobs will be totally rewritten when they are updated, this represents a slightly smaller problem in terms of MVCC.\n\n>o we can have several tables for the LO bodies. Probably a LO body\n>table for each corresponding table (where LO data type is defined) is\n>appropriate. \n\nDid you mean a table for each field? Or a table for each table (which may have more than 1 LO field). 
See comments below.\n\n>o it would be nice to place a LO table on a separate\n>directory/partition from the original table where LO data type is\n>defined, since a LO body table could become huge.\n\nI would very much like to see the ability to have multi-file databases and tables - ie. the ability to store a table or index in a separate file. Perhaps with a user-defined partitioning function for table rows. The idea being that:\n\n1. User specifies that a table can be stored in one of (say) three files.\n2. When a record is first stored, the partitioning function is called to determine the file 'storage area' to use. [or a random selection method is used]\n\nIf you are going to allow LOs to be stored in multiple files, it seems a pity not to add some or all of this feature.\n\n\nAdditionally, the issue of pg_dump support for LOs needs to be addressed.\n\n\nThat's about it for me,\n\nPhilip Warner.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 09 Jul 1999 11:58:57 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size " }, { "msg_contents": "> > I agree this is the way to go. There is nothing I can think of that is\n> > limited to how large a tuple can be. It is just accessing it from the\n> > heap routines that is the problem. If the tuple is alloc'ed to be used,\n> > we can paste together the parts on disk and return one tuple. If they\n> > are accessing the buffer copy directly, we would have to be smart about\n> > going off the end of the disk copy and moving to the next segment.\n> \n> Who's accessing tuple attributes directly inside the buffer\n> copy (not only the header which will still be unsplit at\n> the top of the chain)?\n\n\nEvery call to heap_getnext(), for one. It locks the buffer, and returns\na pointer to the tuple. The next heap_getnext(), or heap_endscan()\nreleases the lock. The cost of returning every tuple as palloc'ed\nmemory would be huge. We may be able to get away with just returning\npalloc'ed stuff for long tuples. That may be a simple, clean solution\nthat would be isolated.\n\nIn fact, if we want a copy, we call heap_copytuple() to return a\npalloc'ed copy. This interface has been cleaned up so it should be\nclear what is happening. The old code was messy about this.\n\nSee my comments from heap_fetch(), which does require the user to supply\na buffer variable, so they can unlock it when they are done. The old\ncode allowed you to pass a NULL as a buffer pointer, so there was no\nlocking done, and that is bad!\n\n---------------------------------------------------------------------------\n\n/* ----------------\n * heap_fetch - retrieve tuple with tid\n *\n * Currently ignores LP_IVALID during processing!\n *\n * Because this is not part of a scan, there is no way to\n * automatically lock/unlock the shared buffers.\n * For this reason, we require that the user retrieve the buffer\n * value, and they are required to BufferRelease() it when they\n * are done. If they want to make a copy of it before releasing it,\n * they can call heap_copytuple().\n * ----------------\n */\nvoid\nheap_fetch(Relation relation,\n Snapshot snapshot,\n HeapTuple tuple, \n Buffer *userbuf)\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 00:33:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" },
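To spell out the calling convention that comment demands, here is a sketch against the 6.5-era interface quoted above -- assuming, as that code does, that the caller passes the target TID in t_self and that t_data comes back NULL when no visible tuple is found. Illustrative only, not code from the tree.

#include "postgres.h"
#include "access/heapam.h"
#include "storage/bufmgr.h"
#include "utils/tqual.h"

/* Fetch the tuple at tid and return a palloc'd private copy, or NULL. */
static HeapTuple
fetch_copy(Relation rel, Snapshot snapshot, ItemPointer tid)
{
    HeapTupleData tuple;
    Buffer        buffer;
    HeapTuple     copy = NULL;

    tuple.t_self = *tid;
    heap_fetch(rel, snapshot, &tuple, &buffer);

    if (tuple.t_data != NULL)
        copy = heap_copytuple(&tuple);  /* copy while the buffer is held */

    /* The caller owns the buffer and must release it exactly once. */
    if (BufferIsValid(buffer))
        ReleaseBuffer(buffer);

    return copy;
}

The copy is the point Bruce makes: while the buffer is held, t_data points into shared memory, so a private heap_copytuple() is the only thing that is safe to keep around after the buffer is released.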
If they want to make a copy of it before releasing it,\n * they can call heap_copytuple().\n * ----------\n */\nvoid\nheap_fetch(Relation relation,\n Snapshot snapshot,\n HeapTuple tuple, \n Buffer *userbuf)\n\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 00:33:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "> > Aren't index tuples still tuples? Can't they be split just like\n> > regular tuples?\n> \n> Don't know, maybe.\n> \n> While looking for some places where tuple data might be\n> accessed directly inside of the buffers I've searched for\n> WriteBuffer() and friends. These are mostly used in the index\n> access methods and some other places where I expected them,\n> so index AM's have at least to be carefully visited when\n> implementing tuple split.\n\nSee my recent mail. heap_getnext and heap_fetch(). Can't get lower\naccess than that.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 00:36:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "\nI knew there had to be a reason that some tests were BLCKSZ/2 and some\nBLCKSZ.\n\nAdded to TODO:\n\n\t* Allow index on tuple greater than 1/2 block size\n\nSeems we have to allow columns over 1/2 block size for now. Most people\nwouldn't index on them.\n\n\n> > Don't know, maybe.\n> \n> Actually we have some problems with indices on text\n> attributes when the content exceeds HALF of the blocksize:\n> \n> FATAL 1: btree: failed to add item to the page\n> \n> It crashes the backend AND seems to corrupt the index! Looks\n> to me that at least the btree code needs to be able to store\n> at minimum two items into one block and painfully fails if it\n> can't.\n> \n> And just another one:\n> \n> pgsql=> create table t1 (a int4, b char(4000));\n> CREATE\n> pgsql=> create index t1_b on t1 (b);\n> CREATE\n> pgsql=> insert into t1 values (1, 'a');\n> \n> TRAP: Failed Assertion(\"!(( itid)->lp_flags & 0x01):\",\n> File: \"nbtinsert.c\", Line: 361)\n> \n> Bruce: One more TODO item!\n> \n> \n> Jan\n> \n> --\n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #========================================= [email protected] (Jan Wieck) #\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 00:39:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "\nIf we get wide tuples, we could just throw all large objects into one\ntable, and have an index on it. 
We can then vacuum it to compact space, etc.\n\n\n> Going toward >8k tuples would be really good, but I suspect we may\n> some difficulties with LO stuffs once we implement it. Also it seems\n> that it's not worth to adapt LOs with newly designed tuples. I think\n> the design of current LOs are so broken that we need to redesign them.\n> \n> o it's slow: accessing a LO need a open() that is not cheap. creating\n> many LOs makes data/base/DBNAME/ directory fat.\n> \n> o it consumes lots of i-nodes\n> \n> o it breaks the tuple abstraction: this makes difficult to maintain\n> the code.\n> \n> I would propose followings for the new version of LO:\n> \n> o create a new data type that represents the LO\n> \n> o when defining the LO data type in a table, it actually points to a\n> LO \"body\" in another place where it is physically stored.\n> \n> o the storage for LO bodies would be a hidden table that contains\n> several LOs, not single one.\n> \n> o we can have several tables for the LO bodies. Probably a LO body\n> table for each corresponding table (where LO data type is defined) is\n> appropreate. \n> \n> o it would be nice to place a LO table on a separate\n> directory/partition from the original table where LO data type is\n> defined, since a LO body table could become huge.\n> \n> Comments? Opinions?\n> ---\n> Tatsuo Ishii\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 00:45:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": ">If we get wide tuples, we could just throw all large objects into one\n>table, and have an on it. We can then vacuum it to compact space, etc.\n\nI thought about that too. But if a table contains lots of LOs,\nscanning it will take a long time. On the other hand, if LOs are\nstored outside the table, scanning time will be shorter as long as we\ndon't need to read the content of each LO type field.\n--\nTatsuo Ishii\n\n\n", "msg_date": "Fri, 09 Jul 1999 14:08:00 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size " }, { "msg_contents": "Bruce Momjian wrote:\n> \n> If we get wide tuples, we could just throw all large objects into one\n> table, and have an on it. We can then vacuum it to compact space, etc.\n\nStoring a 2Gb LO in a table is not a good thing.\n\nVadim\n", "msg_date": "Fri, 09 Jul 1999 13:27:55 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "> >If we get wide tuples, we could just throw all large objects into one\n> >table, and have an on it. We can then vacuum it to compact space, etc.\n> \n> I thought about that too. But if a table contains lots of LOs,\n> scanning of it will take for a long time. On the otherhand, if LOs are\n> stored outside the table, scanning time will be shorter as long as we\n> don't need to read the content of each LO type field.\n\nUse an index to get to the LO's in the table.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 01:29:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > If we get wide tuples, we could just throw all large objects into one\n> > table, and have an on it. We can then vacuum it to compact space, etc.\n> \n> Storing 2Gb LO in table is not good thing.\n> \n> Vadim\n> \n\nAh, but we have segmented tables now. It will auto-split at 1 gig.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 01:29:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > If we get wide tuples, we could just throw all large objects into one\n> > > table, and have an on it. We can then vacuum it to compact space, etc.\n> >\n> > Storing 2Gb LO in table is not good thing.\n> >\n> > Vadim\n> >\n> \n> Ah, but we have segemented tables now. It will auto-split at 1 gig.\n\nWell, now consider update of 2Gb row!\nI worry not due to non-overwriting but about writing\n2Gb log record to WAL - we'll not be able to do it, sure.\n\nIsn't it why Informix restricts tuple len to 32k only?\nAnd the same is what Oracle does.\nBoth of them have ability to use > 1 page for single row,\nbut they have this restriction anyway.\n\nI don't like _arbitrary_ tuple size.\nI vote for some limit. 32K or 64K, at max.\n\nVadim\n", "msg_date": "Fri, 09 Jul 1999 14:05:16 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > > Bruce Momjian wrote:\n> > > >\n> > > > If we get wide tuples, we could just throw all large objects into one\n> > > > table, and have an on it. We can then vacuum it to compact space, etc.\n> > >\n> > > Storing 2Gb LO in table is not good thing.\n> > >\n> > > Vadim\n> > >\n> >\n> > Ah, but we have segemented tables now. It will auto-split at 1 gig.\n> \n> Well, now consider update of 2Gb row!\n> I worry not due to non-overwriting but about writing\n> 2Gb log record to WAL - we'll not be able to do it, sure.\n\nCan't we write just some kind of diff (only changed pages) in WAL,\neither starting at some threshold or just based on the seek/write logic of\nLOs?\n\nIt will add complexity, but having some arbitrary limits seems very\nwrong.\n\nIt will also make indexing LOs more complex, but as we don't currently\nindex \nthem anyway, it's not a big problem yet.\n\nSetting the limit higher (like 16M, where all my current LOs would fit\n:) )\nis just postponing the problems. Does \"who will need more than 640k of\nRAM\"\nsound familiar?\n\n> Isn't it why Informix restrict tuple len to 32k only?\n> And the same is what Oracle does.\n\nDoes anyone know what the limit for Oracle8i is?
As they advertise it\nas a \nreplacement file system among other things, I guess it can't be too low\n- \nI suspect 2G at minimum\n\n> Both of them have ability to use > 1 page for single row,\n> but they have this restriction anyway.\n> \n> I don't like _arbitrary_ tuple size.\n\nWhy not?\n\nIMHO we should allow _arbitrary_ (in reality probably <= MAXINT), but \noptimize for some known size and tell the users that if they exceed it\nthe performance would suffer. \n\nSo when I have 99% of my LOs in the 10k-80k range but have a few 512k-2M\nones \nI can just live with the bigger ones having bad performance instead of\nimplementing an additional LO manager in the frontend too.\n\n> I vote for some limit.\n\nWhy limit?\n\n> 32K or 64K, at max.\n\nWhy so low? Please make it at least configurable, preferably at\nruntime.\n\nAnd if you go that way, please assume this limit (in code) for tuple\nsize only,\nand not in FE/BE protocol - it will make it easier for someone to fix\nthe backend \nto work with larger ones later.\n\nThe LOs should remain load-on-demand anyway, just it should be made more\ntransparent \nfor end-users.\n\n> Vadim\n", "msg_date": "Fri, 09 Jul 1999 10:14:55 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "Tatsuo Ishii wrote:\n\n>\n> Going toward >8k tuples would be really good, but I suspect we may\n> some difficulties with LO stuffs once we implement it. Also it seems\n> that it's not worth to adapt LOs with newly designed tuples. I think\n> the design of current LOs are so broken that we need to redesign them.\n>\n> [... LO stuff deleted ...]\n\n I wasn't talking about a new datatype that can exceed the\n tuple limit. The general tuple split I want will also handle\n it if a row with 40 text attributes of each 1K gets stored.\n That's something different.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 9 Jul 1999 10:27:49 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "Vadim wrote:\n\n>\n> Bruce Momjian wrote:\n> >\n> > > Bruce Momjian wrote:\n> > > >\n> > > > If we get wide tuples, we could just throw all large objects into one\n> > > > table, and have an on it. We can then vacuum it to compact space, etc.\n> > >\n> > > Storing 2Gb LO in table is not good thing.\n> > >\n> > > Vadim\n> > >\n> >\n> > Ah, but we have segemented tables now. It will auto-split at 1 gig.\n>\n> Well, now consider update of 2Gb row!\n> I worry not due to non-overwriting but about writing\n> 2Gb log record to WAL - we'll not be able to do it, sure.\n>\n> Isn't it why Informix restrict tuple len to 32k only?\n> And the same is what Oracle does.\n> Both of them have ability to use > 1 page for single row,\n> but they have this restriction anyway.\n>\n> I don't like _arbitrary_ tuple size.\n> I vote for some limit. 32K or 64K, at max.\n\n To have some limit seems reasonable for me (I've also read\n the other comments). When dealing with regular tuples, first\n off the query to insert or update them will get read into\n memory. Next the querytree with the Const vars is built,\n rewritten, planned. 
Then the tuple is built in memory and\n maybe somewhere else copied (fulltext index trigger). So the\n amount of memory will be allocated many times!\n\n There is some natural limit on the tuple size depending on\n the available swapspace. Not everyone has multiple GB\n swapspace setup. Making it a well known hard limit that\n doesn't hurt even if 20 backends do things simultaneously is\n better.\n\n I vote for a limit too. 64K should be enough.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Fri, 9 Jul 1999 11:19:53 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "> > Ah, but we have segemented tables now. It will auto-split at 1 gig.\n> \n> Well, now consider update of 2Gb row!\n> I worry not due to non-overwriting but about writing\n> 2Gb log record to WAL - we'll not be able to do it, sure.\n> \n> Isn't it why Informix restrict tuple len to 32k only?\n> And the same is what Oracle does.\n> Both of them have ability to use > 1 page for single row,\n> but they have this restriction anyway.\n> \n> I don't like _arbitrary_ tuple size.\n> I vote for some limit. 32K or 64K, at max.\n\nYes, but having it all in one table prevents fopen() call for every\naccess, inode use for every large object, and allows vacuum to clean up\nmultiple versions. Just an idea. I realized your point.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 12:29:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "> > Well, now consider update of 2Gb row!\n> > I worry not due to non-overwriting but about writing\n> > 2Gb log record to WAL - we'll not be able to do it, sure.\n> \n> Can't we write just some kind of diff (only changed pages) in WAL,\n> either starting at some thresold or just based the seek/write logic of\n> LOs?\n> \n> It will add complexity, but having some arbitrary limits seems very\n> wrong.\n> \n> It will also make indexing LOs more complex, but as we don't currently\n> index \n> them anyway, its not a big problem yet.\n\nWell, we do indexing of large objects by using the OS directory code to\nfind a given directory entry.\n\n> Why not ?\n> \n> IMHO we should allow _arbitrary_ (in reality probably <= MAXINT), but \n> optimize for some known size and tell the users that if they exceed it\n> the performance would suffer. \n\nIf they go over a certain size, they can decide to store it in the file\nsystem, as many users are doing now.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 12:32:33 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": " > I don't like _arbitrary_ tuple size.\n > I vote for some limit. 
32K or 64K, at max.\n\n To have some limit seems reasonable for me (I've also read\n the other comments). When dealing with regular tuples, first\n\nIsn't anything other than arbitrary sizes just making us encounter the\nsame problem later. Clearly, there are real hardware limits, but we\nshouldn't build that into the code. It seems to me the solution is to\nhave arbitrary (e.g., hardware driven) limits, document what is\nnecessary to support certain operations, and let the fanatics buy\nmega-systems if they need to support huge tuples. As long as the code\nis optimized for more reasonable situations, there should be no\npenalty.\n\nCheers,\nBrook\n", "msg_date": "Fri, 9 Jul 1999 11:02:14 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "On Fri, 9 Jul 1999, Vadim Mikheev wrote:\n\n> \n> Bruce Momjian wrote:\n> > \n> > > Bruce Momjian wrote:\n> > > >\n> > > > If we get wide tuples, we could just throw all large objects into one\n> > > > table, and have an on it. We can then vacuum it to compact space, etc.\n> > >\n> > > Storing 2Gb LO in table is not good thing.\n> > >\n> > > Vadim\n> > >\n> > \n> > Ah, but we have segemented tables now. It will auto-split at 1 gig.\n> \n> Well, now consider update of 2Gb row!\n> I worry not due to non-overwriting but about writing\n> 2Gb log record to WAL - we'll not be able to do it, sure.\n\nWhat I'm kinda curious about is *why* you would want to store a LO in the\ntable in the first place? And, consequently, as Bruce had\nsuggested...index it? Unless something has changed recently that I\ntotally missed, the only time the index would be used is if a query was\nbased on a) start of string (ie. ^<string>) or b) complete string (ie.\n^<string>$) ...\n\nSo what benefit would an index be on a LO?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Jul 1999 09:04:54 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "At 09:04 28/07/99 -0300, The Hermit Hacker wrote:\n>On Fri, 9 Jul 1999, Vadim Mikheev wrote:\n>\n>> \n>> Bruce Momjian wrote:\n>> > \n>> > > Bruce Momjian wrote:\n>> > > >\n>> > > > If we get wide tuples, we could just throw all large objects into one\n>> > > > table, and have an on it. We can then vacuum it to compact space,\netc.\n>> > >\n>> > > Storing 2Gb LO in table is not good thing.\n>> > >\n>> > > Vadim\n>> > >\n>> > \n>> > Ah, but we have segemented tables now. It will auto-split at 1 gig.\n>> \n>> Well, now consider update of 2Gb row!\n>> I worry not due to non-overwriting but about writing\n>> 2Gb log record to WAL - we'll not be able to do it, sure.\n>\n>What I'm kinda curious about is *why* you would want to store a LO in the\n>table in the first place? And, consequently, as Bruce had\n>suggested...index it? Unless something has changed recently that I\n>totally missed, the only time the index would be used is if a query was\n>based on a) start of string (ie. 
^<string>) or b) complete string (ie.\n>^<string>$) ...\n>\n>So what benefit would an index be on a LO?\n>\n\nSome systems (Dec RDB) won't even let you index the contents of an LO.\nAnyone know what other systems do?\n\nAlso, to repeat question from an earlier post: is there a plan for the BLOB\nimplementation that is available for comment/contribution?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 28 Jul 1999 22:33:32 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" } ]
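To make Tatsuo's hidden LO-body table concrete, a chunked layout would do. A minimal sketch only, with hypothetical names (nothing here is from an actual design), using SQL:

    -- One row per chunk of a large object's contents; the user table's LO
    -- column would store only the loid pointing into this table.
    CREATE TABLE pg_lo_body (
        loid   oid,     -- identifies the large object
        seq    int4,    -- chunk number within the object
        data   bytea    -- one chunk of the object's bytes
    );
    CREATE INDEX pg_lo_body_idx ON pg_lo_body (loid, seq);

This keeps scans of the user table cheap, since only the loid is read, while dead LO chunks can be vacuumed like any other tuples.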
[ { "msg_contents": "\n---------- Forwarded message ----------\nDate: Wed, 7 Jul 1999 19:43:00 +0700 (GMT+0700)\nFrom: Nuchanach Klinjun <[email protected]>\nTo: [email protected]\nSubject: upgrade problem\n\nDear Sir/Madam,\n\nAfter Upgrade Database to the latest version I got some problem like this\n1. ODBC connection \n\tI used to link table from postgresql to my database file (MS\nAccess) but now I cannot and it gives me error \"Invalid field definition\n'accont' in definition of index or relationship\" \n\n2. Cannot execute some query which I could excute before upgrade.\nthe query is complicate but it used to work well both with MS Access\nand with db prompt it returned me error like it terminated my query\nabnormally i.e. \"Memory exhausted in AlloSetAlloc() (#1)\",\n\"No response from the backend, Socket has been closed (#1)\".\n\nmy sql is .. \n\"select date(s.stop) as accdate,sum(date_part('epoch',s.stop-s.start)/3600)\n,sum((s.rate/t.rate)*date_part('epoch',s.stop-s.start)/3600)\n,sum(s.value*100/107),sum(t.balance)\nfrom session s,ticket t \nwhere s.stop is not null and date(s.stop) between '1999/06/01' and\n'1999/06/15' group by accdate;\"\n\n3.Some query work abnormally since upgrade. \n\nI did attached the infomation I have\n\nTable Schema which I use in my query in 2. and I used to link it\ninto my Access Database file.\n\n=> \\d ticket\nTable = ticket\n+----------------------------------+----------------------------------+-------+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n| id | text not null |\nvar |\n| registered | datetime not null |\n8 |\n| lifetime | timespan not null default '@ 180 |\n12 |\n| value | money not null |\n4 |\n| balance | money not null default 0 |\n4 |\n| rate | float8 not null |\n8 |\n| account | text not null |\nvar |\n| free | bool not null default 'f' |\n1 |\n| priority | int4 not null default 0 |\n4 |\n| marker | int4 |\n4 |\n+----------------------------------+----------------------------------+-------+\nIndices: ticket_account\n ticket_pkey\n \n=> \\d session\nTable = session\n+----------------------------------+----------------------------------+-------+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n| id | int4 not null default nextval ( |\n4 |\n| seq | int4 not null default 1 |\n4 |\n| last | bool not null default 'f' |\n1 |\n| nas | text not null |\nvar |\n| sessionid | text not null |\nvar |\n| description | text |\nvar |\n| account | text not null |\nvar |\n| ticket | text not null |\nvar |\n| rate | float8 |\n8 |\n| value | money |\n4 |\n| start | datetime not null default dateti |\n8 |\n| stop | datetime |\n8 |\n| reserved | datetime |\n8 |\n| approved | datetime |\n8 |\n| cancelled | datetime |\n8 |\n| clientip | text |\nvar |\n+----------------------------------+----------------------------------+-------+\nIndices: i_session_stop\n session_account\n session_pkey\n session_ticket \n\nHere's the backtrace:\n\n(gdb) bt\n#0 0xd9a4c in AllocSetReset ()\n#1 0xdaa0a in EndPortalAllocMode ()\n#2 0x19147 in AtCommit_Memory ()\n#3 0x19302 in CommitTransaction ()\n#4 0x19550 in CommitTransactionCommand ()\n#5 0xa8665 in PostgresMain ()\n#6 0x8e4ba in DoBackend ()\n#7 0x8dfbe in BackendStartup ()\n#8 0x8d35e in ServerLoop ()\n#9 0x8cbd7 in PostmasterMain ()\n#10 0x4a102 in main ()\n \naccess=> select version();\nversion\n--------------------------------------------------------------\nPostgreSQL 6.5.0 on 
i386-unknown-freebsd2.2.8, compiled by cc\n\n> PostgreSQL ODBC driver 6.40.00.06 \nset up with valid user name and password, valid port.\n\nThank you very much and I really hope to hear from you really soon.\n\nCheers,\nNuchanach K.\n\n-----------------------------------------\nNuchanach Klinjun\nR&D Project. Internet Thailand\nEmail: [email protected]\n\n\n\n", "msg_date": "Thu, 8 Jul 1999 17:59:03 +0700 (GMT+0700)", "msg_from": "Nuchanach Klinjun <[email protected]>", "msg_from_op": true, "msg_subject": "upgrade problem " } ]
[ { "msg_contents": "[a possible answer to the RedHat 6.0 rpm -ba on Thomas' new src.rpm]\n\nJeff Johnson wrote:\n\n>Lamar Owen wrote:\n> > The error is manifested as a \"bad exit status\" after doing the\n> > recursive chgrp and before doing the recursive chmod of %setup.\n \n> Hmmm, I think I know this one.\n> \n> Rpm attempts to control for the behavior of tar when run by root\n> by dooing chown/chgrp/chmod -R. All would be well, except that\n> chgrp, when presented with a dangling symlink returns a non-zero\n> return code which causes the build to crap out.\n> \n> There are two approaches to a fix:\n> \n> 1) Take the dangling (i.e. the target of the symlink doesn't exist)\n> symlink out of the postgres tar ball.\n> \n> 2) Don't attempt the chgrp while building. You can nuke the macro\n> %_fixgrp in /usr/lib/rpm/macros. You can also add\n> %undefine _fixgrp\n> just before the %setup in the postgres spec file. However, you will\n> need to use the just released rpm-3.02 if you want to do the %undefine\n> successfully.\n> \n> Please post this wherever is appropriate.\n> \n> 73 de Jeff\n> \n> --\n> Jeff Johnson ARS N3NPQ\n> [email protected] ([email protected])\n> Chapel Hill, NC\n\nOk, where does the tarball have a dangling symlink....\nThere are four symlinks in the tarball:\n[root@utility postgresql-6.5]# find -type l -print\n./src/interfaces/odbc/port\n./src/interfaces/odbc/makefiles\n./src/interfaces/odbc/template\n./src/interfaces/odbc/config.h\n[root@utility postgresql-6.5]# ls -lR|grep lrwxrwxrwx\nlrwxrwxrwx 1 root root 24 Jul 8 10:18 config.h ->\n../.././inclu\nde/config.h\nlrwxrwxrwx 1 root root 17 Jul 8 10:18 makefiles ->\n../.././make\nfiles\nlrwxrwxrwx 1 root root 20 Jul 8 10:18 port ->\n../.././include/p\nort\nlrwxrwxrwx 1 root root 16 Jul 8 10:18 template ->\n../.././templ\nate\n\n[root@utility postgresql-6.5]# cd src/interfaces/odbc\n[root@utility odbc]# ls ../../.\nDEVELOPERS config.guess install-sh test\nGNUmakefile.in config.sub interfaces tools\nMakefile configure lextest tutorial\nMakefile.global.in configure.in makefiles utils\nMakefile.shlib corba man win32\nbackend data pl win32.mak\nbin include template\n\nHmmm... seems that config.h and port do not exist -- voila! The dangling\nsymlinks. Can empty port and config.h files be shipped, or will that\nbreak configure (which is what I'm assuming creates these files?)? \n\nThe 6.5-1.beta1 spec file from rawhide handles this using the second\nmethod, although it is incorrectly labeled as being a sparc/alpha fix:\n----from postgresql.spec, from\nftp://rawhide.redhat.com/SRPMS/SRPMS/postgresql-6.5-1.beta1.src.rpm-----\n# XXX work around sparc/alpha dangling symlink problem\n%undefine _fixgroup\n-----------------------------\nWhich is why my rpms, derived from this spec file, built on 6.0, while\nThomas' src.rpm, not derived from this spec file and built on 5.2,\ndidn't build on 6.0. So, there are two possibilities:\n\n1.)\tEliminate the dangling symlinks.\n2.)\tPut the _fixgroup kludge in place in the production SRPM. 
HOWEVER,\nthis ONLY works with rpm >= 3.0.x -- which means a RH 5.2 system running\nrpm 2.5 won't build it -- but we're supposed to update our rpm version\nanyway.\n\nThomas?\n\nAnd, thanks, Jeff!\n\nLamar Owen\nWGCR Internet Radio\nKF4MYT\n", "msg_date": "Thu, 08 Jul 1999 10:44:39 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: [HACKERS] Re: Postgresql 6.5-1 rpms on RedHat 6.0]" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> Ok, where does the tarball have a dangling symlink....\n> There are four symlinks in the tarball:\n> [root@utility postgresql-6.5]# find -type l -print\n> ./src/interfaces/odbc/port\n> ./src/interfaces/odbc/makefiles\n> ./src/interfaces/odbc/template\n> ./src/interfaces/odbc/config.h\n\n> ... So, there are two possibilities:\n\n> 1.)\tEliminate the dangling symlinks.\n> 2.)\tPut the _fixgroup kludge in place in the production SRPM.\n\nThose symlinks should not be in the distribution; they should be\ncreated during \"configure\", which also creates the files/dirs they\npoint to. Unfortunately they were not getting removed by \"make\ndistclean\", so they are present in the 6.5 tarball.\n\nI have fixed \"make distclean\" to remove them, and that fix will be in\n6.5.1, but Thomas evidently built from the 6.5 tarball.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jul 1999 11:38:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: [HACKERS] Re: Postgresql 6.5-1 rpms on RedHat 6.0] " }, { "msg_contents": "Tom Lane wrote:\n> Lamar Owen wrote:\n> > 1.) Eliminate the dangling symlinks.\n \n> Those symlinks should not be in the distribution;\n\n> I have fixed \"make distclean\" to remove them, and that fix will be in\n> 6.5.1, but Thomas evidently built from the 6.5 tarball.\n\nAh, this is the Right Thing to do. So, the SRPM spec file should not be\nkludged to work around them for the 6.5.1 release -- but, to build 6.5\nfrom pristine sources (RedHat policy), the \"%undefine _fixgroup\" hack\nshould be retained, to build under rpm 3.0.2, unless a patch that\nremoves those files is packaged with the 6.5 SRPM/RPMS.\n\nThanks Tom.\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Thu, 08 Jul 1999 11:43:51 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: [HACKERS] Re: Postgresql 6.5-1 rpms on RedHat 6.0]" }, { "msg_contents": "Lamar Owen <[email protected]> writes:\n> Ah, this is the Right Thing to do. 
So, the SRPM spec file should not be\n> kludged to work around them for the 6.5.1 release -- but, to build 6.5\n> from pristine sources (RedHat policy), the \"%undefine _fixgroup\" hack\n> should be retained, to build under rpm 3.0.2, unless a patch that\n> removes those files is packaged with the 6.5 SRPM/RPMS.\n\nThere are several other bugfixes over 6.5 in the RPM already, so I\nsee no good reason not to just remove those four symlinks; it certainly\nseems cleaner than kluging the RPM script...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Jul 1999 13:05:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: [HACKERS] Re: Postgresql 6.5-1 rpms on RedHat 6.0] " }, { "msg_contents": "Tom Lane wrote:\n \n> There are several other bugfixes over 6.5 in the RPM already, so I\n> see no good reason not to just remove those four symlinks; it certainly\n> seems cleaner than kluging the RPM script...\n\n> regards, tom lane\n\nYes, that's quite true.\n\nSpeaking of the rpms, I have just found a few issues that my other\ntesting had not found. Specifically:\n*\tThe data necessary to initdb is in postgresql-devel, not\npostgresql-server (the files in /usr/lib/pgsql, specifically the bki\nsources);\n*\tThere are no static libraries in postgresql-devel (libpq.a, et al --\nthese are normally located in /usr/lib)\n*\tIMHO, a warning should be printed about proper updgrade procedure --\nrpm -U just simply won't work as the rpms (and postgresql) are currently\nimplemented -- and, unfortunately, the rpm -Uvh style is the default\nmethod for most users, as well as RedHat version updates, from 5.2 to\n6.0, for example.\n\nAs far as enhancements go, the postgresql-server rpm could (not\nnecessarily should) check to see if a database structure exists in\n/var/lib/pgsql (if so, move it out of the way), and perform an initdb.\n\nWhere this comes into play is when upgrading postgresql versions using\nrpm -- the rpm uninstall does not blow away the whole PGDATA/base tree\n-- in fact, it leaves _everything_ there. So, to upgrade, you must\neither rm -rf the tree or mv it out of the way -- preferably before\ndoing an initdb.\n\nWhat IS the right way to do this in an automated fashion? Currently, to\nupgrade via rpm (on a box running SysV init, such as RedHat), you must\ndo the following:\n\n1.)\tas postgres, pg_dumpall\n2.)\tas postgres, backup pg_hba.conf\n3.)\tas root, rpm -e all-old-postgresql-rpms (found using rpm -qa|grep\npostgres) (automateable -- rpm -qa|grep postgresql|xargs rpm -e (check\nthat xargs syntax...))\n4.)\tas root, blow away the /var/lib/pgsql tree, taking care not to blow\naway your backup\n5.)\tas root, rpm -i select-new-postgresql-rpms\n6.)\tas postgres, initdb --pglib=/usr/lib/pgsql --pgdata=/var/lib/pgsql\n7.)\tas root, Edit /etc/rc.d/init.d/postgresql as needed (to add -i, FE)\n(to automate this, simply include -i by default, or give user a choice,\nand sed away...)\n8.)\tas root, start postmaster (/etc/rc.d/init.d/postgresql start)\n9.)\tas postgres, psql -e template1 < pg_dumpall-backup-from-step-1\n10.)\tas postgres, restore pg_hba.conf\n11.)\tRestart production tasks, after testing.\n\nHave I left anything out? Is it even desireable to automate this? 
(In my\ncase, I'm going to build a script to keep around to do this...)\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Thu, 08 Jul 1999 14:30:33 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: [HACKERS] Re: Postgresql 6.5-1 rpms on RedHat 6.0]" } ]
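For reference, Lamar's eleven steps mechanize roughly as below. A rough, untested sketch assuming a stock RedHat layout (PGDATA=/var/lib/pgsql, SysV init script) and that the new rpms sit in the current directory:

    #!/bin/sh
    # 1-2: dump everything and save pg_hba.conf while the old server still runs
    su postgres -c 'pg_dumpall > /tmp/pg.dumpall'
    cp /var/lib/pgsql/pg_hba.conf /tmp/pg_hba.conf.bak
    # 3: remove the old packages
    rpm -qa | grep '^postgresql' | xargs rpm -e
    # 4: move the old database tree out of the way
    mv /var/lib/pgsql /var/lib/pgsql.old
    # 5-6: install the new packages and initdb as postgres
    rpm -i postgresql*.rpm
    su postgres -c 'initdb --pglib=/usr/lib/pgsql --pgdata=/var/lib/pgsql'
    # 7-8: (edit the init script for -i if needed) and start the postmaster
    /etc/rc.d/init.d/postgresql start
    # 9-10: reload the dump and restore pg_hba.conf
    su postgres -c 'psql -e template1 < /tmp/pg.dumpall'
    cp /tmp/pg_hba.conf.bak /var/lib/pgsql/pg_hba.conf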
[ { "msg_contents": "Hey developers - \nAnonymous CVS access has been broken for several days. I vaguely remember\nseeing an email address to report postgresql.org admin problems to once,\nlong ago, but I can't seem to find it in my mailing list archives. Where\nshould I copy this too?\n\nwallace$ cvs update\n Fatal error, aborting.\n : no such user\n \n\nTracing the connection indicates that the initial login to cvs works - \nthe error occurs when trying to actually update.\n\n\nAh, here's something from my archives:\n\n>From: The Hermit Hacker <[email protected]>\n>To: Brian P Millett <[email protected]>\n>cc: postgres <[email protected]>\n>Subject: Re: [HACKERS] cvs checkout working?\n>In-Reply-To: <[email protected]>\n>Precedence: bulk\n>X-Mozilla-Status: 8011\n>\n>\n>Check it now and let me know...FreeBSD has an older version that it\n>installs when you upgrade the OS :( I always forget about it...\n>\n\nSo, I'll copy [email protected] on this...\n\nRoss\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Thu, 8 Jul 1999 11:06:43 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "CVS broken" }, { "msg_contents": "\nTry now...\n\n\n\nOn Thu, 8 Jul 1999, Ross J. Reedstrom wrote:\n\n> Hey developers - \n> Anonymous CVS access has been broken for several days. I vaguely remember\n> seeing an email address to report postgresql.org admin problems to once,\n> long ago, but I can't seem to find it in my mailing list archives. Where\n> should I copy this too?\n> \n> wallace$ cvs update\n> Fatal error, aborting.\n> : no such user\n> \n> \n> Tracing the connection indicates that the initial login to cvs works - \n> the error occurs when trying to actually update.\n> \n> \n> Ah, here's something from my archives:\n> \n> >From: The Hermit Hacker <[email protected]>\n> >To: Brian P Millett <[email protected]>\n> >cc: postgres <[email protected]>\n> >Subject: Re: [HACKERS] cvs checkout working?\n> >In-Reply-To: <[email protected]>\n> >Precedence: bulk\n> >X-Mozilla-Status: 8011\n> >\n> >\n> >Check it now and let me know...FreeBSD has an older version that it\n> >installs when you upgrade the OS :( I always forget about it...\n> >\n> \n> So, I'll copy [email protected] on this...\n> \n> Ross\n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 9 Jul 1999 13:11:05 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVS broken" }, { "msg_contents": "On Fri, Jul 09, 1999 at 01:11:05PM -0300, The Hermit Hacker wrote:\n> \n> Try now...\n> \nHurrah! It works again!\n\nThanks,\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Fri, 9 Jul 1999 11:16:33 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CVS broken" } ]
[ { "msg_contents": "Hi,\n\n did anyone compile latest CVS with v1.31\n utils/adt/numutils.c?\n\n Marc activated some range checks in pg_atoi() and now I have\n a very interesting behaviour on a Linux box running gcc 2.8.1\n glibc-2.\n\n Inside of pg_atoi(), the value is read into a long. Comparing\n a small positive long like 24 against INT_MIN returns TRUE -\n dunno how. Putting INT_MIN into another long variable and\n comparing the two returns the expected FALSE - so what's\n going on here? long, int32 and int have all 4 bytes here.\n\n Someone any clue?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 8 Jul 1999 18:14:02 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "\"24\" < INT_MIN returns TRUE ???" }, { "msg_contents": "> Hi,\n> \n> did anyone compile latest CVS with v1.31\n> utils/adt/numutils.c?\n> \n> Marc activated some range checks in pg_atoi() and now I have\n> a very interesting behaviour on a Linux box running gcc 2.8.1\n> glibc-2.\n> \n> Inside of pg_atoi(), the value is read into a long. Comparing\n> a small positive long like 24 against INT_MIN returns TRUE -\n> dunno how. Putting INT_MIN into another long variable and\n> comparing the two returns the expected FALSE - so what's\n> going on here? long, int32 and int have all 4 bytes here.\n\nI just reversed out the patch. It was causing initdb to fail!\n\nI don't understand why it fails either, but have sent the report back to\nthe patch author.\n\nI would love to hear the answer on this one.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 00:01:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"24\" < INT_MIN returns TRUE ???" }, { "msg_contents": ">> Inside of pg_atoi(), the value is read into a long. Comparing\n>> a small positive long like 24 against INT_MIN returns TRUE -\n>> dunno how. Putting INT_MIN into another long variable and\n>> comparing the two returns the expected FALSE - so what's\n>> going on here? long, int32 and int have all 4 bytes here.\n\nI believe the problem is that the compiler is deciding that INT_MIN\nis of type \"unsigned int\" or \"unsigned long\", whereupon the type\npromotion rules will cause the comparison to be done in\nunsigned-long arithmetic. And indeed, 24 < 0x80000000 in unsigned\narithmetic. When you compare two long variables,\nyou get the desired behavior of signed long comparison.\n\nDo you have <limits.h>, and if so how does it define INT_MIN?\n\nThe default INT_MIN provided at the top of numutils.c is clearly\nprone to cause this problem:\n\t#ifndef INT_MIN\n\t#define INT_MIN (-0x80000000L)\n\t#endif\nIf long is 32 bits then the constant 0x80000000L will be classified\nas unsigned long by an ANSI-compliant compiler, whereupon the test\nin pg_atoi fails.\n\nThe two systems I have here both avoid this problem in <limits.h>,\nbut I wonder whether you guys have different or no <limits.h>.\n\nI recommend a two-pronged approach to dealing with this bug:\n\n1. 
The default INT_MIN ought to read\n\t#define INT_MIN (-INT_MAX-1)\nso that it is classified as a signed rather than unsigned long.\n(This is how both of my systems define it in <limits.h>.)\n\n2. The two tests in pg_atoi ought to read\n\tif (l < (long) INT_MIN)\n\t...\n\tif (l > (long) INT_MAX)\n\nThe second change should ensure we get a signed-long comparison\neven if <limits.h> has provided a broken definition of INT_MIN.\nThe first change is not necessary to fix this particular bug\nif we also apply the second change, but I think leaving INT_MIN\nthe way it is is trouble waiting to happen.\n\nBruce, I cannot check this since my system won't show the failure;\nwould you un-reverse-out the prior patch, add these changes, and\nsee if it works for you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Jul 1999 10:49:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"24\" < INT_MIN returns TRUE ??? " }, { "msg_contents": "I said:\n> Do you have <limits.h>, and if so how does it define INT_MIN?\n\nActually, looking closer, it doesn't matter whether you have <limits.h>,\nbecause there is yet a *third* bug in numutils.c:\n\n\t#ifdef HAVE_LIMITS\n\t#include <limits.h>\n\t#endif\n\nshould be\n\n\t#ifdef HAVE_LIMITS_H\n\t...\n\nbecause that is how configure and config.h spell the configuration\nsymbol. Thus, <limits.h> is never included on *any* platform,\nand our broken default INT_MIN is always used.\n\nWhoever wrote this code was not having a good day...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Jul 1999 11:02:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"24\" < INT_MIN returns TRUE ??? " }, { "msg_contents": "> I said:\n> > Do you have <limits.h>, and if so how does it define INT_MIN?\n> \n> Actually, looking closer, it doesn't matter whether you have <limits.h>,\n> because there is yet a *third* bug in numutils.c:\n> \n> \t#ifdef HAVE_LIMITS\n> \t#include <limits.h>\n> \t#endif\n> \n> should be\n> \n> \t#ifdef HAVE_LIMITS_H\n> \t...\n> \n> because that is how configure and config.h spell the configuration\n> symbol. Thus, <limits.h> is never included on *any* platform,\n> and our broken default INT_MIN is always used.\n\nYes, I caught this when you made that comment about the LIMIT test. I\nam checking all the other HAVE_ tests.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 13:07:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"24\" < INT_MIN returns TRUE ???" }, { "msg_contents": "> Yes, I caught this when you made that comment about the LIMIT test. I\n> am checking all the other HAVE_ tests.\n\nOK, patch reapplied, and change made to #define test, and default\nMAX/MIN values. It works fine now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 13:36:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"24\" < INT_MIN returns TRUE ???" } ]
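Tom's promotion analysis is easy to demonstrate standalone. A small C sketch: on a platform where long is 32 bits (as on the Linux box above), this prints 1 then 0; on a 64-bit long it prints 0 twice:

    #include <stdio.h>

    #define BAD_INT_MIN  (-0x80000000L)     /* 0x80000000L is unsigned long under ANSI rules */
    #define GOOD_INT_MIN (-2147483647L - 1) /* i.e. (-INT_MAX-1); stays signed */

    int main(void)
    {
        long l = 24;
        /* done in unsigned long arithmetic: 24 < 0x80000000 is "true" */
        printf("%d\n", l < BAD_INT_MIN);
        /* ordinary signed comparison, as intended */
        printf("%d\n", l < GOOD_INT_MIN);
        return 0;
    }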
[ { "msg_contents": "I added stuff to the TODO list to reduce my mailbox size, but now I am\ngetting more messages than ever. What have I done? :-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jul 1999 12:24:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Mailing list volume" }, { "msg_contents": "On Thu, 8 Jul 1999, Bruce Momjian wrote:\n\n> I added stuff to the TODO list to reduce my mailbox size, but now I am\n> getting more messages than ever. What have I done? :-)\n\nIf someone wishes to suggest a split of -hackers, I'm more then willing to\ndo so, just not sure *where* to split it :(\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 9 Jul 1999 13:11:56 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mailing list volume" }, { "msg_contents": "> On Thu, 8 Jul 1999, Bruce Momjian wrote:\n> \n> > I added stuff to the TODO list to reduce my mailbox size, but now I am\n> > getting more messages than ever. What have I done? :-)\n> \n> If someone wishes to suggest a split of -hackers, I'm more then willing to\n> do so, just not sure *where* to split it :(\n\nIt was only a joke. No way to split it, and even if you did, I would\nhave to be on both lists.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 13:08:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Mailing list volume" }, { "msg_contents": "On Fri, 9 Jul 1999, Bruce Momjian wrote:\n\n> > On Thu, 8 Jul 1999, Bruce Momjian wrote:\n> > \n> > > I added stuff to the TODO list to reduce my mailbox size, but now I am\n> > > getting more messages than ever. What have I done? :-)\n> > \n> > If someone wishes to suggest a split of -hackers, I'm more then willing to\n> > do so, just not sure *where* to split it :(\n> \n> It was only a joke. No way to split it, and even if you did, I would\n> have to be on both lists.\n\nYa, but at least it makes it easier to filter :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 9 Jul 1999 23:19:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Mailing list volume" } ]
[ { "msg_contents": "[As part of my discussion with Jeff Johnson at RedHat regarding rpms, he\nposed the following question, which I am unable to answer. His e-mail\naddress is [email protected].]\n\nJeff Johnson wrote:\n \n> Do you know if 6.5.1 will run on alpha? We (Red Hat) will probably\n> put out an errata for postgresql as soon as it runs on alpha.\n> \n> My latest information is that there is still a problem with spinlocks on\n> alpha. If acceess to hardware is a problem, then we might be able to help.\n\nThanks to all. I saw the advice to turn off optimization (-o0), and\nalready forwarded that info to him.\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Thu, 08 Jul 1999 12:42:09 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql 6.5.1 and alpha -- a question from RedHat" } ]
[ { "msg_contents": "\nBruce, \n\nMy email (below) to the pgsql-ports list with my workaround patch was\nbounced, but you were cc'ed also. Could you forward the appropriate info\nto whomever would appreciate it most?\n\nThanks,\nDavid\n\n-- \n/==============================\\\n| David Mansfield |\n| [email protected] |\n\\==============================/\n\n---------- Forwarded message ----------\n\n> \n> Jan, this is yours.\n> \n\nActually, I made a 'workaround' fix to this problem (after posting this\nbug report). I'll append the patch. I don't pretend to understand the\ninternals of this system, but my guess is that a simple statement like\n'notify xyz' went through the planner, got an spi_plan, but the 'plan\nlist' was empty. So here's my patch, which may be fixing a symptom, not\nthe cause, YMMV:\n\n--- pl_exec.c~ Wed May 26 03:07:39 1999\n+++ pl_exec.c Fri Jun 25 11:00:53 1999\n@@ -2482,6 +2482,10 @@\n \n plan = (Plan *) lfirst(spi_plan->ptlist);\n \n+ /* it would seem as though plan can be null... --DAVID */\n+ if (plan == NULL)\n+ return;\n+\n /* ----------\n * 2. It must be a RESULT plan --> no scan's required\n * ----------\n\n\n\n\n", "msg_date": "Thu, 8 Jul 1999 13:15:36 -0400 (EDT)", "msg_from": "David Mansfield <[email protected]>", "msg_from_op": true, "msg_subject": "RE: (bounced, help me!) [PORTS] Port Bug Report: calling notify in\n\tpl/pgsql proc causes core dump" } ]
[ { "msg_contents": "\nHello,\n\nI have a product which uses Postgres 6.3. I believe it really is 6.3 and\nnot 6.3.1 or 6.3.2. I know this version is over a year old, and I've\ntried to convince the other developers to upgrade, but they hadn't any\nproblems at all with the database and didn't want to introduce any new\nvariables.\n\nA problem I've seen recently are \"spurious\" transaction aborted errors.\nThe exact message is: transaction aborted -- queries ignored until END.\nI don't why they are happening, and I haven't yet been able to track down\nwhat part of the code is emitting them. They seem to happen randomly\nand not all that frequently, so they are hard to reproduce. Is it possible\nthat multiple processes accessing the database could trigger this message?\n\nAnother quick question while I have your attention: several places in our\nclient code there are commits being done \"early\" with comments like\n\"clear read locks\". It seems, though, that you can never get blocked by a\ntransaction you started yourself, so I'm trying to figure out why the\nprevious set of developers felt it necessary to perform all of these\nearly commits. For the most part our Postgres instance only has one\nclient, so the need to clear read locks is puzzling.\n\nAny help with these issues would be greatly appreciated!\n\nErik Rantapaa\[email protected]\n\n", "msg_date": "Thu, 8 Jul 1999 13:34:12 -0500", "msg_from": "Erik Rantapaa <[email protected]>", "msg_from_op": true, "msg_subject": "6.3 spurious transaction aborted problem" } ]
[ { "msg_contents": "Hello,\n\nI have a product which uses Postgres 6.3. I believe it really is 6.3 and\nnot 6.3.1 or 6.3.2. I know this version is over a year old, and I've\ntried to convince the other developers to upgrade, but they hadn't any\nproblems at all with the database and didn't want to introduce any new\nvariables.\n\nA problem I've seen recently are \"spurious\" transaction aborted errors.\nThe exact message is: transaction aborted -- queries ignored until END.\nI don't why they are happening, and I haven't yet been able to track down\nwhat part of the code is emitting them. They seem to happen randomly\nand not all that frequently, so they are hard to reproduce. Is it possible\nthat multiple processes accessing the database could trigger this message?\n\nAnother quick question while I have your attention: several places in our\nclient code there are commits being done \"early\" with comments like\n\"clear read locks\". It seems, though, that you can never get blocked by a\ntransaction you started yourself, so I'm trying to figure out why the\nprevious set of developers felt it necessary to perform all of these\nearly commits. For the most part our Postgres instance only has one\nclient, so the need to clear read locks is puzzling.\n\nAny help with these issues would be greatly appreciated!\n\nErik Rantapaa\[email protected]\n\n", "msg_date": "Thu, 8 Jul 1999 17:04:25 -0500", "msg_from": "Erik Rantapaa <[email protected]>", "msg_from_op": true, "msg_subject": "6.3 spurious transaction aborted problem" } ]
[ { "msg_contents": "I posted this to the wrong list a while back, and only got one response. I'd be interested in a more general set of comments if possible:\n\nGiven the various complexities of the optimizer, would there be any way of allowing developers to specify strategy, or give hints. This is allowed in Dec (now Oracle) RDB, and in some fashion in both SQL/Server and Sybase.\n\nThe sorts of things that might be desirable are: which indexes to join on and the order of joining the tables.\n\nDigital Equipment Corp used to maintain that the optimizer should know best, and that developers make more mistakes, and that is true. But an experienced DBA, along with details of common complex transactions, can almost always do better than an optimizer.\n\nSadly, I don't know much about optimizers, but the Rdb one seems to do quite a lot of clever things, and for the most part works. But once a piece of code get sufficiently complex, the chance of 'eccentric' behavior under some circumstances increases. I presume that for the most part these special cases are found in the beta phase, and removed.\n\nBut some will still get through, and bugs will also get through. *This* is where 'hand-tuned' queries are most useful. I appreciate that as new features are added to the back end (and as tables grow), fixed query strategies are a liability. But in my view, they do have a place.\n\nFor those of you not familiar with Rdb 'Outlines' (the way it allows you to specify stretegies), it is *something* like:\n\n1. Do an 'Explain Select...', and save the output. This is formatted nicely. (Rdb doesn't really have 'explain', but the idea is the same).\n\n2. Edit the output to reflect the query strategy you would like to use.\n\n3. Do a 'Create Outline...' statement using the above strategy details.\n\nThe output from step 1 also has a hash value for the input query. After step 3 is run, any query with the same hash value will invoke the outline (or at least consider it, depending on options in the 'create outline' statement).\n\nSanity checks are also performed at execution time to make sure the Outline makes sense in the query context.\n\nI'm not saying this is the ideal approach, but it demonstrates at least one technique.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 09 Jul 1999 10:55:35 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "Saving Optimizer Strategies?" } ]
[ { "msg_contents": "> At 01:05 AM 7/8/99 -0400, you wrote:\n> >> I get the same for indexing TIMESTAMP in version 6.4.2. Has this been\n> >> fixed in 6.5?\n> >> \n> >\n> >No. DATETIME works, though.\n> \n> \n> Not really, since I need to access it from JDBC and the\n> ResultSet.getTimestamp() method doesn't work with DATETIME columns (at\n> least under my installation of 6.4.2).\n> \n> Thanks, anyway.\n> \n\nPeter, we are getting requests for DATETIME in jdbc. Can you add it?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Jul 1999 23:59:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] Index on Type Numeric" }, { "msg_contents": "> Peter, we are getting requests for DATETIME in jdbc. Can you add it?\n\nIf it matters...\n\nI'm planning on folding datetime into timestamp for the next release\n(unless there are objections). Both names will be synonyms, but the\nunderlying code will be say \"timestamp\", and will look like the\ncurrent \"datetime\".\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 27 Jul 1999 05:54:13 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] Index on Type Numeric" } ]
[ { "msg_contents": ">\n>I knew there had to be a reason that some tests where BLCKSZ/2 and some\n>BLCKSZ.\n>\n>Added to TODO:\n>\n> * Allow index on tuple greater than 1/2 block size\n>\n>Seems we have to allow columns over 1/2 block size for now. Most people\n>wouln't index on them.\n\n\nSince an index header page has to point to at least 2 other leaf or\nheader pages, it stores at least 2 keys per page.\n\nI would alter the todo to say:\n\n* fix btree to give a useful elog when key > 1/2 (page - overhead) and not\nabort\n\nto fix the:\n>\n> FATAL 1: btree: failed to add item to the page\n>\n\nA key of more than 4k will want a more efficient index type than btree\nfor such data anyway.\n\nAndreas\n\n", "msg_date": "Fri, 9 Jul 1999 18:01:15 +0200", "msg_from": "\"Zeugswetter Andreas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" }, { "msg_contents": "Done. Thanks.\n\n> >\n> >I knew there had to be a reason that some tests where BLCKSZ/2 and some\n> >BLCKSZ.\n> >\n> >Added to TODO:\n> >\n> > * Allow index on tuple greater than 1/2 block size\n> >\n> >Seems we have to allow columns over 1/2 block size for now. Most people\n> >wouln't index on them.\n> \n> \n> Since an index header page has to point to at least 2 other leaf or\n> header pages, it stores at least 2 keys per page.\n> \n> I would alter the todo to say:\n> \n> * fix btree to give a useful elog when key > 1/2 (page - overhead) and not\n> abort\n> \n> to fix the:\n> >\n> > FATAL 1: btree: failed to add item to the page\n> >\n> \n> A key of more than 4k will want a more efficient index type than btree\n> for such data anyway.\n> \n> Andreas\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 14:01:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" } ]
[ { "msg_contents": "I decided to look at some system statistics (DISK IO to be specific), so I\nreran the regression test on my RH5.2 box. I was surprised when it failed\nafter createlang was called for plpgsql. It should be a quick fix for Jan.\nEither fix createlang to exit with a 0 status on language already installed:\n143c143\n< exit 1\n---\n> exit 0\n\nOr fix it to exit with some other status and test for that status in the\nRegression GNUMakefile.\n\nHope this helps,\n\tDEJ\n", "msg_date": "Fri, 9 Jul 1999 12:00:20 -0500 ", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Regression Test fail to run if PLPGSQL in template1" }, { "msg_contents": "> I decided to look at some system statistics (DISK IO to be specific), so I\n> reran the regression test on my RH5.2 box. I was surprised when it failed\n> after createlang was called for plpgsql. It should be a quick fix for Jan.\n> Either fix createlang to exit with a 0 status on language already installed:\n> 143c143\n> < exit 1\n> ---\n> > exit 0\n> \n> Or fix it to exit with some other status and test for that status in the\n> Regression GNUMakefile.\n\ncreatelang now returns 2 for languages already exists, and regression\nnow allows 2 as a valid return value for createlang. Please test.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 13:57:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Regression Test fail to run if PLPGSQL in template1" } ]
[ { "msg_contents": ">> > > Storing 2Gb LO in table is not good thing.\n\nat least vacuum and sequential scan will need to read it,\nso I agree storing a large LO in the row is a no-no.\n\n>> Well, now consider update of 2Gb row!\n>> I worry not due to non-overwriting but about writing\n>> 2Gb log record to WAL - we'll not be able to do it, sure.\n\nThis is imho no different than with an external LO, since\nfor a rollforward we need the new value one way or another.\nI don't see a special problem other than performance.\n\nInformix has many ways to configure LO storage, 2 of which are:\n1. store LO in the Tablespace (then all changes are written\nto the Transaction log directly, and all LO IO is buffered)\nLO's are always stored on separate pages in this tablespace,\nand not with the row.\n2. store LO in a separate blobspace\nWhat Informix then does is to not write LO changes to the log,\nonly a reference, and the process that backs up the logs\nthen also reads the new LO's and writes them to tape.\nIn this setup all LO IO bypasses the bufferpool and is\nsynchronous.\n\n>\n>Can't we write just some kind of diff (only changed pages) in WAL,\n>either starting at some thresold or just based the seek/write logic of\n>LOs?\n>\n>It will add complexity, but having some arbitrary limits seems very\n>wrong.\n\nThe same holds true for the whole row. Only the changed columns\nwould need to go to the log. Consider a refcount and a large text column.\nWe would not want to log the text column with 4k if only the 4 byte refcount\nchanged.\n\nAndreas\n\n", "msg_date": "Fri, 9 Jul 1999 19:28:15 +0200", "msg_from": "\"Zeugswetter Andreas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Arbitrary tuple size" } ]
[ { "msg_contents": "[Man, I am applying all the fixes. If there is a problem with 6.5.1,\nthey are going to know it was me.]\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 14:00:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "6.5.1" }, { "msg_contents": "On Fri, 9 Jul 1999, Bruce Momjian wrote:\n\n> Date: Fri, 9 Jul 1999 14:00:10 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: PostgreSQL-development <[email protected]>\n> Subject: [HACKERS] 6.5.1\n> \n> [Man, I am applying all the fixes. If there is a problem with 6.5.1,\n> they are going to know it was me.]\n\nJust updated cvs and source dosn't compiled on my Linux box 2.0.37\nusing egcs 1.12 release:\n\nmake[2]: Entering directory /home/postgres/cvs/pgsql/src/backend/access'\nmake -C common SUBSYS.o\nmake[3]: Entering directory /home/postgres/cvs/pgsql/src/backend/access/common'\ngcc -I../../../include -I../../../backend -O2 -mpentium -Wall -Wmissing-prototypes -I../.. -c heaptuple.c -o heaptuple.o\nIn file included from heaptuple.c:22:\n../../../include/access/heapam.h:30: parse error before \u0014ime_t'\n../../../include/access/heapam.h:30: warning: no semicolon at end of struct or union\n../../../include/access/heapam.h:31: warning: type defaults to \tnt' in declaration of \focal_reset_timestamp'\n../../../include/access/heapam.h:31: warning: data definition has no type or storage class\n../../../include/access/heapam.h:32: parse error before \fast_request_timestamp'\n../../../include/access/heapam.h:32: warning: type defaults to \tnt' in declaration of \fast_request_timestamp'\n../../../include/access/heapam.h:32: warning: data definition has no type or storage class\n../../../include/access/heapam.h:79: parse error before }'\n../../../include/access/heapam.h:79: warning: type defaults to \tnt' in declaration of \beapAccessStatisticsData'\n../../../include/access/heapam.h:79: warning: data definition has no type or storage class\n../../../include/access/heapam.h:81: parse error before *'\n../../../include/access/heapam.h:81: warning: type defaults to \tnt' in declaration of \beapAccessStatistics'\n../../../include/access/heapam.h:81: warning: data definition has no type or storage class\n../../../include/access/heapam.h:238: parse error before \beap_access_stats'\n../../../include/access/heapam.h:238: warning: type defaults to \tnt' in declaration of \beap_access_stats'\n../../../include/access/heapam.h:238: warning: data definition has no type or storage class\n../../../include/access/heapam.h:284: parse error before \u0013tats'\nmake[3]: *** [heaptuple.o] Error 1\n\n\n\tRegards,\n\t\tOleg\n\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 9 Jul 1999 23:48:59 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1" }, { "msg_contents": "> On Fri, 9 Jul 1999, Bruce Momjian wrote:\n> \n> > Date: Fri, 9 Jul 1999 14:00:10 -0400 (EDT)\n> > From: Bruce Momjian <[email protected]>\n> > To: PostgreSQL-development <[email protected]>\n> > Subject: [HACKERS] 6.5.1\n> > \n> > [Man, I am applying all the fixes. If there is a problem with 6.5.1,\n> > they are going to know it was me.]\n> \n> Just updated cvs and source dosn't compiled on my Linux box 2.0.37\n> using egcs 1.12 release:\n> \n> make[2]: Entering directory /home/postgres/cvs/pgsql/src/backend/access'\n> make -C common SUBSYS.o\n> make[3]: Entering directory /home/postgres/cvs/pgsql/src/backend/access/common'\n> gcc -I../../../include -I../../../backend -O2 -mpentium -Wall -Wmissing-prototypes -I../.. -c heaptuple.c -o heaptuple.o\n> In file included from heaptuple.c:22:\n> ../../../include/access/heapam.h:30: parse error before \u0014ime_t'\n> ../../../include/access/heapam.h:30: warning: no semicolon at end of struct or union\n> ../../../include/access/heapam.h:31: warning: type defaults to \tnt' in declaration of \focal_reset_timestamp'\n> ../../../include/access/heapam.h:31: warning: data definition has no type or storage class\n> ../../../include/access/heapam.h:32: parse error before \fast_request_timestamp'\n> ../../../include/access/heapam.h:32: warning: type defaults to \tnt' in declaration of \fast_request_timestamp'\n> ../../../include/access/heapam.h:32: warning: data definition has no type or storage class\n> ../../../include/access/heapam.h:79: parse error before }'\n> ../../../include/access/heapam.h:79: warning: type defaults to \tnt' in declaration of \beapAccessStatisticsData'\n> ../../../include/access/heapam.h:79: warning: data definition has no type or storage class\n> ../../../include/access/heapam.h:81: parse error before *'\n> ../../../include/access/heapam.h:81: warning: type defaults to \tnt' in declaration of \beapAccessStatistics'\n> ../../../include/access/heapam.h:81: warning: data definition has no type or storage class\n> ../../../include/access/heapam.h:238: parse error before \beap_access_stats'\n> ../../../include/access/heapam.h:238: warning: type defaults to \tnt' in declaration of \beap_access_stats'\n> ../../../include/access/heapam.h:238: warning: data definition has no type or storage class\n> ../../../include/access/heapam.h:284: parse error before \u0013tats'\n> make[3]: *** [heaptuple.o] Error 1\n\nOleg, you are not going to catch me this easily. :-)\n\nLooks like a problem on your end. Try removing heapam.h and re-cvs'ing\nit.\n\nI just did a:\n\n\t#$ rm heapam.h \n\t#$ pgcvs update heapam.h\n\nand line 30 looks fine:\n\n time_t init_global_timestamp; /* time global statistics started */\n time_t local_reset_timestamp; /* last time local reset was done */\n\t\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 16:10:19 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1" }, { "msg_contents": "yOn Fri, 9 Jul 1999, Bruce Momjian wrote:\n\n> Date: Fri, 9 Jul 1999 16:10:19 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] 6.5.1\n> \n>> Oleg, you are not going to catch me this easily. :-)\n> \n> Looks like a problem on your end. Try removing heapam.h and re-cvs'ing\n> it.\n> \n> I just did a:\n> \n> \t#$ rm heapam.h \n> \t#$ pgcvs update heapam.h\n> \n> and line 30 looks fine:\n> \n> time_t init_global_timestamp; /* time global statistics started */\n> time_t local_reset_timestamp; /* last time local reset was done */\n\nHmm, removed heapam.h, resynced source, checked heapam.h, see nothing wrong. \nBut the problem persists :-(\nWill try FreeBSD, the same problem . Compiler is the same: egsc 1.12 release\n6.5 was compiled fine on both platforms\n\n\tRegards,\n\t\tOleg\n> \t\n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 10 Jul 1999 00:58:42 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1" }, { "msg_contents": "> > I just did a:\n> > \n> > \t#$ rm heapam.h \n> > \t#$ pgcvs update heapam.h\n> > \n> > and line 30 looks fine:\n> > \n> > time_t init_global_timestamp; /* time global statistics started */\n> > time_t local_reset_timestamp; /* last time local reset was done */\n> \n> Hmm, removed heapam.h, resynced source, checked heapam.h, see nothing wrong. \n> But the problem persists :-(\n> Will try FreeBSD, the same problem . Compiler is the same: egsc 1.12 release\n> 6.5 was compiled fine on both platforms\n\nOK, what do you see in those files. Do you see a ^T there? I have just\nrecompiled everything here, and have no problems. Strange.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 17:15:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1" }, { "msg_contents": "On Fri, 9 Jul 1999, Bruce Momjian wrote:\n\n> Date: Fri, 9 Jul 1999 17:15:49 -0400 (EDT)\n> From: Bruce Momjian <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] 6.5.1\n> \n> > > I just did a:\n> > > \n> > > \t#$ rm heapam.h \n> > > \t#$ pgcvs update heapam.h\n> > > \n> > > and line 30 looks fine:\n> > > \n> > > time_t init_global_timestamp; /* time global statistics started */\n> > > time_t local_reset_timestamp; /* last time local reset was done */\n> > \n> > Hmm, removed heapam.h, resynced source, checked heapam.h, see nothing wrong. 
\n> > But the problem persists :-(\n> > Will try FreeBSD, the same problem . Compiler is the same: egsc 1.12 release\n> > 6.5 was compiled fine on both platforms\n> \n> OK, what do you see in those files. Do you see a ^T there? I have just\n> recompiled everything here, and have no problems. Strange.\n\nNothing strange in heapam.h I just tried standard gcc 2.7.2.3 and \nended with the same problem. Probably configure problem on Linux system\n\nOleg\n\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 10 Jul 1999 02:06:35 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> Nothing strange in heapam.h I just tried standard gcc 2.7.2.3 and \n> ended with the same problem. Probably configure problem on Linux system\n\nI'm seeing no problem with cvs sources from yesterday evening. I think\nyou must have a corrupted copy of one of the files --- not heapam.h,\nevidently, but maybe something it depends on. Try removing and\nrefetching everything that was pulled by your last cvs run.\n\nThis isn't the first time we've seen this sort of report. Perhaps\n'cvs update' is subject to file damage over a flaky connection?\nI wonder if there's anything we can do to increase its reliability.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Jul 1999 11:06:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 " }, { "msg_contents": "On Sat, 10 Jul 1999, Tom Lane wrote:\n\n> Date: Sat, 10 Jul 1999 11:06:51 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Bruce Momjian <[email protected]>,\n> PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] 6.5.1 \n> \n> Oleg Bartunov <[email protected]> writes:\n> > Nothing strange in heapam.h I just tried standard gcc 2.7.2.3 and \n> > ended with the same problem. Probably configure problem on Linux system\n> \n> I'm seeing no problem with cvs sources from yesterday evening. I think\n> you must have a corrupted copy of one of the files --- not heapam.h,\n> evidently, but maybe something it depends on. Try removing and\n> refetching everything that was pulled by your last cvs run.\n\nStill no luck :-( I did fresh cvs checkout.\n\n> \n> This isn't the first time we've seen this sort of report. 
Perhaps\n> 'cvs update' is subject to file damage over a flaky connection?\n> I wonder if there's anything we can do to increase its reliability.\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 11 Jul 1999 00:44:02 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 " }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n>> I'm seeing no problem with cvs sources from yesterday evening. I think\n>> you must have a corrupted copy of one of the files --- not heapam.h,\n>> evidently, but maybe something it depends on. Try removing and\n>> refetching everything that was pulled by your last cvs run.\n\n> Still no luck :-( I did fresh cvs checkout.\n\nI just did one too, and diffed it against what I had before.\nThere's still nothing that looks broken.\n\nAfter looking again at your message, I wonder whether the rest of us\nare chasing the wrong idea. The message you sent looked to be corrupted\ntext, because it mentioned '^Time_t' and so forth. But now I wonder\nwhether that error wasn't just in your cutting and pasting of the\nerror message. If we take the messages at face value they seem to\nindicate that type time_t is not known to the compiler when it processes\nheapam.h, which would make sense if <time.h> hasn't been included yet.\n\nAnd, right offhand, I'm not seeing where <time.h> gets included before\nheapam.h is read.\n\nHas anyone changed anything that might affect where <time.h> gets\nincluded? Perhaps this is a configuration problem.\n\nOleg, how long ago did you last pull a working fileset?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Jul 1999 17:51:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 " }, { "msg_contents": "I wrote:\n> And, right offhand, I'm not seeing where <time.h> gets included before\n> heapam.h is read.\n\nI dug into this and found that on my own machine, <sys/time.h> is pulled\nin by <arpa/inet.h> which is pulled in by config.h (if the right\nconfiguration symbols are defined). It looks to me like there is\nnoplace that explicitly pulls in <time.h> before heapam.h is read.\n\nIn short, what we've got here is code that only works because of\ninterdependencies among system headers. Not too portable.\n\nI added \"#include <time.h>\" to heapam.h, which I think will fix Oleg's\nproblem, but I'm a little bit mystified why we didn't find this long\nago. Someone must have removed an #include somewhere that covered up\nthe problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Jul 1999 18:16:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 " }, { "msg_contents": "One thing I want to do for 6.6 is make sure each include file has the\nproper includes to compile just itself, and try to remove extra\nnon-system includes in the C files.\n\n> I wrote:\n> > And, right offhand, I'm not seeing where <time.h> gets included before\n> > heapam.h is read.\n> \n> I dug into this and found that on my own machine, <sys/time.h> is pulled\n> in by <arpa/inet.h> which is pulled in by config.h (if the right\n> configuration symbols are defined). 
It looks to me like there is\n> noplace that explicitly pulls in <time.h> before heapam.h is read.\n> \n> In short, what we've got here is code that only works because of\n> interdependencies among system headers. Not too portable.\n> \n> I added \"#include <time.h>\" to heapam.h, which I think will fix Oleg's\n> problem, but I'm a little bit mystified why we didn't find this long\n> ago. Someone must have removed an #include somewhere that covered up\n> the problem...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 10 Jul 1999 21:59:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1" }, { "msg_contents": "> One thing I want to do for 6.6 is make sure each include file has the\n> proper includes to compile just itself, and try to remove extra\n> non-system includes in the C files.\n\nI realize not to touch the system includes, because just because my OS\ndoesn't need it, doesn't mean others don't.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 10 Jul 1999 22:47:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1" }, { "msg_contents": "On Sat, 10 Jul 1999, Tom Lane wrote:\n\n> Date: Sat, 10 Jul 1999 17:51:00 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: Bruce Momjian <[email protected]>,\n> PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] 6.5.1 \n> \n> Oleg Bartunov <[email protected]> writes:\n> >> I'm seeing no problem with cvs sources from yesterday evening. I think\n> >> you must have a corrupted copy of one of the files --- not heapam.h,\n> >> evidently, but maybe something it depends on. Try removing and\n> >> refetching everything that was pulled by your last cvs run.\n> \n> > Still no luck :-( I did fresh cvs checkout.\n> \n> I just did one too, and diffed it against what I had before.\n> There's still nothing that looks broken.\n> \n> After looking again at your message, I wonder whether the rest of us\n> are chasing the wrong idea. The message you sent looked to be corrupted\n> text, because it mentioned '^Time_t' and so forth. But now I wonder\n> whether that error wasn't just in your cutting and pasting of the\n> error message. If we take the messages at face value they seem to\n\nOoh, soorry. This is known problem (at least for me) with cut-n-paste\nin xterm !\n\n> indicate that type time_t is not known to the compiler when it processes\n> heapam.h, which would make sense if <time.h> hasn't been included yet.\n> \n\nSure, something is broken in configure\n\n> And, right offhand, I'm not seeing where <time.h> gets included before\n> heapam.h is read.\n> \n> Has anyone changed anything that might affect where <time.h> gets\n> included? 
Perhaps this is a configuration problem.\n> \n> Oleg, how long ago did you last pull a working fileset?\n\nFirst time I noticed the problem was about 2 weeks ago.\n\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 11 Jul 1999 09:57:03 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 " }, { "msg_contents": "> I just did one too, and diffed it against what I had before.\n> There's still nothing that looks broken.\n> \n> After looking again at your message, I wonder whether the rest of us\n> are chasing the wrong idea. The message you sent looked to be corrupted\n> text, because it mentioned '^Time_t' and so forth. But now I wonder\n> whether that error wasn't just in your cutting and pasting of the\n> error message. If we take the messages at face value they seem to\n> indicate that type time_t is not known to the compiler when it processes\n> heapam.h, which would make sense if <time.h> hasn't been included yet.\n> \n> And, right offhand, I'm not seeing where <time.h> gets included before\n> heapam.h is read.\n> \n> Has anyone changed anything that might affect where <time.h> gets\n> included? Perhaps this is a configuration problem.\n> \n> Oleg, how long ago did you last pull a working fileset?\n\nOK, I think I have found the cause. At one point, include/access/htup.h\nhad included #include <utils/nabstime.h>, though it did not need that\nfile. I removed the include because it was not needed by that file. \nNow, it turns out other files needed it. I am putting it back in, and\nthis will all be cleaned up by 6.6.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jul 1999 09:31:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1" } ]
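Bruce's goal that each header compiles by itself can be checked mechanically; a sketch, run from src/, assuming gcc will read a translation unit from stdin (the loop is illustrative, not project tooling, and headers that legitimately presume postgres.h will also be flagged):

    for h in include/access/*.h; do
        echo "#include \"$h\"" |
            gcc -I include -I backend -x c -c -o /dev/null - 2>/dev/null ||
            echo "not self-contained: $h"
    done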
[ { "msg_contents": "Don't worry about it Bruce, we'd blame you anyway. :)\n\t-DEJ\n\n> -----Original Message-----\n> From:\tBruce Momjian [SMTP:[email protected]]\n> Sent:\tFriday, July 09, 1999 1:00 PM\n> To:\tPostgreSQL-development\n> Subject:\t[HACKERS] 6.5.1\n> \n> [Man, I am applying all the fixes. If there is a problem with 6.5.1,\n> they are going to know it was me.]\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 15:37:06 -0500 ", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] 6.5.1" }, { "msg_contents": "> Don't worry about it Bruce, we'd blame you anyway. :)\n> \t-DEJ\n\nWhat a group of guys. :-)\n\n\n> \n> > -----Original Message-----\n> > From:\tBruce Momjian [SMTP:[email protected]]\n> > Sent:\tFriday, July 09, 1999 1:00 PM\n> > To:\tPostgreSQL-development\n> > Subject:\t[HACKERS] 6.5.1\n> > \n> > [Man, I am applying all the fixes. If there is a problem with 6.5.1,\n> > they are going to know it was me.]\n> > \n> > -- \n> > Bruce Momjian | http://www.op.net/~candle\n> > [email protected] | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 17:13:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1" } ]
[ { "msg_contents": "> > Hmm, removed heapam.h, resynced source, checked heapam.h, see nothing wrong. \n> > But the problem persists :-(\n> > Will try FreeBSD, the same problem . Compiler is the same: egsc 1.12 release\n> > 6.5 was compiled fine on both platforms\n> \n> OK, what do you see in those files. Do you see a ^T there? I have just\n> recompiled everything here, and have no problems. Strange.\n\nHonest, guys, I didn't introduce this bug. Honest. No, Marc, noooo... \n:-)\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 17:24:27 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1" } ]
[ { "msg_contents": " 11) What is configure all about?\n \n The files configure and configure.in are part of the GNU autoconf\n package. Configure allows us to test for various capabilities of the\n OS, and to set variables that can then be tested in C programs and\n Makefiles. Autoconf is installed on the PostgreSQL main server. To add\n options to configure, edit configure.in, and then run autoconf to\n generate configure.\n \n When configure is run by the user, it tests various OS capabilities,\n stores those in config.status and config.cache, and modifies a list of\n *.in files. For example, if there exists a Makefile.in, configure\n generates a Makefile that contains substitutions for all @var@\n parameters found by configure.\n \n When you need to edit files, make sure you don't waste time modifying\n files generated by configure. Edit the *.in file, and re-run configure\n to recreate the needed file. If you run make distclean from the\n top-level source directory, all files derived by configure are\n removed, so you see only the file contained in the source\n distribution.\n \n 12) How do I add a new port?\n \n There are a variety of places that need to be modified to add a new\n port. First, start in the src/template directory. Add an appropriate\n entry for your OS. Also, use src/config.guess to add your OS to\n src/template/.similar. You shouldn't match the OS version exactly. The\n configure test will look for an exact OS version number, and if not\n found, find a match without version number. Edit src/configure.in to\n add your new OS. (See configure item above.) You will need to run\n autoconf, or patch src/configure too.\n \n Then, check src/include/port and add your new OS file, with\n appropriate values. Hopefully, there is already locking code in\n src/include/storage/s_lock.h for your CPU. There is a backend/port\n directory if you need special files for your OS.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 9 Jul 1999 20:52:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "New Developers FAQ items" } ]
[ { "msg_contents": "For Digital UNIX 4.0D, shared libraries are created by:\n\t$ ld -shared -expect_unresolved \"*\" -o foo.so [objects]\n\nThis presents a problem for mkMakefile.tcldefs.sh.in. In tclConfig.sh:\n\tTCL_SHLIB_LD='ld -shared -expect_unresolved \"*\"'\n\nIn mkMakefile.tcldefs.sh.in:\n\tcat @TCL_CONFIG_SH@ |\n\tegrep '^TCL_|^TK_' |\n\twhile read inp\n\tdo\n\t\teval eval echo $inp\n\tdone >Makefile.tcldefs\n\nBecause of this, we wind up with the following in Makefile.tcldefs to\ncreated shared libraries on Digital UNIX because of the eval:\n\tTCL_SHLIB_LD=ld -shared -expect_unresolved *\n\nThe \"*\" needs to be quoted to avoid shell expansion. How about the\nfollowing:\n\tcat @TCL_CONFIG_SH@ |\n\tegrep '^TCL_|^TK_' |\n\tsed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\"\n\n-- \nalbert chin ([email protected])\n", "msg_date": "Sat, 10 Jul 1999 00:42:46 -0500 (CDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in 6.5" }, { "msg_contents": "I didn't understand this the first time you sent it either.\n\nSend me a patch to review, please.\n\n\n> For Digital UNIX 4.0D, shared libraries are created by:\n> \t$ ld -shared -expect_unresolved \"*\" -o foo.so [objects]\n> \n> This presents a problem for mkMakefile.tcldefs.sh.in. In tclConfig.sh:\n> \tTCL_SHLIB_LD='ld -shared -expect_unresolved \"*\"'\n> \n> In mkMakefile.tcldefs.sh.in:\n> \tcat @TCL_CONFIG_SH@ |\n> \tegrep '^TCL_|^TK_' |\n> \twhile read inp\n> \tdo\n> \t\teval eval echo $inp\n> \tdone >Makefile.tcldefs\n> \n> Because of this, we wind up with the following in Makefile.tcldefs to\n> created shared libraries on Digital UNIX because of the eval:\n> \tTCL_SHLIB_LD=ld -shared -expect_unresolved *\n> \n> The \"*\" needs to be quoted to avoid shell expansion. How about the\n> following:\n> \tcat @TCL_CONFIG_SH@ |\n> \tegrep '^TCL_|^TK_' |\n> \tsed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\"\n> \n> -- \n> albert chin ([email protected])\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 10 Jul 1999 02:45:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in\n 6.5" }, { "msg_contents": "On Sat, Jul 10, 1999 at 02:45:47AM -0400, Bruce Momjian wrote:\n> I didn't understand this the first time you sent it either.\n> \n> Send me a patch to review, please.\n\n--- src/pl/tcl/mkMakefile.tcldefs.sh.in.orig\tFri Jul 9 08:29:09 1999\n+++ src/pl/tcl/mkMakefile.tcldefs.sh.in\tFri Jul 9 08:29:49 1999\n@@ -8,9 +8,6 @@\n \n cat @TCL_CONFIG_SH@ |\n egrep '^TCL_|^TK_' |\n- while read inp\n- do\n-\t eval eval echo $inp\n- done >Makefile.tcldefs\n+ sed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\" >Makefile.tcldefs\n \n exit 0\n\n> \n> > For Digital UNIX 4.0D, shared libraries are created by:\n> > \t$ ld -shared -expect_unresolved \"*\" -o foo.so [objects]\n> > \n> > This presents a problem for mkMakefile.tcldefs.sh.in. 
In tclConfig.sh:\n> > \tTCL_SHLIB_LD='ld -shared -expect_unresolved \"*\"'\n> > \n> > In mkMakefile.tcldefs.sh.in:\n> > \tcat @TCL_CONFIG_SH@ |\n> > \tegrep '^TCL_|^TK_' |\n> > \twhile read inp\n> > \tdo\n> > \t\teval eval echo $inp\n> > \tdone >Makefile.tcldefs\n> > \n> > Because of this, we wind up with the following in Makefile.tcldefs to\n> > created shared libraries on Digital UNIX because of the eval:\n> > \tTCL_SHLIB_LD=ld -shared -expect_unresolved *\n> > \n> > The \"*\" needs to be quoted to avoid shell expansion. How about the\n> > following:\n> > \tcat @TCL_CONFIG_SH@ |\n> > \tegrep '^TCL_|^TK_' |\n> > \tsed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\"\n> > \n> > -- \n> > albert chin ([email protected])\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n\n-- \nalbert chin ([email protected])\n\n", "msg_date": "Sat, 10 Jul 1999 02:07:47 -0500 (CDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in\n 6.5" }, { "msg_contents": "On Sat, Jul 10, 1999 at 02:45:47AM -0400, Bruce Momjian wrote:\n> I didn't understand this the first time you sent it either.\n> \n> Send me a patch to review, please.\n\n--- src/pl/tcl/mkMakefile.tcldefs.sh.in.orig\tFri Jul 9 08:29:09 1999\n+++ src/pl/tcl/mkMakefile.tcldefs.sh.in\tFri Jul 9 08:29:49 1999\n@@ -8,9 +8,6 @@\n \n cat @TCL_CONFIG_SH@ |\n egrep '^TCL_|^TK_' |\n- while read inp\n- do\n-\t eval eval echo $inp\n- done >Makefile.tcldefs\n+ sed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\" >Makefile.tcldefs\n \n exit 0\n\n> \n> > For Digital UNIX 4.0D, shared libraries are created by:\n> > \t$ ld -shared -expect_unresolved \"*\" -o foo.so [objects]\n> > \n> > This presents a problem for mkMakefile.tcldefs.sh.in. In tclConfig.sh:\n> > \tTCL_SHLIB_LD='ld -shared -expect_unresolved \"*\"'\n> > \n> > In mkMakefile.tcldefs.sh.in:\n> > \tcat @TCL_CONFIG_SH@ |\n> > \tegrep '^TCL_|^TK_' |\n> > \twhile read inp\n> > \tdo\n> > \t\teval eval echo $inp\n> > \tdone >Makefile.tcldefs\n> > \n> > Because of this, we wind up with the following in Makefile.tcldefs to\n> > created shared libraries on Digital UNIX because of the eval:\n> > \tTCL_SHLIB_LD=ld -shared -expect_unresolved *\n> > \n> > The \"*\" needs to be quoted to avoid shell expansion. How about the\n> > following:\n> > \tcat @TCL_CONFIG_SH@ |\n> > \tegrep '^TCL_|^TK_' |\n> > \tsed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\"\n> > \n> > -- \n> > albert chin ([email protected])\n> \n> -- \n> Bruce Momjian | http://www.op.net/~candle\n\n-- \nalbert chin ([email protected])\n", "msg_date": "Sat, 10 Jul 1999 02:07:47 -0500 (CDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in\n 6.5" }, { "msg_contents": "> On Sat, Jul 10, 1999 at 02:45:47AM -0400, Bruce Momjian wrote:\n> > I didn't understand this the first time you sent it either.\n> > \n> > Send me a patch to review, please.\n> \n> --- src/pl/tcl/mkMakefile.tcldefs.sh.in.orig\tFri Jul 9 08:29:09 1999\n> +++ src/pl/tcl/mkMakefile.tcldefs.sh.in\tFri Jul 9 08:29:49 1999\n> @@ -8,9 +8,6 @@\n> \n> cat @TCL_CONFIG_SH@ |\n> egrep '^TCL_|^TK_' |\n> - while read inp\n> - do\n> -\t eval eval echo $inp\n> - done >Makefile.tcldefs\n> + sed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\" >Makefile.tcldefs\n> \n\nI understand what your patch does, and it looks OK, but any idea why the\n'eval eval' was there, and is it safe to skip it? 
I can apply this to\n6.6.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 10 Jul 1999 10:36:25 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in\n 6.5" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I understand what your patch does, and it looks OK, but any idea why the\n> 'eval eval' was there, and is it safe to skip it?\n\nI think the idea is to expand any shell variable references that appear\nin tclConfig.sh. If so, the code is wrong anyway, since the expansion\nwill occur in a subshell that hasn't actually executed tclConfig.sh,\nand therefore does not have definitions for the referenced variables.\nWe'll always get empty strings substituted for the references, and that\nmay be the wrong thing.\n\nTaking an example that actually appears in my system's tclConfig.sh:\nTCL_LIB_FILE='libtcl8.0${TCL_DBGX}.a'\n\nI stick this into shell variable inp:\n\n$ read inp\nTCL_LIB_FILE='libtcl8.0${TCL_DBGX}.a'\t\t--- typed by me\n$ echo $inp\nTCL_LIB_FILE='libtcl8.0${TCL_DBGX}.a'\n\nNow one eval gets rid of the outer single quotes:\n\n$ eval echo $inp\nTCL_LIB_FILE=libtcl8.0${TCL_DBGX}.a\n\nand another one will perform a round of shell interpretation on what's\ninside the quotes:\n\n$ eval eval echo $inp\nTCL_LIB_FILE=libtcl8.0.a\n\nwhich is not what we want. In this particular case it's harmless\nbecause TCL_DBGX should be empty, but if I had a debugging build\nof Tcl installed here then the makefile would fail because it would\nhave the wrong value of TCL_LIB_FILE.\n\nIf we do something like what Albert is proposing, the sed script\nwill need to convert ${...} to $(...) so that shell variable references\nbecome make variable references.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Jul 1999 11:37:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in\n\t6.5" }, { "msg_contents": "Bruce Momjian wrote:\n\n>\n> > On Sat, Jul 10, 1999 at 02:45:47AM -0400, Bruce Momjian wrote:\n> > > I didn't understand this the first time you sent it either.\n> > >\n> > > Send me a patch to review, please.\n> >\n> > --- src/pl/tcl/mkMakefile.tcldefs.sh.in.orig Fri Jul 9 08:29:09 1999\n> > +++ src/pl/tcl/mkMakefile.tcldefs.sh.in Fri Jul 9 08:29:49 1999\n> > @@ -8,9 +8,6 @@\n> >\n> > cat @TCL_CONFIG_SH@ |\n> > egrep '^TCL_|^TK_' |\n> > - while read inp\n> > - do\n> > - eval eval echo $inp\n> > - done >Makefile.tcldefs\n> > + sed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\" >Makefile.tcldefs\n> >\n>\n> I understand what your patch does, and it looks OK, but any idea why the\n> 'eval eval' was there, and is it safe to skip it? I can apply this to\n> 6.6.\n\n As far as I can recall, the first of all versions I've\n created did it mainly that way (with a simple sed(1) call).\n But since tclConfig.sh is a shell script, there have to be\n shell variable expansions done on some platforms and that\n resulted finally in the double eval. 
So I would consider the\n above a little step for a man, but a big leap backward for\n mankind.\n\n Instead, the result of the double eval must get special\n characters quoted in some way.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Sun, 11 Jul 1999 15:46:26 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Re: [HACKERS] Problems with\n\tsrc/pl/tcl/mkMakefile.tcldefs.sh.in in 6.5" }, { "msg_contents": "On Sun, Jul 11, 1999 at 03:46:26PM +0200, Jan Wieck wrote:\n> Bruce Momjian wrote:\n> \n> >\n> > > On Sat, Jul 10, 1999 at 02:45:47AM -0400, Bruce Momjian wrote:\n> > > > I didn't understand this the first time you sent it either.\n> > > >\n> > > > Send me a patch to review, please.\n> > >\n> > > --- src/pl/tcl/mkMakefile.tcldefs.sh.in.orig Fri Jul 9 08:29:09 1999\n> > > +++ src/pl/tcl/mkMakefile.tcldefs.sh.in Fri Jul 9 08:29:49 1999\n> > > @@ -8,9 +8,6 @@\n> > >\n> > > cat @TCL_CONFIG_SH@ |\n> > > egrep '^TCL_|^TK_' |\n> > > - while read inp\n> > > - do\n> > > - eval eval echo $inp\n> > > - done >Makefile.tcldefs\n> > > + sed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\" >Makefile.tcldefs\n> > >\n> >\n> > I understand what your patch does, and it looks OK, but any idea why the\n> > 'eval eval' was there, and is it safe to skip it? I can apply this to\n> > 6.6.\n> \n> As far as I can recall, the first of all versions I've\n> created did it mainly that way (with a simple sed(1) call).\n> But since tclConfig.sh is a shell script, there have to be\n> shell variable expansions done on some platforms and that\n> resulted finally in the double eval. So I would consider the\n> above a little step for a man, but a big leap backward for\n> mankind.\n> \n> Instead, the result of the double eval must get special\n> characters quoted in some way.\n\nI just looked at the man for make on Solaris, Digital UNIX, HP-UX, and\nIRIX and all support $() and ${} for variable expansion. BTW, I also\nlooked at the Makefile generated by Tk and it assumes make can handle\n${}. It basically does one eval of tclConfig.sh and uses the result in\nmake variables. As Tk assumes make can handle ${}, can we safely\nassume the same? With this, we'd do one eval rather than two before.\nThis is the same as the sed line I posted because everything in\ntclConfig.sh is VAR='VAL' so the one eval would just strip the single\nquotes (which sed did). But, if tclConfig.sh changes to some use of\ndouble quotes in the future, we won't break.\n\n> Jan\n\n-- \nalbert chin ([email protected])\n\n", "msg_date": "Sun, 11 Jul 1999 13:54:06 -0500 (CDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [ADMIN] Re: [HACKERS] Problems with\n\tsrc/pl/tcl/mkMakefile.tcldefs.sh.in in 6.5" }, { "msg_contents": "I believe this is fixed.\n\n> For Digital UNIX 4.0D, shared libraries are created by:\n> \t$ ld -shared -expect_unresolved \"*\" -o foo.so [objects]\n> \n> This presents a problem for mkMakefile.tcldefs.sh.in. 
In tclConfig.sh:\n> \tTCL_SHLIB_LD='ld -shared -expect_unresolved \"*\"'\n> \n> In mkMakefile.tcldefs.sh.in:\n> \tcat @TCL_CONFIG_SH@ |\n> \tegrep '^TCL_|^TK_' |\n> \twhile read inp\n> \tdo\n> \t\teval eval echo $inp\n> \tdone >Makefile.tcldefs\n> \n> Because of this, we wind up with the following in Makefile.tcldefs to\n> created shared libraries on Digital UNIX because of the eval:\n> \tTCL_SHLIB_LD=ld -shared -expect_unresolved *\n> \n> The \"*\" needs to be quoted to avoid shell expansion. How about the\n> following:\n> \tcat @TCL_CONFIG_SH@ |\n> \tegrep '^TCL_|^TK_' |\n> \tsed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\"\n> \n> -- \n> albert chin ([email protected])\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 21 Sep 1999 17:39:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in\n 6.5" }, { "msg_contents": "Bruce Momjian wrote:\n\n>\n> I believe this is fixed.\n\n Again one of the frequently appearing items. So I would call\n it more \"hacked quiet - for now\" instead of \"fixed\".\n\n The script is only called if PostgreSQL is configured\n --with_tcl. In that case, missing the Tcl(/Tk) includes\n and/or libs would cause errors and a compilation abort. Can't\n we assume that if the user configured with Tcl, she would at\n least have a working tclsh(1)? I think we can. I don't know\n of any \"normal\" Tcl-installation where the libs are present\n but no working tclsh(1).\n\n Since Tcl itself has much better capabilities than a sh(1) or\n sed(1), it might be reasonable to source in the tclConfig.sh\n into mkMakefile.tclsh.sh and pipe a \"set\" trough a tcl script\n that does the real conversion into proper Makefile escaping.\n\n An advantage would be that the Tcl script could check if the\n version of the systems default tclsh(1) is the same as the\n one in the choosen tclConfig.sh file and notice the user if\n not. Using different Tcl versions in the libs and includes\n than in the tclsh(1) executable could cause horrible\n problems. I'm unhappy with the current libpgtcl for a long\n time, but the changes I have in mind would make it\n incompatible with pre-8.0 Tcl. So the changes will cause a\n bunch of #if...#else...#endif that MUST match the later used\n tclsh(1) at compile time or the dynamic loader of Tcl would\n fail.\n\n\nJan\n\nBTW: Is it only me or do others too wonder why their private\n wish-list is sometimes longer than our official TODO?\n\n>\n> > For Digital UNIX 4.0D, shared libraries are created by:\n> > $ ld -shared -expect_unresolved \"*\" -o foo.so [objects]\n> >\n> > This presents a problem for mkMakefile.tcldefs.sh.in. In tclConfig.sh:\n> > TCL_SHLIB_LD='ld -shared -expect_unresolved \"*\"'\n> >\n> > In mkMakefile.tcldefs.sh.in:\n> > cat @TCL_CONFIG_SH@ |\n> > egrep '^TCL_|^TK_' |\n> > while read inp\n> > do\n> > eval eval echo $inp\n> > done >Makefile.tcldefs\n> >\n> > Because of this, we wind up with the following in Makefile.tcldefs to\n> > created shared libraries on Digital UNIX because of the eval:\n> > TCL_SHLIB_LD=ld -shared -expect_unresolved *\n> >\n> > The \"*\" needs to be quoted to avoid shell expansion. 
How about the\n> > following:\n> > cat @TCL_CONFIG_SH@ |\n> > egrep '^TCL_|^TK_' |\n> > sed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/\"\n> >\n> > --\n> > albert chin ([email protected])\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 22 Sep 1999 00:55:35 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in\n 6.5" }, { "msg_contents": "> Jan\n> \n> BTW: Is it only me or do others too wonder why their private\n> wish-list is sometimes longer than our official TODO?\n\nThat's bad. Our official TODO is very large.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 21 Sep 1999 19:38:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in\n 6.5" }, { "msg_contents": "[email protected] (Jan Wieck) writes:\n> Bruce Momjian wrote:\n>> I believe this is fixed.\n\n> Again one of the frequently appearing items. So I would call\n> it more \"hacked quiet - for now\" instead of \"fixed\".\n\nNo, it's really fixed, or anyway Albert's complaint is fixed\n(there was at least one other related complaint recently, too).\n\nThe problem was not whether we could find tclConfig.sh, it was\nwhether we were coping correctly with shell metacharacters in\nthe variable values. We were not, but I fixed that with a redesign\nof the way the script read the file...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Sep 1999 21:27:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Problems with src/pl/tcl/mkMakefile.tcldefs.sh.in in\n\t6.5" } ]
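Condensed, the quoting hazard and the two remedies discussed in this thread (the TCL_SHLIB_LD value is the one reported above; the ${...}-to-$(...) rewrite is Tom's caveat, sketched here, not necessarily the committed fix):

    inp='TCL_SHLIB_LD='\''ld -shared -expect_unresolved "*"'\'
    eval eval echo $inp
    # -> TCL_SHLIB_LD=ld -shared -expect_unresolved *
    #    (both quoting layers consumed; make's shell will glob the bare *)

    echo "$inp" | sed -e "s/^\([^=]*\)='\(.*\)'$/\1=\2/"
    # -> TCL_SHLIB_LD=ld -shared -expect_unresolved "*"   (quotes survive)

    echo "TCL_LIB_FILE='libtcl8.0\${TCL_DBGX}.a'" |
        sed -e "s/^\([^=]*\)='\(.*\)'$/\1=\2/" \
            -e 's/\${\([A-Za-z_]*\)}/$(\1)/g'
    # -> TCL_LIB_FILE=libtcl8.0$(TCL_DBGX).a   (shell reference becomes a make one)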
[ { "msg_contents": "\nDoes anyone know if someone else picked up the maintenance of Wdb-p95? \nDoug Dunlop (the guy that was) seems to have disappeared along with the\npackage. \n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Sat, 10 Jul 1999 10:59:08 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Wdb-p95" }, { "msg_contents": "On Sat, 10 Jul 1999, Vince Vielhaber wrote:\n\n> Date: Sat, 10 Jul 1999 10:59:08 -0400 (EDT)\n> From: Vince Vielhaber <[email protected]>\n> To: [email protected]\n> Subject: [HACKERS] Wdb-p95\n> \n> \n> Does anyone know if someone else picked up the maintenance of Wdb-p95? \n> Doug Dunlop (the guy that was) seems to have disappeared along with the\n> package. \n\nCheck WDBI - http://www.wdbi.net/\n\n\tRegards,\n\t\tOleg\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n> # include <std/disclaimers.h> TEAM-OS2\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 10 Jul 1999 21:24:42 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Wdb-p95" }, { "msg_contents": "\nOn 10-Jul-99 Oleg Bartunov wrote:\n> On Sat, 10 Jul 1999, Vince Vielhaber wrote:\n> \n>> Date: Sat, 10 Jul 1999 10:59:08 -0400 (EDT)\n>> From: Vince Vielhaber <[email protected]>\n>> To: [email protected]\n>> Subject: [HACKERS] Wdb-p95\n>> \n>> \n>> Does anyone know if someone else picked up the maintenance of Wdb-p95? \n>> Doug Dunlop (the guy that was) seems to have disappeared along with the\n>> package. \n> \n> Check WDBI - http://www.wdbi.net/\n\nPerfect! Thanks!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Sat, 10 Jul 1999 13:27:52 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Wdb-p95" } ]
[ { "msg_contents": "I have updated the needed files in preparation for 6.5.1.\n\nThomas, release.sgml and install.sgml are ready to be converted to text\nfiles.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 10 Jul 1999 13:16:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "6.5.1 release" }, { "msg_contents": "> I have updated the needed files in preparation for 6.5.1.\n> Thomas, release.sgml and install.sgml are ready to be converted to text\n> files.\n\nJust back from vacation, and still wading through 800 e-mail messages\n:(\n\nWhere are we on this?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 15:03:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.1 release" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Where are we on this?\n\nCVS tree split was done yesterday, we're planning 6.5.1 release Monday,\nI believe.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 11:16:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: 6.5.1 release " }, { "msg_contents": "> > I have updated the needed files in preparation for 6.5.1.\n> > Thomas, release.sgml and install.sgml are ready to be converted to text\n> > files.\n> \n> Just back from vacation, and still wading through 800 e-mail messages\n> :(\n> \n> Where are we on this?\n\nWaiting for you. Release is 19th.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 11:18:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 6.5.1 release" }, { "msg_contents": "On Wed, 14 Jul 1999, Tom Lane wrote:\n\n> Thomas Lockhart <[email protected]> writes:\n> > Where are we on this?\n> \n> CVS tree split was done yesterday, we're planning 6.5.1 release Monday,\n> I believe.\n\nCorrect, but if Thomas needs a little more time to go over things, I have\nno problems with that either...that's the poin tof hte branch ;)\n\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 14 Jul 1999 12:36:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: 6.5.1 release " }, { "msg_contents": "> > > Where are we on this?\n> > CVS tree split was done yesterday, we're planning 6.5.1 release Monday,\n> > I believe.\n> Correct, but if Thomas needs a little more time to go over things, I have\n> no problems with that either...that's the poin tof hte branch ;)\n\nI think I'm OK with the schedule, and I've caught up with e-mail so I\nthink I know what's going on ;)\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 16:30:19 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: 6.5.1 release" }, { "msg_contents": "> I have updated the needed files in preparation for 6.5.1.\n> Thomas, release.sgml and install.sgml are ready to be converted to \n> text files.\n\nOK. I've got patches for INSTALL and RELEASE ready to go, and will fix\nup postgres.tar.gz and admin.tar.gz soon.\n\nBut I've got cvs troubles on postgresql.org:\n\n> cvs -Q checkout -rREL6_5 pgsql\n> cvs commit HISTORY\ncvs commit: sticky tag `REL6_5' for file `HISTORY' is not a branch\n\nRight. I saw something about that in my mail backlog. So scrappy\napparently created a branch after creating the tag which wanted to be\na branch. So try again:\n\n> cvs checkout -rREL6_5_PATCHES pgsql\ncvs [checkout aborted]: cannot write\n/usr/local/cvsroot/CVSROOT/val-tags: Permission denied\n\n?? Any suggestions?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 21:25:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.1 release" }, { "msg_contents": "\nGuess is as good as mine?\n\nLooking into it right now, give me a couple...\n\nOn Wed, 14 Jul 1999, Thomas Lockhart wrote:\n\n> > I have updated the needed files in preparation for 6.5.1.\n> > Thomas, release.sgml and install.sgml are ready to be converted to \n> > text files.\n> \n> OK. I've got patches for INSTALL and RELEASE ready to go, and will fix\n> up postgres.tar.gz and admin.tar.gz soon.\n> \n> But I've got cvs troubles on postgresql.org:\n> \n> > cvs -Q checkout -rREL6_5 pgsql\n> > cvs commit HISTORY\n> cvs commit: sticky tag `REL6_5' for file `HISTORY' is not a branch\n> \n> Right. I saw something about that in my mail backlog. So scrappy\n> apparently created a branch after creating the tag which wanted to be\n> a branch. So try again:\n> \n> > cvs checkout -rREL6_5_PATCHES pgsql\n> cvs [checkout aborted]: cannot write\n> /usr/local/cvsroot/CVSROOT/val-tags: Permission denied\n> \n> ?? 
Any suggestions?\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 14 Jul 1999 19:27:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.1 release" }, { "msg_contents": "\nOdd...try it now. \n\nOn Wed, 14 Jul 1999, Thomas Lockhart wrote:\n\n> > I have updated the needed files in preparation for 6.5.1.\n> > Thomas, release.sgml and install.sgml are ready to be converted to \n> > text files.\n> \n> OK. I've got patches for INSTALL and RELEASE ready to go, and will fix\n> up postgres.tar.gz and admin.tar.gz soon.\n> \n> But I've got cvs troubles on postgresql.org:\n> \n> > cvs -Q checkout -rREL6_5 pgsql\n> > cvs commit HISTORY\n> cvs commit: sticky tag `REL6_5' for file `HISTORY' is not a branch\n> \n> Right. I saw something about that in my mail backlog. So scrappy\n> apparently created a branch after creating the tag which wanted to be\n> a branch. So try again:\n> \n> > cvs checkout -rREL6_5_PATCHES pgsql\n> cvs [checkout aborted]: cannot write\n> /usr/local/cvsroot/CVSROOT/val-tags: Permission denied\n> \n> ?? Any suggestions?\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 14 Jul 1999 19:30:25 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.1 release" }, { "msg_contents": "> Odd...try it now.\n\nOK, works. Thanks.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 23:39:58 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 6.5.1 release" } ]
[ { "msg_contents": "\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 11 Jul 1999 00:09:02 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "I got a problem with query:\n\nselect distinct (date), bytes from access_log;\n\nwhich works but not as I expect. I thought this query will select\nall rows with distinct values of 'date' column, but it get\ndistinct pairs 'date, bytes' . From documnetation I see\n\n\"DISTINCT will eliminate all duplicate rows from the selection. \nDISTINCT ON column will eliminate all duplicates in the specified column; \nthis is equivalent to using GROUP BY column. \nALL will return all candidate rows, including duplicates.\"\n \ndiscovery=> select distinct on date,bytes from access_log;\nERROR: parser: parse error at or near \",\"\n\n\tRegards,\n\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Sun, 11 Jul 1999 00:16:39 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT DISTINCT question" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> discovery=> select distinct on date,bytes from access_log;\n> ERROR: parser: parse error at or near \",\"\n\nThe syntax for SELECT DISTINCT ON is just as brain-damaged as the\nfunctionality itself: there's no comma after the column name.\nYou want\n\nselect distinct on date date,bytes from access_log;\n\nThe reason the functionality is brain-damaged is that there's no way to\nknow which tuple out of the set of tuples with a given \"date\" value will\nbe the one returned.\n\nSELECT DISTINCT ON is not in SQL92, and I think it shouldn't be in\nPostgres either...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Jul 1999 17:18:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SELECT DISTINCT question " }, { "msg_contents": "On Sat, 10 Jul 1999, Tom Lane wrote:\n\n> Date: Sat, 10 Jul 1999 17:18:28 -0400\n> From: Tom Lane <[email protected]>\n> To: Oleg Bartunov <[email protected]>\n> Cc: [email protected], [email protected]\n> Subject: [SQL] Re: [HACKERS] SELECT DISTINCT question \n> \n> Oleg Bartunov <[email protected]> writes:\n> > discovery=> select distinct on date,bytes from access_log;\n> > ERROR: parser: parse error at or near \",\"\n> \n> The syntax for SELECT DISTINCT ON is just as brain-damaged as the\n> functionality itself: there's no comma after the column name.\n> You want\n> \n> select distinct on date date,bytes from access_log;\n> \n\nthanks, this works. 
But why does the parser complain about such a query:\n\ndiscovery=> select distinct on a.date a.date, a.bytes from access_log a;\nERROR: parser: parse error at or near \".\"\n\nIn this query I could just omit '.', but in a more complex query \nI might need one.\n\n> The reason the functionality is brain-damaged is that there's no way to\n> know which tuple out of the set of tuples with a given \"date\" value will\n> be the one returned.\n> \n> SELECT DISTINCT ON is not in SQL92, and I think it shouldn't be in\n> Postgres either...\n\n\nI'm not an SQL expert, but if it works and this feature is in the standard\nbut only the syntax is different, why not just use the standard\n\nselect distinct(date), bytes from access_log;\n\nOr am I missing something here?\n\n\n\tRegards,\n\t\tOleg\n> \n> \t\t\tregards, tom lane\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 11 Jul 1999 10:09:24 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] Re: [HACKERS] SELECT DISTINCT question " }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> thanks, this works. But why does the parser complain about such a query:\n\n> discovery=> select distinct on a.date a.date, a.bytes from access_log a;\n> ERROR: parser: parse error at or near \".\"\n\nProbably the grammar specifies just <column name> and not anything\nmore complex after DISTINCT ON. It'd be risky to try to accept a\ngeneral expression after ON, due to the silly decision to leave out\nany terminating punctuation.\n\n>> SELECT DISTINCT ON is not in SQL92, and I think it shouldn't be in\n>> Postgres either...\n\n> I'm not an SQL expert, but if it works and this feature is in the standard\n> but only the syntax is different,\n\nNo, there is no feature in SQL that allows DISTINCT on a subset of\ncolumns, period. This is not merely a matter of syntax, it's a\nfundamental semantic issue.\n\n> why not just use the standard\n>\n> select distinct(date), bytes from access_log;\n>\n> Or am I missing something here?\n\nI don't think that does what you expect it to (hint: the\nparentheses are redundant).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Jul 1999 11:38:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] SELECT DISTINCT question " }, { "msg_contents": "Oleg Bartunov wrote:\n> \n> I got a problem with query:\n> \n> select distinct (date), bytes from access_log;\n> \n> which works but not as I expect. I thought this query would select\n> all rows with distinct values of 'date' column, but it gets\n> distinct pairs 'date, bytes'. 
From documentation I see\n> \n> \"DISTINCT will eliminate all duplicate rows from the selection.\n> DISTINCT ON column will eliminate all duplicates in the specified column;\n> this is equivalent to using GROUP BY column.\n\nIf it is equivalent to GROUP BY then it should allow only aggregates \nin non-distinct columns, like:\n\nselect distinct on date date, sum(bytes) from access_log;\n\nIf it does not, then it should be filed as a bug imho.\n\n-----------------\nHannu\n", "msg_date": "Tue, 13 Jul 1999 23:50:57 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SELECT DISTINCT question" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n>> \"DISTINCT will eliminate all duplicate rows from the selection.\n>> DISTINCT ON column will eliminate all duplicates in the specified column;\n>> this is equivalent to using GROUP BY column.\"\n\n> If it is equivalent to GROUP BY then it should allow only aggregates \n> in non-distinct columns, like:\n> select distinct on date date, sum(bytes) from access_log;\n> If it does not, then it should be filed as a bug imho.\n\nIt does not. Whether that is a bug is hard to say, since there is no\nstandard I know of that says what it *is* supposed to do.\n\nIf you look at the select_distinct_on regress test outputs, I bet you\nwill be even less happy:\n\nQUERY: SELECT DISTINCT ON string4 two, string4, ten\n\t FROM tmp\n ORDER BY two using <, string4 using <, ten using <;\ntwo|string4|ten\n---+-------+---\n 0|AAAAxx | 0\n 0|HHHHxx | 0\n 0|OOOOxx | 0\n 0|VVVVxx | 0\n 1|AAAAxx | 1\n 1|HHHHxx | 1\n 1|OOOOxx | 1\n 1|VVVVxx | 1\n(8 rows)\n\nThat's not exactly my idea of \"distinct\" values of string4 ---\nbut apparently whoever made up the regress test thought it was OK!\n\nCan anyone defend this feature or provide a coherent definition\nof what it's supposed to be doing? My urge to rip it out is\ngrowing stronger and stronger...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jul 1999 17:31:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] SELECT DISTINCT question " }, { "msg_contents": "Tom, any status on this DISTINCT ON ripout?\n\n\n> Hannu Krosing <[email protected]> writes:\n> >> \"DISTINCT will eliminate all duplicate rows from the selection.\n> >> DISTINCT ON column will eliminate all duplicates in the specified column;\n> >> this is equivalent to using GROUP BY column.\"\n> \n> > If it is equivalent to GROUP BY then it should allow only aggregates \n> > in non-distinct columns, like:\n> > select distinct on date date, sum(bytes) from access_log;\n> > If it does not, then it should be filed as a bug imho.\n> \n> It does not. Whether that is a bug is hard to say, since there is no\n> standard I know of that says what it *is* supposed to do.\n> \n> If you look at the select_distinct_on regress test outputs, I bet you\n> will be even less happy:\n> \n> QUERY: SELECT DISTINCT ON string4 two, string4, ten\n> \t FROM tmp\n> ORDER BY two using <, string4 using <, ten using <;\n> two|string4|ten\n> ---+-------+---\n> 0|AAAAxx | 0\n> 0|HHHHxx | 0\n> 0|OOOOxx | 0\n> 0|VVVVxx | 0\n> 1|AAAAxx | 1\n> 1|HHHHxx | 1\n> 1|OOOOxx | 1\n> 1|VVVVxx | 1\n> (8 rows)\n> \n> That's not exactly my idea of \"distinct\" values of string4 ---\n> but apparently whoever made up the regress test thought it was OK!\n> \n> Can anyone defend this feature or provide a coherent definition\n> of what it's supposed to be doing? 
My urge to rip it out is\n> growing stronger and stronger...\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Sep 1999 13:18:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] SELECT DISTINCT question" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, any status on this DISTINCT ON ripout?\n\nI haven't done anything about it, if that's what you mean.\n\nI didn't get any indication of whether anyone else agreed with me\n(maybe the lack of loud complaints should be taken as consent?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Sep 1999 17:27:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] SELECT DISTINCT question " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, any status on this DISTINCT ON ripout?\n> \n> I haven't done anything about it, if that's what you mean.\n> \n> I didn't get any indication of whether anyone else agreed with me\n> (maybe the lack of loud complaints should be taken as consent?)\n\nI think so.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Sep 1999 17:56:18 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] SELECT DISTINCT question" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Tom, any status on this DISTINCT ON ripout?\n> \n> I haven't done anything about it, if that's what you mean.\n> \n> I didn't get any indication of whether anyone else agreed with me\n> (maybe the lack of loud complaints should be taken as consent?)\n\nI will wrap up the mail messages, and put it on the TODO list.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Sep 1999 17:58:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] SELECT DISTINCT question" }, { "msg_contents": "unsubscribe\n\n", "msg_date": "Fri, 24 Sep 1999 09:40:43 +1000", "msg_from": "\"Sean Mullen\" <[email protected]>", "msg_from_op": false, "msg_subject": "" } ]
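To make the preceding thread concrete: using Oleg's access_log(date, bytes) table, the usual goals behind DISTINCT ON can be reached with standard SQL. This is a sketch only; the second form assumes the row wanted per date is the one with the largest bytes value, a choice DISTINCT ON itself leaves undefined:

select date, sum(bytes) from access_log group by date;

select date, bytes from access_log a
where bytes = (select max(bytes) from access_log b where b.date = a.date);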
[ { "msg_contents": "--- src/interfaces/libpq++/Makefile.in.orig\tThu Jul 8 00:10:39 1999\n+++ src/interfaces/libpq++/Makefile.in\tThu Jul 8 10:26:12 1999\n@@ -27,9 +27,7 @@\n # because of our inclusion of c.h and we don't know how to stop that.\n \n ifeq ($(CXX), g++)\n-CXXFLAGS= -Wno-error\n-else\n-CXXFLAGS=\n+CXXFLAGS+= -Wno-error\n endif\n \n CXXFLAGS+= -I$(SRCDIR)/backend \\\n@@ -52,10 +50,6 @@\n \n # Shared library stuff, also default 'all' target\n include $(SRCDIR)/Makefile.shlib\n-\n-\n-# Pull shared-lib CFLAGS into CXXFLAGS\n-CXXFLAGS+= $(CFLAGS)\n \n \n .PHONY: examples\n--- src/template/generic.orig\tSat Jul 10 02:09:44 1999\n+++ src/template/generic\tSat Jul 10 09:50:54 1999\n@@ -1,10 +1,7 @@\n AROPT:crs\n-CFLAGS:\n-SHARED_LIB:\n ALL:\n SRCH_INC:\n SRCH_LIB:\n USE_LOCALE:no\n-DLSUFFIX:.so\n YFLAGS:-d\n YACC:\n--- src/pl/tcl/mkMakefile.tcldefs.sh.in.orig\tSat Jul 10 14:09:52 1999\n+++ src/pl/tcl/mkMakefile.tcldefs.sh.in\tSat Jul 10 14:15:50 1999\n@@ -6,11 +6,11 @@\n exit 1\n fi\n \n+# Strip outer quotes from variable and expand ${VAR} to $(VAR) for\n+# interpretation by make\n cat @TCL_CONFIG_SH@ |\n egrep '^TCL_|^TK_' |\n- while read inp\n- do\n-\t eval eval echo $inp\n- done >Makefile.tcldefs\n+ sed -e \"s/^\\([^=]*\\)='\\(.*\\)'$/\\1=\\2/g\" \\\n+ -e 's/\\${\\([^}][^}]*\\)}/$(\\1)/g' >Makefile.tcldefs\n \n exit 0\n", "msg_date": "Sat, 10 Jul 1999 20:32:13 -0500 (CDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "[email protected] writes:\n> --- src/interfaces/libpq++/Makefile.in.orig\tThu Jul 8 00:10:39 1999\n> +++ src/interfaces/libpq++/Makefile.in\tThu Jul 8 10:26:12 1999\n> -# Pull shared-lib CFLAGS into CXXFLAGS\n> -CXXFLAGS+= $(CFLAGS)\n\nAlbert, if you're going to send in patches that potentially break things\non some platforms, you'd better include an explanation of why you think\nthe change is a good idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Jul 1999 12:17:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "None" }, { "msg_contents": "On Sun, Jul 11, 1999 at 12:17:41PM -0400, Tom Lane wrote:\n> [email protected] writes:\n> > --- src/interfaces/libpq++/Makefile.in.orig\tThu Jul 8 00:10:39 1999\n> > +++ src/interfaces/libpq++/Makefile.in\tThu Jul 8 10:26:12 1999\n> > -# Pull shared-lib CFLAGS into CXXFLAGS\n> > -CXXFLAGS+= $(CFLAGS)\n> \n> Albert, if you're going to send in patches that potentially break things\n> on some platforms, you'd better include an explanation of why you think\n> the change is a good idea.\n\nI accidentally sent the wrong patch. Sorry.\n\n> \t\t\tregards, tom lane\n\n-- \nalbert chin ([email protected])\n\n", "msg_date": "Sun, 11 Jul 1999 13:18:37 -0500 (CDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: your mail" } ]
[ { "msg_contents": "Here are the new items for 6.5.1. Changes?\n\n---------------------------------------------------------------------------\n\nAdd NT README file\nPortability fixes for linux_ppc, Irix, linux_alpha, OpenBSD, alpha\nRemove QUERY_LIMIT, use SELECT...LIMIT\nFix for EXPLAIN on inheritance(Tom)\nPatch to allow vacuum on multi-segment tables(Hiroshi)\nR=Tree optimizer selectivity fix(Tom)\nACL file descriptor leak fix(Atsushi Ogawa)\nNew expresssion subtree code(Tom)\nAvoid disk writes for read-only transactions(Vadim)\nFix for removal of temp tables if last transaction was aborted(Bruce)\nFix to prevent too large tuple from being created(Bruce)\nplpgsql fixes\nAllow port numbers 32k - 64k(Bruce)\nAdd ^ precidence(Bruce)\nRename sort files called pg_temp to pg_sorttemp(Bruce)\nFix for microseconds in time values(Tom)\nTutorial source cleanup\nNew linux_m68k port\nFix for sorting of NULL's in rare cases(Tom)\n", "msg_date": "Sat, 10 Jul 1999 22:00:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "6.5.1 CHANGES" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Here are the new items for 6.5.1. Changes?\n> ...\n> Fix for sorting of NULL's in rare cases(Tom)\n\nI dunno if it was \"rare\" or not --- basically, anytime you did a\nmulticolumn sort where some tuples match up to and including a\nNULL column, you'd find that the columns to the right of the NULL\nweren't sorted. Maybe instead write\n\n* Fix for sorting of NULLs in multicolumn sorts\n\nA couple other things to add:\n\n* Shared library dependencies fixed (this time for sure ;-))\n* Fixed glitches affecting GROUP BY in subselects\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Jul 1999 11:52:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 CHANGES " }, { "msg_contents": "Done.\n\n> Bruce Momjian <[email protected]> writes:\n> > Here are the new items for 6.5.1. Changes?\n> > ...\n> > Fix for sorting of NULL's in rare cases(Tom)\n> \n> I dunno if it was \"rare\" or not --- basically, anytime you did a\n> multicolumn sort where some tuples match up to and including a\n> NULL column, you'd find that the columns to the right of the NULL\n> weren't sorted. Maybe instead write\n> \n> * Fix for sorting of NULLs in multicolumn sorts\n> \n> A couple other things to add:\n> \n> * Shared library dependencies fixed (this time for sure ;-))\n> * Fixed glitches affecting GROUP BY in subselects\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jul 1999 14:01:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1 CHANGES" }, { "msg_contents": "This time I have committed following changes for multi-byte support:\n\no Fix some compiler warnings (contributed by Tomoaki Nishiyama)\no Add Win1250 (Czech) support (contributed Pavel Behal)\n---\nTatsuo Ishii\n", "msg_date": "Mon, 12 Jul 1999 07:59:50 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 CHANGES " }, { "msg_contents": "Done.\n\n> This time I have committed following changes for multi-byte support:\n> \n> o Fix some compiler warnings (contributed by Tomoaki Nishiyama)\n> o Add Win1250 (Czech) support (contributed Pavel Behal)\n> ---\n> Tatsuo Ishii\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jul 1999 22:25:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1 CHANGES" } ]
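One item in the list above, "Remove QUERY_LIMIT, use SELECT...LIMIT", deserves a quick illustration of the per-query form that replaces the old global setting. A minimal sketch; table and column names are illustrative only:

select * from access_log order by date desc limit 10;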
[ { "msg_contents": "Marc, do you have a release date set? I remember July 15. I know\nThomas has to regenerate the INSTALL and HISTORY files.\n\nI am glad you did not split the CVS tree. Tom Lane and I are both\nsitting on patches, but it is easier to sit on patches than to\ndouble-patch.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 10 Jul 1999 23:57:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "6.5.1 release date" }, { "msg_contents": "On Sat, 10 Jul 1999, Bruce Momjian wrote:\n\n> Marc, do you have a release date set? I remember July 15. I know\n> Thomas has to regenerate the INSTALL and HISTORY files.\n> \n> I am glad you did not split the CVS tree. Tom Lane and I are both\n> sitting on patches, but it is easier to sit on patches than to\n> double-patch.\n\nLet's say July 19th, which is a Monday, we do the 6.5.1 release *and* CVS\nsplit? I like Monday releases, since it means most ppl are around for a\nfew days afterwards \"just in case\" :)\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 11 Jul 1999 19:36:39 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 release date" }, { "msg_contents": "> On Sat, 10 Jul 1999, Bruce Momjian wrote:\n> \n> > Marc, do you have a release date set? I remember July 15. I know\n> > Thomas has to regenerate the INSTALL and HISTORY files.\n> > \n> > I am glad you did not split the CVS tree. Tom Lane and I are both\n> > sitting on patches, but it is easier to sit on patches than to\n> > double-patch.\n> \n> Let's say July 19th, which is a Monday, we do the 6.5.1 release *and* CVS\n> split? I like Monday releases, since it means most ppl are around for a\n> few days afterwards \"just in case\" :)\n\nSounds good. Tom Lane is breathing on that CVS tree. I don't know if I\ncan hold him back until then. :-)\n\nShould we do the split now? Tom?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jul 1999 22:23:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1 release date" }, { "msg_contents": "I can't do cvs update for about 2 days:\n\nall I get is repeated message:\n\ncvs server: [03:16:34] waiting for anoncvs's lock in /usr/local/cvsroot/pgsql/src/include/rewrite\n\n\n\tRegards,\n\n\t\tOleg\nOn Sun, 11 Jul 1999, The Hermit Hacker wrote:\n\n> Date: Sun, 11 Jul 1999 19:36:39 -0300 (ADT)\n> From: The Hermit Hacker <[email protected]>\n> To: Bruce Momjian <[email protected]>\n> Cc: PostgreSQL-development <[email protected]>\n> Subject: Re: [HACKERS] 6.5.1 release date\n> \n> On Sat, 10 Jul 1999, Bruce Momjian wrote:\n> \n> > Marc, do you have a release date set? I remember July 15. I know\n> > Thomas has to regenerate the INSTALL and HISTORY files.\n> > \n> > I am glad you did not split the CVS tree. 
Tom Lane and I are both\n> > sitting on patches, but it is easier to sit on patches than to\n> > double-patch.\n> \n> Let's say July 19th, which is a Monday, we do the 6.5.1 release *and* CVS\n> split? I like Monday releases, since it means most ppl are around for a\n> few days afterwards \"just in case\" :)\n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 12 Jul 1999 11:19:28 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 release date" }, { "msg_contents": "\nTry now...\n\nOn Mon, 12 Jul 1999, Oleg Bartunov wrote:\n\n> I can't do cvs update for about 2 days:\n> \n> all I get is a repeated message:\n> \n> cvs server: [03:16:34] waiting for anoncvs's lock in /usr/local/cvsroot/pgsql/src/include/rewrite\n> \n> \n> \tRegards,\n> \n> \t\tOleg\n> On Sun, 11 Jul 1999, The Hermit Hacker wrote:\n> \n> > Date: Sun, 11 Jul 1999 19:36:39 -0300 (ADT)\n> > From: The Hermit Hacker <[email protected]>\n> > To: Bruce Momjian <[email protected]>\n> > Cc: PostgreSQL-development <[email protected]>\n> > Subject: Re: [HACKERS] 6.5.1 release date\n> > \n> > On Sat, 10 Jul 1999, Bruce Momjian wrote:\n> > \n> > > Marc, do you have a release date set? I remember July 15. I know\n> > > Thomas has to regenerate the INSTALL and HISTORY files.\n> > > \n> > > I am glad you did not split the CVS tree. Tom Lane and I are both\n> > > sitting on patches, but it is easier to sit on patches than to\n> > > double-patch.\n> > \n> > Let's say July 19th, which is a Monday, we do the 6.5.1 release *and* CVS\n> > split? I like Monday releases, since it means most ppl are around for a\n> > few days afterwards \"just in case\" :)\n> > \n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> > \n> > \n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 12 Jul 1999 09:02:09 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 release date" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Let's say July 19th, which is a Monday, we do the 6.5.1 release *and* CVS\n>> split? I like Monday releases, since it means most ppl are around for a\n>> few days afterwards \"just in case\" :)\n\n> Should we do the split now? Tom?\n\nI'd be inclined to say the split should happen a few days before the\nrelease ... but I'm not the one doing the work ;-). 
Release on the\n19th sounds fine to me, but I think we could do the split anytime now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jul 1999 10:53:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 6.5.1 release date " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Let's say July 19th, which is a Monday, we do the 6.5.1 release *and* CVS\n> >> split? I like Monday releases, since it means most ppl are around for a\n> >> few days afterwards \"just in case\" :)\n> \n> > Should we do the split now? Tom?\n> \n> I'd be inclined to say the split should happen a few days before the\n> release ... but I'm not the one doing the work ;-). Release on the\n> 19th sounds fine to me, but I think we could do the split anytime now.\n> \n> \t\t\tregards, tom lane\n> \n\nAgreed. I think we are pretty much done addressing user problems from\n6.5.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jul 1999 11:12:31 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 6.5.1 release date" } ]
[ { "msg_contents": "I just began to learn rules with 6.5 and notice:\ntest=> \\dt\nDatabase = test\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | megera | access_log | table |\n | megera | hits | table |\n | megera | junk_qwerty | table |\n +------------------+----------------------------------+----------+\n\ntest=> create rule log_hits as on update to hits do instead insert into hits values ( NEW.msg_id, 1);\nCREATE\ntest=> \\dt\nDatabase = test\n +------------------+----------------------------------+----------+\n | Owner | Relation | Type |\n +------------------+----------------------------------+----------+\n | megera | access_log | table |\n | megera | hits | view? |\n | megera | junk_qwerty | table |\n +------------------+----------------------------------+----------+\n\nTable hits now becomes view ? \n\n\n\tRegards,\n\n\t\tOleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 11 Jul 1999 21:27:22 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "create rule changes table to view ?" }, { "msg_contents": "Can someone comment on this?\n\n> I just began to learn rules with 6.5 and notice:\n> test=> \\dt\n> Database = test\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | megera | access_log | table |\n> | megera | hits | table |\n> | megera | junk_qwerty | table |\n> +------------------+----------------------------------+----------+\n> \n> test=> create rule log_hits as on update to hits do instead insert into hits values ( NEW.msg_id, 1);\n> CREATE\n> test=> \\dt\n> Database = test\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | megera | access_log | table |\n> | megera | hits | view? |\n> | megera | junk_qwerty | table |\n> +------------------+----------------------------------+----------+\n> \n> Table hits now becomes view ? \n> \n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 23 Sep 1999 13:27:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" }, { "msg_contents": "I have noticed and lived with this problem for quite a while.\n\nThere's nothing in pg_class that tells a table from a view, they're both\n\"relations\". Since a view is implemented as in effect an empty table with\non select rules on it, psql thinks every table with a rule on it is a\nview. 
This is just the output, nothing on the table changes.\n\nA fix would be to display both tables and views as \"relation\". As far as I\nknow there is no 100% deterministic way to tell a table from a view. I\nthink one fine day Jan is going to change that but for now we don't have\nto worry about it.\n\nPeter\n\nOn Sep 23, Bruce Momjian mentioned:\n\n> Can someone comment on this?\n> \n> > I just began to learn rules with 6.5 and notice:\n> > test=> \\dt\n> > Database = test\n> > +------------------+----------------------------------+----------+\n> > | Owner | Relation | Type |\n> > +------------------+----------------------------------+----------+\n> > | megera | access_log | table |\n> > | megera | hits | table |\n> > | megera | junk_qwerty | table |\n> > +------------------+----------------------------------+----------+\n> > \n> > test=> create rule log_hits as on update to hits do instead insert into hits values ( NEW.msg_id, 1);\n> > CREATE\n> > test=> \\dt\n> > Database = test\n> > +------------------+----------------------------------+----------+\n> > | Owner | Relation | Type |\n> > +------------------+----------------------------------+----------+\n> > | megera | access_log | table |\n> > | megera | hits | view? |\n> > | megera | junk_qwerty | table |\n> > +------------------+----------------------------------+----------+\n> > \n> > Table hits now becomes view ? \n> > \n> > \n> > \tRegards,\n> > \n> > \t\tOleg\n> > \n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> > \n> > \n> > \n> \n> \n> \n\n-- \nPeter Eisentraut - [email protected]\nhttp://yi.org/peter-e\n\n\n", "msg_date": "Fri, 24 Sep 1999 12:37:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" 
}, { "msg_contents": "\n\nBruce Momjian ha scritto:\n\n> Can someone comment on this?\n\nThis an old question.\npsql suppose that table \"test\" is a view because it checks for pg_class.relhasrules and it prints \"view?\"\nif the value is TRUE and the value is if there's a rule for the table.\nThe only way to distinguish a table from a view is the pg_get_viewdef.\n\nSome time ago I suggested to use pg_get_viewdef('tablename') to check for views\nto print \"view or table\" instead of \"view?\".\nI made a patch to my psql and it now recognize views perfectly and I can display\nonly tables using \\d and/or only views using \\v\n\nComments.\n\n\n>\n>\n> > I just began to learn rules with 6.5 and notice:\n> > test=> \\dt\n> > Database = test\n> > +------------------+----------------------------------+----------+\n> > | Owner | Relation | Type |\n> > +------------------+----------------------------------+----------+\n> > | megera | access_log | table |\n> > | megera | hits | table |\n> > | megera | junk_qwerty | table |\n> > +------------------+----------------------------------+----------+\n> >\n> > test=> create rule log_hits as on update to hits do instead insert into hits values ( NEW.msg_id, 1);\n> > CREATE\n> > test=> \\dt\n> > Database = test\n> > +------------------+----------------------------------+----------+\n> > | Owner | Relation | Type |\n> > +------------------+----------------------------------+----------+\n> > | megera | access_log | table |\n> > | megera | hits | view? |\n> > | megera | junk_qwerty | table |\n> > +------------------+----------------------------------+----------+\n> >\n> > Table hits now becomes view ?\n> >\n> >\n> > Regards,\n> >\n> > Oleg\n> >\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: [email protected], http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n> >\n>\n> --\n> Bruce Momjian | http://www.op.net/~candle\n> [email protected] | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ************\n\n", "msg_date": "Fri, 24 Sep 1999 15:19:52 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" }, { "msg_contents": "Peter Eisentraut wrote:\n\n> A fix would be to display both tables and views as \"relation\". As far as I\n> know there is now 100% deterministic way to tell a table from a view. I\n> think one fine day Jan is going to change that but for now we don't have\n> to worry about it.\n\n There is currently a 100% failsafe way.\n\n Actually, rules ON SELECT are totally restricted to rules\n that are INSTEAD, return a targetlist that's exactly the\n relations (views) schema and there could only be one single-\n action rule on the SELECT event. These checks are performed\n during CREATE RULE.\n\n In short: If there's a rule ON SELECT, then the relation MUST\n BE A VIEW.\n\n The detail psql is doing wrong is that it treats any rule as\n if it is indicating a view. It must look for SELECT rules\n only.\n\n And I'm not planning to take out this restriction again.\n\n\nJan\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. 
#\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Sat, 25 Sep 1999 15:01:17 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" }, { "msg_contents": "> I have noticed and lived with this problem for quite a while.\n> \n> There's nothing in pg_class that tells a table from a view, they're both\n> \"relations\". Since a view is implemented as in effect an empty table with\n> on select rules on it, psql thinks every table with a rule on it is a\n> view. This is just the output, nothing on the table changes.\n> \n> A fix would be to display both tables and views as \"relation\". As far as I\n> know there is no 100% deterministic way to tell a table from a view. I\n> think one fine day Jan is going to change that but for now we don't have\n> to worry about it.\n\nOK. Nothing added to TODO list.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 26 Sep 1999 20:08:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" } ]
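Jan's criterion translates into a catalog query. The sketch below lists only the relations that are views in his sense; it assumes pg_rewrite.ev_type stores '1' for ON SELECT rules, which should be verified against the catalog headers before being relied on:

select c.relname
from pg_class c
where c.relkind = 'r'
  and exists (select 1 from pg_rewrite r
              where r.ev_class = c.oid and r.ev_type = '1');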
[ { "msg_contents": "I need accumulated hits statistics from my web appl. and it was looks easy \nto implement. \n\nQuick scenario:\n\n1. create table hits (msg_id int4 not null primary key, hits int4);\n2. in cgi script\n update hits set hits=hits+1 where msg_id = $msg_id;\n\nBut this will not works if there are no row with msg_id,\nso I need to insert row before. I could do this in application\nbut I suspect it could be done with rules.\n\nbefore I dig into rules programming I'd like to know if somebody\nhas already have similar rule or is there another way to do this\nin postgres. I'd prefer fast solution.\n\n\tRegards,\n\n\t\tOleg\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n", "msg_date": "Sun, 11 Jul 1999 21:46:04 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "accumulated statistics" }, { "msg_contents": "At 21:46 11/07/99 +0400, you wrote:\n>I need accumulated hits statistics from my web appl. and it was looks easy \n>to implement. \n>\n>Quick scenario:\n>\n>1. create table hits (msg_id int4 not null primary key, hits int4);\n>2. in cgi script\n> update hits set hits=hits+1 where msg_id = $msg_id;\n>\n>But this will not works if there are no row with msg_id,\n>so I need to insert row before. I could do this in application\n>but I suspect it could be done with rules.\n>\n>before I dig into rules programming I'd like to know if somebody\n>has already have similar rule or is there another way to do this\n>in postgres. I'd prefer fast solution.\n>\n\n\nI've done exactly this kind 'update or insert' logic using plpgsql (I presume it could be done with rules, but there may be a problem because if there is no row, how will a rule get fired?).\n\n------------------\nCREATE FUNCTION \"accumulate_something\" (int4,int4 ) RETURNS int4 AS '\nDeclare\n keyval Alias For $2;\n delta ALIAS For $3;\n cnt int4;\nBegin\n Select count into cnt from summary_table where keyfield = keyval;\n if Not Found then\n cnt := delta;\n If cnt <> 0 Then -- Don't include zero values\n Insert Into summary_table (keyfield,count) values (keyval, cnt);\n End If;\n else\n cnt := cnt + delta;\n If cnt <> 0 Then\n Update summary_table set count = cnt where keyfield = keyval;\n Else\n Delete From summary_table where keyfield = keyval;\n End If;\n End If;\n return cnt;\nEnd;' LANGUAGE 'plpgsql';\n-----------------------\n\nRather than doing an update, I just call the function from SQL. You could also do it with a dummy insert into a table and use a 'before insert' trigger to prevent the insert, but cause an update on another table.\n\nThis is far less nice than it needs to be. I've sent some patches to Jan Weick for plpgsql that allow access to 'SPI_PROCESSED' which tells you how many rows were affected by the last statement. 
When (and if) these patches get applied, you will be able to do the following:\n\n------------------\nCREATE FUNCTION \"accumulate_something\" (int4,int4) RETURNS int4 AS '\nDeclare\n keyval Alias For $1;\n delta ALIAS For $2;\n rows\t int4;\n\nBegin\n Update summary_table set count = count + delta where keyfield = keyval;\n\n Get Diagnostics Select PROCESSED Into rows;\n\n If rows = 0 then\n Insert Into summary_table (keyfield,count) values (keyval, delta);\n End If;\n\n Return delta;\n\nEnd;' LANGUAGE 'plpgsql';\n-----------------------\n\nThe first function has the advantage that zero values are deleted, which for my application is probably a good thing. But for web page counters, it is probably unnecessary.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 12 Jul 1999 12:36:24 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] accumulated statistics" } ]
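A usage sketch for Philip's first function; the table definition is an assumption inferred from the function body, not something given in the original mail:

create table summary_table (keyfield int4 primary key, count int4);
select accumulate_something(42, 1);  -- first call inserts (42, 1)
select accumulate_something(42, 1);  -- second call updates the count to 2
select accumulate_something(42, -2); -- count reaches zero, so the row is deleted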
[ { "msg_contents": "Hi Thomas,\n\nI observe that gram.y's <res_target_list> nonterminal is only used in\nUPDATE statements. It accepts a lot too much for that purpose; the\nfollowing sorts of things get by the grammar, only to fail later on:\n\n\tUPDATE table SET *;\n\tUPDATE table SET table.column;\n\tUPDATE table SET table.*;\n\nNone of these are valid according to SQL92 or have any visible use.\n\nI propose renaming res_target_list and res_target_el to\nupdate_target_list and update_target_el, and removing the alternatives\nthat aren't actually valid for UPDATE.\n\nHaving done that, we might as well rename res_target_list2 and\nres_target_el2 to something clearer (I'm thinking just target_list and\ntarget_el, but if you want to keep the \"res_\" I won't object).\n\nComments, objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Jul 1999 15:37:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Overgenerous parsing of UPDATE targetlist" }, { "msg_contents": "> Having done that, we might as well rename res_target_list2 and\n> res_target_el2 to something clearer (I'm thinking just target_list and\n> target_el, but if you want to keep the \"res_\" I won't object).\n\nAnything would be clearer than what is there now. Please give them nice\nnames too, if possible.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jul 1999 22:11:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Overgenerous parsing of UPDATE targetlist" }, { "msg_contents": "> I propose renaming res_target_list and res_target_el to\n> update_target_list and update_target_el, and removing the alternatives\n> that aren't actually valid for UPDATE.\n> Having done that, we might as well rename res_target_list2 and\n> res_target_el2 to something clearer (I'm thinking just target_list and\n> target_el, but if you want to keep the \"res_\" I won't object).\n\nSounds good. I never poked very far into the usage of these clauses,\nbut istm that your suggestions would make things clearer.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 15:20:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Overgenerous parsing of UPDATE targetlist" } ]
[ { "msg_contents": "I spent some time today trying to persuade the grammar to accept\nunadorned array subscripting, ie\n\tSELECT arraycolname[2] FROM table;\nrather than what you have to do in 6.5:\n\tSELECT table.arraycolname[2] FROM table;\n\nIt's easy enough to add \"opt_indirection\" to the rules that use ColId,\nbut I find one ends up with a bunch of reduce/reduce conflicts.\n\nThe basic problem is that at the start of an expression, the input\n\tident [\ncould be the beginning of a Typename with subscripts, or it could be\na column name with subscripts. The way the grammar is constructed,\nthe parser has to reduce the ident to either ColId or a typename\nnonterminal before it can shift the '[' ... and there's no way to\ndecide which.\n\nNow how did Typename get into the picture? There is one rule that\nis the culprit, namely \"AexprConst ::= Typename Sconst\". Without\nthat rule, a type name never appears at the start of an expression\nso there is no conflict.\n\nI can see three ways to proceed:\n\n1. Forget about making arrays easier to use.\n\n2. Remove \"AexprConst ::= Typename Sconst\" from the grammar. I do\nnot believe this rule is in SQL92. However, we've recommended\nconstructions like \"default text 'now'\" often enough that we might\nnot be able to get away with that.\n\n3. Simplify the AexprConst rule to only allow a subset of Typename\n--- it looks like forbidding array types in this context is enough.\n(You could still write a cast using :: or AS, of course, instead of\n\"int4[3] '{1,2,3}'\". The latter has never worked anyway.)\n\nI'm leaning to choice #3, but I wonder if anyone has a better idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 11 Jul 1999 18:06:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Arrays versus 'type constant' syntax" }, { "msg_contents": "> I spent some time today trying to persuade the grammar to accept\n> unadorned array subscripting, ie\n> \tSELECT arraycolname[2] FROM table;\n> rather than what you have to do in 6.5:\n> \tSELECT table.arraycolname[2] FROM table;\n> \n> It's easy enough to add \"opt_indirection\" to the rules that use ColId,\n> but I find one ends up with a bunch of reduce/reduce conflicts.\n\nYou know, that has been on the TODO list for a long time, so I should\nhave guessed it was some tricky problem.\n\n> The basic problem is that at the start of an expression, the input\n> \tident [\n> could be the beginning of a Typename with subscripts, or it could be\n> a column name with subscripts. The way the grammar is constructed,\n> the parser has to reduce the ident to either ColId or a typename\n> nonterminal before it can shift the '[' ... and there's no way to\n> decide which.\n\nThis reminds me of C grammar, where the scanner has to be able to ask\nthe grammar if a token is a type or not, because typedef can create its\nown types. This is why C grammar/scanning is not totally simple. We\nhave avoided that complexity so far.\n\n> Now how did Typename get into the picture? There is one rule that\n> is the culprit, namely \"AexprConst ::= Typename Sconst\". Without\n> that rule, a type name never appears at the start of an expression\n> so there is no conflict.\n\nThat is quite interesting.\n\n> I can see three ways to proceed:\n> \n> 1. Forget about making arrays easier to use.\n> \n> 2. Remove \"AexprConst ::= Typename Sconst\" from the grammar. I do\n> not believe this rule is in SQL92. 
However, we've recommended\n> constructions like \"default text 'now'\" often enough that we might\n> not be able to get away with that.\n> \n> 3. Simplify the AexprConst rule to only allow a subset of Typename\n> --- it looks like forbidding array types in this context is enough.\n> (You could still write a cast using :: or AS, of course, instead of\n> \"int4[3] '{1,2,3}'\". The latter has never worked anyway.)\n> \n> I'm leaning to choice #3, but I wonder if anyone has a better idea.\n\nYes, if it is easy, #3 sounds good. This is a very rarely used area of\nthe grammar, so any restriction on Arrays and Casting will probably\nnever be hit by a user, though there are so many users, I am sure\nsomeone will find it soon enough.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 11 Jul 1999 22:22:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Arrays versus 'type constant' syntax" }, { "msg_contents": "> I can see three ways to proceed:\n> 1. Forget about making arrays easier to use.\n> 2. Remove \"AexprConst ::= Typename Sconst\" from the grammar. I do\n> not believe this rule is in SQL92. However, we've recommended\n> constructions like \"default text 'now'\" often enough that we might\n> not be able to get away with that.\n\nSorry, this *is* SQL92 syntax. The older Postgres syntax using\n\"::typename\" is also supported, but is not standard anything, so I've\nbeen trying to move examples, etc. to the standard syntax when I can.\n\n> 3. Simplify the AexprConst rule to only allow a subset of Typename\n> --- it looks like forbidding array types in this context is enough.\n> (You could still write a cast using :: or AS, of course, instead of\n> \"int4[3] '{1,2,3}'\". The latter has never worked anyway.)\n> I'm leaning to choice #3, but I wonder if anyone has a better idea.\n\nI don't have a strong opinion about what #3 would introduce as far as\nfuture constraints.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 15:32:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays versus 'type constant' syntax" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> 2. Remove \"AexprConst ::= Typename Sconst\" from the grammar. I do\n>> not believe this rule is in SQL92.\n\n> Sorry, this *is* SQL92 syntax.\n\nI've just grepped the SQL92 spec in some detail, and I see noplace\nthat allows \"typename stringconstant\". \"::\" is indeed not standard,\nbut the only type conversion syntax I see in the spec is\n\tCAST (value AS type)\n\nIf I'm missing something, please cite chapter and verse.\n\n>> 3. Simplify the AexprConst rule to only allow a subset of Typename\n>> --- it looks like forbidding array types in this context is enough.\n>> (You could still write a cast using :: or AS, of course, instead of\n>> \"int4[3] '{1,2,3}'\". The latter has never worked anyway.)\n>> I'm leaning to choice #3, but I wonder if anyone has a better idea.\n\n> I don't have a strong opinion about what #3 would introduce as far as\n> future constraints.\n\nIf \"typename stringconstant\" actually is standard then we have a\nproblem, because I would not like to forbid array types in a standard\nconstruct. 
But the grammar is not LALR(1) in the presence of array\ntypes, so we may not have much choice...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 11:58:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Arrays versus 'type constant' syntax " }, { "msg_contents": "> >> 2. Remove \"AexprConst ::= Typename Sconst\" from the grammar. I do\n> >> not believe this rule is in SQL92.\n> > Sorry, this *is* SQL92 syntax.\n> I've just grepped the SQL92 spec in some detail, and I see noplace\n> that allows \"typename stringconstant\". \"::\" is indeed not standard,\n> but the only type conversion syntax I see in the spec is\n> CAST (value AS type)\n> If I'm missing something, please cite chapter and verse.\n\nWell, ahem, er...\n\nIt isn't an explicit general construct in SQL92, since there are only\na few data types defined in the language, and since type extensibility\nis not supported.\n\nHowever, the language does define syntax for specifying date/time\nliterals (the only string-like literal which is not a string type) and\nthat would seem to suggest the general solution. \n\nAllowed in SQL92 (according to my 2 reference books, and I may have\nmissed more info):\n\n'Bastille Day' -- string literal\nDATE '7/14/1999' -- date literal\nTIMESTAMP '7/14/1999 09:47' -- date/time literal\nTIME '09:47' -- time literal\n\nSQL3 should have more to say on the subject, and does, but I've got\nold versions of draft docs and have (so far) only found brief mention\nof ADTs etc. Perhaps they intend the CAST construct to cover this, but\nistm that it isn't a natural extension of the older forms mentioned\nabove.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 17:17:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Arrays versus 'type constant' syntax" }, { "msg_contents": "btw, in a different context the \"type string\" form is allowed since\n\n _charset 'literal'\n\nspecifies the character set for a literal string; the leading\nunderscore is required by SQL92 in this context so isn't exactly\nequivalent to the general case Postgres currently allows.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 20:19:57 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Arrays versus 'type constant' syntax" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> Well, ahem, er...\n> It isn't an explicit general construct in SQL92, since there are only\n> a few data types defined in the language, and since type extensibility\n> is not supported.\n> However, the language does define syntax for specifying date/time\n> literals (the only string-like literal which is not a string type) and\n> that would seem to suggest the general solution. \n\nHmm. OK, then, we're stuck with a tradeoff that (fortunately) only\naffects arrays. Is it better to force subscripted column names to be\nfully qualified \"table.column[subscripts]\" (the current situation),\nor to allow bare column names to be subscripted at the cost of requiring\ncasts from string constants to array types to use the long-winded CAST\nnotation (or nonstandard :: notation)?\n\nI would guess that the cast issue comes up *far* less frequently than\nsubscripting, so we'd be better off changing the behavior. 
But the\nfloor is open for discussion.\n\nI have this change implemented and tested here, btw, but I won't check\nit in until I see if there are objections...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 17:58:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Arrays versus 'type constant' syntax " } ]
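To make the tradeoff concrete, here is a sketch of the syntax forms under discussion (the table and column names are illustrative, not from the thread):

    -- the SQL92-style "typename literal" form that collides with
    -- array subscripting in the grammar:
    SELECT DATE '1999-07-14';

    -- the two cast spellings that keep working for array types:
    SELECT CAST ('{1,2,3}' AS int4[]);
    SELECT '{1,2,3}'::int4[];

    -- and what choice #3 buys: subscripting a bare column name
    -- instead of requiring the table-qualified form:
    CREATE TABLE arrtest (a int4[]);
    SELECT a[1] FROM arrtest;      -- previously had to be arrtest.a[1]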
[ { "msg_contents": "Wow... I actully know the answer to this question :)\n\npsql declares the the type to be view? if the relkind is a relation and the \nrelhasrules = true in pg_class for that entry. I will pull the latest source \nand see if I can come up with a better way for determining the type tomorrow, if \nsomeone else doesn't beat me to it :)\n\n-Ryan\n\n\n> I just began to learn rules with 6.5 and notice:\n> test=> \\dt\n> Database = test\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | megera | access_log | table |\n> | megera | hits | table |\n> | megera | junk_qwerty | table |\n> +------------------+----------------------------------+----------+\n> \n> test=> create rule log_hits as on update to hits do instead insert into hits \nvalues ( NEW.msg_id, 1);\n> CREATE\n> test=> \\dt\n> Database = test\n> +------------------+----------------------------------+----------+\n> | Owner | Relation | Type |\n> +------------------+----------------------------------+----------+\n> | megera | access_log | table |\n> | megera | hits | view? |\n> | megera | junk_qwerty | table |\n> +------------------+----------------------------------+----------+\n> \n> Table hits now becomes view ? \n> \n> \n> \tRegards,\n> \n> \t\tOleg\n> \n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n", "msg_date": "Mon, 12 Jul 1999 00:39:00 -0600 (MDT)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" }, { "msg_contents": "Ryan Bradetich <[email protected]> writes:\n> psql declares the the type to be view? if the relkind is a relation\n> and the relhasrules = true in pg_class for that entry. I will pull\n> the latest source and see if I can come up with a better way for\n> determining the type tomorrow, if someone else doesn't beat me to it\n\nThe way Jan explained it to me, a view *is* a table that happens to\nhave an \"on select do instead\" rule attached to it. If the table\nhas data in it (which it normally wouldn't) you can't see that data\nanyway because of the select rule.\n\nThis is another example like SERIAL columns, UNIQUE columns, etc, where\nwe are not really leaving enough information in the system tables to\nallow accurate reconstruction of what the user originally said. Was\nit a CREATE VIEW, or a CREATE TABLE and manual attachment of a rule?\nNo way to tell. In one sense it doesn't matter a whole lot, but for\npsql displays and pg_dump it would be nice to know what happened.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jul 1999 11:00:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ? " }, { "msg_contents": "Tom Lane wrote:\n\n>\n> Ryan Bradetich <[email protected]> writes:\n> > psql declares the the type to be view? if the relkind is a relation\n> > and the relhasrules = true in pg_class for that entry. 
I will pull\n> > the latest source and see if I can come up with a better way for\n> > determining the type tomorrow, if someone else doesn't beat me to it\n>\n> The way Jan explained it to me, a view *is* a table that happens to\n> have an \"on select do instead\" rule attached to it. If the table\n> has data in it (which it normally wouldn't) you can't see that data\n> anyway because of the select rule.\n\n Right\n\n>\n> This is another example like SERIAL columns, UNIQUE columns, etc, where\n> we are not really leaving enough information in the system tables to\n> allow accurate reconstruction of what the user originally said. Was\n> it a CREATE VIEW, or a CREATE TABLE and manual attachment of a rule?\n> No way to tell. In one sense it doesn't matter a whole lot, but for\n> psql displays and pg_dump it would be nice to know what happened.\n\n Oh - but for VIEW's we leave enough information in the system\n tables. Rules on event SELECT actually\n\n 1. must be INSTEAD\n\n 2. have exactly one action. This action must be another\n SELECT which exactly produces a targetlist where all\n attributes are in the order and of the types of the\n table's schema\n\n 3. must be named \"_RET<tablename>\"\n\n 4. must be the only rule on event SELECT.\n\n These restrictions clearly tell that if a table has an ON\n SELECT rule, it IS A VIEW! There is absolutely no other\n possibility.\n\n Stonebraker originally planned to have other rules on the\n SELECT case too, namely attribute rules which only rewrite a\n single attribute of a table, and rules performing other\n actions than a SELECT if someone scans that table. But AFAIK\n these plans never materialized.\n\n The problem on SELECT rules is that they have totally\n different semantics than any other rules in that they must\n get applied not only on SELECT. Instead we also rewrite\n things like\n\n INSERT ... SELECT\n\n and\n\n DELETE ... WHERE x = view.y AND view.z = ...\n\n so views become usable in all kinds of statements.\n\n When fixing the rewrite system for v6.4 I decided to simplify\n the rewriting of SELECT rules by restricting them totally to\n views. After that, I simply took out all that screwed up\n code dealing with attribute rewriting and sent it down to the\n bit recycling.\n\n I don't plan to turn this wheel back. And if someone else\n ever succeeds in doing so, we'll have another \"ruleguru\" :-)\n\n So if you find an entry in pg_rewrite with ev_type=1 and\n ev_class=<my_tables_oid>, then my_table is a view - end of\n story.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Mon, 12 Jul 1999 18:10:52 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" }, { "msg_contents": "On Mon, 12 Jul 1999, Tom Lane wrote:\n\n> Ryan Bradetich <[email protected]> writes:\n> > psql declares the type to be view? if the relkind is a relation\n> > and the relhasrules = true in pg_class for that entry. I will pull\n> > the latest source and see if I can come up with a better way for\n> > determining the type tomorrow, if someone else doesn't beat me to it\n> \n> The way Jan explained it to me, a view *is* a table that happens to\n> have an \"on select do instead\" rule attached to it. 
If the table\n> has data in it (which it normally wouldn't) you can't see that data\n> anyway because of the select rule.\n\nDoes anyone else see a problem with this? This sort of approach almost\nprevents views with distinct, union, order by, etc. from ever being\nimplemented.\n\nI don't know what other people use their views for but I use them to store\ncomplicated queries. So, in essence it would suffice to store the text of\nthe query with a view rather than faking tables for it, thus confusing all\nsorts of utility programs.\n\nThen again, I'd be interested to know what the developers' idea of normal\nusage of a view is.\n\n\n-- \nPeter Eisentraut\nPathWay Computing, Inc.\n\n", "msg_date": "Mon, 12 Jul 1999 14:49:05 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ? " }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> The way Jan explained it to me, a view *is* a table that happens to\n>> have an \"on select do instead\" rule attached to it. If the table\n>> has data in it (which it normally wouldn't) you can't see that data\n>> anyway because of the select rule.\n\n> Does anyone else see a problem with this? This sort of approach almost\n> prevents views with distinct, union, order by, etc. from ever being\n> implemented.\n\nWhat makes you think that? We do have work to do before some of those\nthings will work, but I don't think it has anything to do with whether\nthere is an empty table underlying a view...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jul 1999 17:24:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ? " }, { "msg_contents": "Peter Eisentraut wrote:\n\n>\n> On Mon, 12 Jul 1999, Tom Lane wrote:\n>\n> > Ryan Bradetich <[email protected]> writes:\n> > > psql declares the type to be view? if the relkind is a relation\n> > > and the relhasrules = true in pg_class for that entry. I will pull\n> > > the latest source and see if I can come up with a better way for\n> > > determining the type tomorrow, if someone else doesn't beat me to it\n> >\n> > The way Jan explained it to me, a view *is* a table that happens to\n> > have an \"on select do instead\" rule attached to it. If the table\n> > has data in it (which it normally wouldn't) you can't see that data\n> > anyway because of the select rule.\n>\n> Does anyone else see a problem with this? This sort of approach almost\n> prevents views with distinct, union, order by, etc. from ever being\n> implemented.\n\nPardon - YES and NO!\n\n After all I think (even if it was a really great job) that\n Stonebraker was wrong. Views cannot be completely implemented\n by rules. That would make it impossibly complicated for a\n query planner.\n\n But I'm a YESBUTTER :-)\n\n But it really was a great job! In the actual version of\n PostgreSQL you can define a view that's a join of 3 tables\n and then select from that view by joining it with another 2\n tables. The result will be a querytree that's exactly what\n you would have to type if there wouldn't be any view's at all\n - a join over 5 tables. That (however complicated) querytree\n is handed to the optimizer.\n\n It is the optimizer's job to decide the best access path for\n a 5 table join.\n\n YESBUT!\n\n Stonebraker was wrong - and must have been because today we\n want to get SQL92 compliant - and that spec didn't exist\n when he designed our rule system. 
The rule system is\n something we got from the good old v4.2 Postgres. That\n wasn't an SQL database, the querylanguage was POSTQUEL. So it\n isn't surprising that the original rule system spec's don't\n meet today's SQL needs.\n\n For thing's like aggregates, distinct/grouping and the like,\n we need to take a step backward and really do some kind of\n view materialization (create a real execution path for the\n view's definition). But don't force that to be done whenever\n a view is used - that doesn't make things better.\n\n>\n> I don't know what other people use their views for but I use them to store\n> complicated queries. So, in essence it would suffice to store the text of\n> the query with a view rather than faking tables for it, thus confusing all\n> sorts of utility programs.\n>\n> Then again, I'd be interested to know what the developers' idea of normal\n> usage of a view is.\n\n It doesn't count what 95% of our users use view's for. A view\n is a relation like a table, and if appearing in the\n rangetable, it must be treated like a relation.\n\n Well - let's only store the \"QUERY TEXT\" of a view:\n\n CREATE VIEW v1 AS SELECT X.a, X.b, Y.b AS c\n FROM tab1 X, tab2 Y\n WHERE X.a = Y.a;\n\n Simple enough - O.K.?\n\n Now we execute some simple queries:\n\n SELECT * FROM v1;\n\n SELECT Z.a, V.b, V.c FROM tab3 Z, v1 V\n WHERE Z.a = V.a;\n\n SELECT Z.a, SUM(V.c) FROM tab3 Z, v1 V\n WHERE Z.a = V.a;\n\n INSERT INTO tab4 SELECT Z.a, SUM(V.c) FROM tab3 Z, v1 V\n WHERE Z.a = V.a\n AND V.b > 2;\n\n DELETE FROM tab5 WHERE aa = v1.a AND bb < v1.c;\n\n Simple enough? All valid SQL statements! Could you now simply\n explain HOW to build the correct final statements by\n incorporating the stored \"QUERY TEXT\" into the above\n statements?\n\n I really mean HOW - not what the equivalent statements, hand\n translated, would look like (I've read querytrees like\n printed in debug level 3 several night's until I understood\n how rules should work - so I know how to rewrite the above by\n hand). The way I know to express this in C is the rule\n system you find in rewrite_handler.c and rewrite_manip.c\n (mostly). If you know an easier way, let me know.\n\n PLEASE DON'T READ THIS REPLY AS A SORT OF A FLAME. I KNOW\n THAT IT IS HARD TO UNDERSTAND THE RULE SYSTEM - I HAD TO TAKE\n THAT LEARNING CURVE MYSELF. AFTER ALL I STILL MIGHT HAVE\n MISSED SOMETHING - THUS I THINK WE STILL NEED MATERIALIZATION\n OF VIEWS IN SOME CASES (yesbut only in few cases - not in all\n view cases).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 13 Jul 1999 03:25:27 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" }, { "msg_contents": "> Stonebraker was wrong - and must have been because today we\n> want to get SQL92 compliant - and that spec didn't exist\n> when he designed our rule system. 
So it\n> isn't surprising that the original rule system spec's don't\n> meet today's SQL needs.\n> \n> For thing's like aggregates, distinct/grouping and the like,\n> we need to take a step backward and really do some kind of\n> view materialization (create a real execution path for the\n> view's definition). But don't force that to be done whenever\n> a view is used - that doesn't make things better.\n\nThanks. Now I understand why aggregates cause problems with rules.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Jul 1999 21:35:40 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" }, { "msg_contents": "Thus spake Peter Eisentraut\n> I don't know what other people use their views for but I use them to store\n> complicated queries. So, in essence it would suffice to store the text of\n> the query with a view rather than faking tables for it, thus confusing all\n> sorts of utility programs.\n> \n> Then again, I'd be interested to know what the developers' idea of normal\n> usage of a view is.\n\nI use it for access control. Remember, in PostgreSQL we can grant and\nrevoke access to tables independent of the table it is a view of. I use\nit to allow wider access to a subset of the fields in a table.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 13 Jul 1999 08:58:17 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" }, { "msg_contents": "On Tue, 13 Jul 1999, Jan Wieck wrote:\n\n> > > The way Jan explained it to me, a view *is* a table that happens to\n> > > have an \"on select do instead\" rule attached to it. If the table\n> > > has data in it (which it normally wouldn't) you can't see that data\n> > > anyway because of the select rule.\n> >\n> > Does anyone else see a problem with this? This sort of approach almost\n> > prevents views with distinct, union, order by, etc. from ever being\n> > implemented.\n> \n> Pardon - YES and NO!\n> \n> After all I think (even if it was a really great job) that\n> Stonebraker was wrong. Views cannot be completely implemented\n> by rules. That would make it impossibly complicated for a\n> query planner.\n\nThat was my point. Sure some of these things above could be done, but it's\na dead end of sorts.\n\n> > I don't know what other people use their views for but I use them to store\n> > complicated queries. So, in essence it would suffice to store the text of\n> > the query with a view rather than faking tables for it, thus confusing all\n> > sorts of utility programs.\n> >\n> > Then again, I'd be interested to know what the developers' idea of normal\n> > usage of a view is.\n> \n> It doesn't count what 95% of our users use view's for. A view\n\nUm, it should though, shouldn't it?\n\n> Well - let's only store the \"QUERY TEXT\" of a view:\n\n> Now we execute some simple queries:\n\n> Simple enough? All valid SQL statements! 
Could you now simply\n> explain HOW to build the correct final statements by\n> incorporating the stored \"QUERY TEXT\" into the above\n> statements?\n\nWell, this would be trivial if you'd allow subselects in the FROM clause.\nBut now I am beginning to realize that this is the very reason those\nsubselects in the from clause aren't possible. Perhaps we ought to think\nup some math magic there. But I can't think of anything short of a\ntemporary table of sorts right now.\n\nAnyway, you guys are doing a great job. If I had some more time I'd dig\nmyself into this business and help out. Until that day, I'm sure you have\nyour reasons for things to be the way they are, I'm just trying to point\nout ideas for improvements.\n\n\n-- \nPeter Eisentraut\nPathWay Computing, Inc.\n\n", "msg_date": "Tue, 13 Jul 1999 14:11:13 -0400 (EDT)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" } ]
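The table-plus-rule equivalence described in this thread can be written out directly; a minimal sketch (table and view names illustrative):

    -- these two should leave the catalogs in the same state:
    CREATE VIEW v AS SELECT a, b FROM t;

    CREATE TABLE v (a int4, b int4);
    CREATE RULE "_RETv" AS ON SELECT TO v
        DO INSTEAD SELECT a, b FROM t;

    -- and, per Jan, an ON SELECT entry in pg_rewrite is what marks
    -- a view, so relations can be classified by probing for one:
    SELECT c.relname
    FROM pg_class c, pg_rewrite r
    WHERE r.ev_class = c.oid AND r.ev_type = '1';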
[ { "msg_contents": "Today I run stress test on my Web server. Test is rather complex\nmodperl script which does some manipulations with postgres (6.5)\ndatabase. In error_log I found:\n\nNOTICE: LockRelease: locktable lookup failed, no lock\nNOTICE: LockRelease: locktable lookup failed, no lock\n\nI'd like to identify which process wrote there message.\nAre these messages come from postgres backend ?\n\n\tRegards,\n\t\tOleg\n\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 12 Jul 1999 11:12:13 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "NOTICE: LockRelease: locktable lookup failed, no lock" } ]
[ { "msg_contents": "Tom,\n\nThe little bit of investigation I've done leads me to belive I can determine the \ndifference between a table and a view, because they are correctly seperated in \npg_views and pg_tables. I'll do some more research and see if I can actually do \nthis, or if you and Jan are right :)\n\nThanks,\n- Ryan\n\n\n> Ryan Bradetich <[email protected]> writes:\n> > psql declares the the type to be view? if the relkind is a relation\n> > and the relhasrules = true in pg_class for that entry. I will pull\n> > the latest source and see if I can come up with a better way for\n> > determining the type tomorrow, if someone else doesn't beat me to it\n> \n> The way Jan explained it to me, a view *is* a table that happens to\n> have an \"on select do instead\" rule attached to it. If the table\n> has data in it (which it normally wouldn't) you can't see that data\n> anyway because of the select rule.\n> \n> This is another example like SERIAL columns, UNIQUE columns, etc, where\n> we are not really leaving enough information in the system tables to\n> allow accurate reconstruction of what the user originally said. Was\n> it a CREATE VIEW, or a CREATE TABLE and manual attachment of a rule?\n> No way to tell. In one sense it doesn't matter a whole lot, but for\n> psql displays and pg_dump it would be nice to know what happened.\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Jul 1999 09:21:33 -0600 (MDT)", "msg_from": "Ryan Bradetich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] create rule changes table to view ?" } ]
[ { "msg_contents": "This particular query from the regression test reltime.sql fails\nalso fails for >=\nBUT not for < of <=\n\nWhere is the Comparison made, so I can trace backwards how it got there.\n\nAny Hints would be appreaciated.\ngat\n\n\nQUERY: SELECT '' AS five, RELTIME_TBL.*\n WHERE RELTIME_TBL.f1 > '@ 3 seconds ago'::reltime;\nfive|f1\n----+--\n(0 rows)", "msg_date": "Mon, 12 Jul 1999 15:27:29 -0400", "msg_from": "Uncle George <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Alpha Port On RH6.0" } ]
[ { "msg_contents": "Andreas Zeugswetter wrote:\n\n>\n>\n> > > For thing's like aggregates, distinct/grouping and the like,\n> > > we need to take a step backward and really do some kind of\n> > > view materialization (create a real execution path for the\n> > > view's definition). But don't force that to be done whenever\n> > > a view is used - that doesn't make things better.\n> >\n> > Thanks. Now I understand why aggregates cause problems with rules.\n> >\n> Couldn't all views be expressed with the rule system, if we had subselects\n> in the\n> from clause ? This would be useful for other SQL too. RDB has this e.g.\n\nI hope so,\n\n because the FROM clause is what I (thinking in querytrees)\n usually call the rangetable. After parsing, all relations\n (tables and views - the parser doesn't care) the user\n mentioned in his query appear in the querytree as RTE's\n (Range Table Entries).\n\n On a first thought it looks simple to just add another Node\n pointer to the RTE structure and if a view has something that\n requires materialization just throw it's querytree from\n pg_rewrite into there. The planner then has to produce the\n entire subtree for that as a left- or righttree for the\n \"relation\".\n\n The problem is just to decide which restrictions from the\n WHERE clause could be taken down into this subselecting RTE\n to reduce the amount of data the view materializes instead of\n filtering them out later.\n\n Example:\n\n CREATE VIEW v1 AS SELECT a, sum(b) FROM t1 GROUP BY a;\n\n SELECT count(*) FROM v1 WHERE a < 10;\n\n Let's assume now that t1 has a million rows but only a few\n hundred that match a < 10. If we now materialize the view in\n a subplan without telling a < 10, a seqscan over the entire\n table plus sorting/grouping and summing would happen instead\n of fetching the few tuples by index and then sort/group/sum.\n\n The opposite:\n\n CREATE VIEW v2 AS SELECT a, sum(c) FROM t2 GROUP BY a;\n\n SELECT v1.a FROM v1, v2 WHERE v1.a = v2.a AND v1.b = v2.c;\n\n This time there is no chance - we ask for comparision of two\n aggregates of different views. The WHERE clause here can only\n be evaluated after both views have completely been\n materialized.\n\n> I do not beleive, that Stonebraker had an incomplete Rule System in mind.\n\n At least his concept is expandable to meet our needs. An\n expandable concept is never really incomplete as long as it\n never leaves the drawing board :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Tue, 13 Jul 1999 11:56:41 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": true, "msg_subject": "Re: AW: [HACKERS] create rule changes table to view ?" } ]
[ { "msg_contents": "\nTagged with a release tag of 'REL6_5' ...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 13 Jul 1999 09:23:15 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL v6.5 - Tagged " }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Tagged with a release tag of 'REL6_5' ...\n\nEr, don't you need to make a branch, not just tags?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jul 1999 11:33:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v6.5 - Tagged " }, { "msg_contents": "On Tue, 13 Jul 1999, Tom Lane wrote:\n\n> The Hermit Hacker <[email protected]> writes:\n> > Tagged with a release tag of 'REL6_5' ...\n> \n> Er, don't you need to make a branch, not just tags?\n\nYa know something, after all these years, I'm still not 100% of the\ndifference between the two :(\n\nI just 'branched' the tree off of the REL6_5 tag...personally, unless you\nget into multiple branches off of a tree at one tag point, I don't *think*\nthat the branch is required, but, hell, better safe then sorry...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 13 Jul 1999 15:11:21 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostgreSQL v6.5 - Tagged " }, { "msg_contents": "> On Tue, 13 Jul 1999, Tom Lane wrote:\n> \n> > The Hermit Hacker <[email protected]> writes:\n> > > Tagged with a release tag of 'REL6_5' ...\n> > \n> > Er, don't you need to make a branch, not just tags?\n> \n> Ya know something, after all these years, I'm still not 100% of the\n> difference between the two :(\n> \n> I just 'branched' the tree off of the REL6_5 tag...personally, unless you\n> get into multiple branches off of a tree at one tag point, I don't *think*\n> that the branch is required, but, hell, better safe then sorry...\n\nOK, I am snagging the REL6_5 branch now.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jul 1999 14:43:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v6.5 - Tagged" }, { "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> On Tue, 13 Jul 1999, Tom Lane wrote:\n>> Er, don't you need to make a branch, not just tags?\n\n> Ya know something, after all these years, I'm still not 100% of the\n> difference between the two :(\n\nI'm not either ... 
I just saw that cvs log was reporting it differently\nthan it did for the REL6_4 stuff.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Jul 1999 15:26:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v6.5 - Tagged " }, { "msg_contents": "On Tue, Jul 13, 1999 at 03:26:22PM -0400, Tom Lane wrote:\n> The Hermit Hacker <[email protected]> writes:\n> > On Tue, 13 Jul 1999, Tom Lane wrote:\n> >> Er, don't you need to make a branch, not just tags?\n> \n> > Ya know something, after all these years, I'm still not 100% sure of the\n> > difference between the two :(\n> \n> I'm not either ... I just saw that cvs log was reporting it differently\n> than it did for the REL6_4 stuff.\n\nMy understanding is that a tag is just a convenient way to cluster a\nset of file versions so they can be checked out together. In practice,\nit's usually synonymous with using a particular -D date flag.\n\nA branch, on the other hand, allows the dreaded double patching - the\nbranch can now evolve independently of the trunk, perhaps to be merged\nlater, perhaps not. If you double patch, no need to merge, just abandon\nthe branch when it gets too different.\n\nRoss \n(This is, of course, just my understanding, not backed by years of using\nCVS in a branching environment)\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 13 Jul 1999 17:00:55 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL v6.5 - Tagged" }, { "msg_contents": "\nWorks for me...it's now tag'd and branched :)\n\n\nOn Tue, 13 Jul 1999, Ross J. Reedstrom wrote:\n\n> On Tue, Jul 13, 1999 at 03:26:22PM -0400, Tom Lane wrote:\n> > The Hermit Hacker <[email protected]> writes:\n> > > On Tue, 13 Jul 1999, Tom Lane wrote:\n> > >> Er, don't you need to make a branch, not just tags?\n> > \n> > > Ya know something, after all these years, I'm still not 100% sure of the\n> > > difference between the two :(\n> > \n> > I'm not either ... I just saw that cvs log was reporting it differently\n> > than it did for the REL6_4 stuff.\n> \n> My understanding is that a tag is just a convenient way to cluster a\n> set of file versions so they can be checked out together. In practice,\n> it's usually synonymous with using a particular -D date flag.\n> \n> A branch, on the other hand, allows the dreaded double patching - the\n> branch can now evolve independently of the trunk, perhaps to be merged\n> later, perhaps not. If you double patch, no need to merge, just abandon\n> the branch when it gets too different.\n> \n> Ross \n> (This is, of course, just my understanding, not backed by years of using\n> CVS in a branching environment)\n> \n> -- \n> Ross J. Reedstrom, Ph.D., <[email protected]> \n> NSBRI Research Scientist/Programmer\n> Computer and Information Technology Institute\n> Rice University, 6100 S. Main St., Houston, TX 77005\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 13 Jul 1999 19:19:05 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostgreSQL v6.5 - Tagged" } ]
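For the record, the CVS distinction the thread circles around is small but real; a sketch of the two commands (the branch name here is illustrative, not taken from the thread):

    cvs tag REL6_5               # static tag: a named, read-only snapshot
    cvs tag -b REL6_5_PATCHES    # branch tag: opens a separate line of development

    cvs update -r REL6_5_PATCHES # switch a working copy onto the branch

A plain tag only lets you check out exactly what was tagged, while the -b form lets fixes be committed against the release independently of the trunk, which is what maintaining a 6.5.x patch series requires.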
[ { "msg_contents": "> > > > For thing's like aggregates, distinct/grouping and the like,\n> > > > we need to take a step backward and really do some kind of\n> > > > view materialization (create a real execution path for the\n> > > > view's definition). But don't force that to be done whenever\n> > > > a view is used - that doesn't make things better.\n> > >\n> > > Thanks. Now I understand why aggregates cause problems with rules.\n> > >\n> > Couldn't all views be expressed with the rule system, if we had\n> subselects\n> > in the\n> > from clause ? This would be useful for other SQL too. RDB has this e.g.\n> \n> I hope so,\n> \n> because the FROM clause is what I (thinking in querytrees)\n> usually call the rangetable. After parsing, all relations\n> (tables and views - the parser doesn't care) the user\n> mentioned in his query appear in the querytree as RTE's\n> (Range Table Entries).\n> \n> On a first thought it looks simple to just add another Node\n> pointer to the RTE structure and if a view has something that\n> requires materialization just throw it's querytree from\n> pg_rewrite into there. The planner then has to produce the\n> entire subtree for that as a left- or righttree for the\n> \"relation\".\n> \n> The problem is just to decide which restrictions from the\n> WHERE clause could be taken down into this subselecting RTE\n> to reduce the amount of data the view materializes instead of\n> filtering them out later.\n> \n> Example:\n> \n> CREATE VIEW v1 AS SELECT a, sum(b) FROM t1 GROUP BY a;\n> \n> SELECT count(*) FROM v1 WHERE a < 10;\n> \n> Let's assume now that t1 has a million rows but only a few\n> hundred that match a < 10. If we now materialize the view in\n> a subplan without telling a < 10, a seqscan over the entire\n> table plus sorting/grouping and summing would happen instead\n> of fetching the few tuples by index and then sort/group/sum.\n> \n> The opposite:\n> \n> CREATE VIEW v2 AS SELECT a, sum(c) FROM t2 GROUP BY a;\n> \n> SELECT v1.a FROM v1, v2 WHERE v1.a = v2.a AND v1.b = v2.c;\n> \n> This time there is no chance - we ask for comparision of two\n> aggregates of different views. The WHERE clause here can only\n> be evaluated after both views have completely been\n> materialized.\n> \n> > I do not beleive, that Stonebraker had an incomplete Rule System in\n> mind.\n> \n> At least his concept is expandable to meet our needs. An\n> expandable concept is never really incomplete as long as it\n> never leaves the drawing board :-)\n> \n> \n> Jan\n> \n\tWould it be possible to make the executor reentrant for those\nsubqueries which couldn't be rewritten/resolved into the parent query.\n\tIf you took your second example above and store the results of v1\nand v2 into temp tables v1_temp, v2_temp respectively, They could be used to\ncomplete the query on another executor pass.\n\tYou wouldn't need to re-parse/optimize because you could simple\nreplace the sections of the RTE with the oids of the temp tables and then\nexecute. There wouldn't be any indexes to optimize upon so you could just\nchoose a join method (i.e. 
HASH) that would work best with the number of\nrows that need to be sequentially scanned and/or sorted.\n\tI think it would be dog slow but it would work for those cases.\n\n\tI haven't thought through all of the possible cases but it appears\nthat the best case for combining is a single-table, single-constraint\nsituation.\n\tFrom your first example it's easy to see that the constant would be\ntaken into the subselect; since this leaves the outside query without any\nconstraining terms, you could then see if the select list can simply be\nrewritten to perform the query without the subselect.\n\n\tIf you're willing to give me a fairly comprehensive set of query/view\ncombinations I'm willing to work out a strategy to resolve them all; I don't\nknow how efficient it will all be but I'll give it a whirl.\n\n\tdiscussion can always be useful,\n\t\t-DEJ\n", "msg_date": "Tue, 13 Jul 1999 13:47:51 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: AW: [HACKERS] create rule changes table to view ?" } ]
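DeJuan's temp-table fallback can be spelled out in plain SQL; a sketch using Jan's v1/v2 definitions (SELECT ... INTO TEMP being the 6.5-era way to materialize an intermediate result):

    SELECT a, sum(b) AS b INTO TEMP v1_temp FROM t1 GROUP BY a;
    SELECT a, sum(c) AS c INTO TEMP v2_temp FROM t2 GROUP BY a;

    SELECT v1_temp.a
    FROM v1_temp, v2_temp
    WHERE v1_temp.a = v2_temp.a
      AND v1_temp.b = v2_temp.c;

Doing the same thing automatically inside the executor is what would make the approach transparent; done by hand it is exactly the "dog slow but it works" path described above.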
[ { "msg_contents": "Hi.\n\nI think I've created a monster...\n\nWorking on an email system I have the following:\nTable = usermail\n+----------------------------------+--------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+--------------------------+-------+\n| contentlength | int4 | 4 |\n| folder | int4 | 4 |\n| flagnew | bool | 1 |\netc...\n\nAnd:\nTable = folders\n+----------------------------------+--------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+--------------------------+-------+\n| loginid | varchar() not null | 16 |\n| folderid | int4 not null default ( | 4 |\n| foldername | varchar() | 25 |\netc...\n\nSo each email message has an entry in usermail, and each mail folder has\nan entry in folders. I need to extract the following info:\nfoldername, number of messages in that folder, number of messages in that\nfolder with flagread set, total size of all the messages in each folder\n\nSince postgres does not appear to support outer joins, I've come up with a\nreally icky query that almost does what I want:\n\nSELECT folderid,foldername,count(*),sum(contentlength) \n FROM usermail,folders \n WHERE usermail.loginid='michael' AND\n folders.loginid=usermail.loginid AND \n usermail.folder=folders.folderid\n GROUP BY folderid,foldername \nUNION SELECT folderid,foldername,null,null\n FROM folders \n WHERE loginid='michael' AND \n folderid NOT IN \n (SELECT folder FROM usermail WHERE loginid='michael');\n\nWHEW!\n\nfolderid|foldername |count| sum\n--------+----------------+-----+-------\n -4|Deleted Messages| 110| 245627\n -3|Saved Drafts | | \n -2|Sent Mail | 7| 10878\n -1|New Mail Folder | 73|8831226\n 1|OOL | 7| 8470\netc...\n\nMy final problem is to count all the messages with flagnew set to true.\nThe only way I can think to do this is to convert the bool value to a 1 or\n0 (which I think should be a standard conversion anyway) and run a sum()\non them.\n\nUnless anyone can come up with a better way to do this, What is the best\nway to implement a conversion from bool to int?\n\n-Michael\n\n", "msg_date": "Tue, 13 Jul 1999 20:25:09 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Counting bool flags in a complex query" }, { "msg_contents": "> Hi.\n> \n> I think I've created a monster...\n> \n...\n> \n> My final problem is to count all the messages with flagnew set to true.\n> The only way I can think to do this is to convert the bool value to a 1 or\n> 0 (which I think should be a standard conversion anyway) and run a sum()\n> on them.\n> \n> Unless anyone can come up with a better way to do this, What is the best\n> way to implement a conversion from bool to int?\n> \n> -Michael\n\nOf course, you could always use count() and a 'WHERE flagnew' clause...\n\nDuane\n", "msg_date": "Wed, 14 Jul 1999 08:49:51 +0000 (AST)", "msg_from": "Duane Currie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query" }, { "msg_contents": "On Wed, 14 Jul 1999, Duane Currie wrote:\n\n> > My final problem is to count all the messages with flagnew set to true.\n> > The only way I can think to do this is to convert the bool value to a 1 or\n> > 0 (which I think should be a standard conversion anyway) and run a sum()\n> > on them.\n> > \n> > Unless anyone can come up with a better way to do this, What is the best\n> > way to implement a conversion from bool to int?\n> \n> Of course, you could always use 
count() and a 'WHERE flagnew' clause...\n\nDuane\n", "msg_date": "Wed, 14 Jul 1999 08:49:51 +0000 (AST)", "msg_from": "Duane Currie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query" }, { "msg_contents": "On Wed, 14 Jul 1999, Duane Currie wrote:\n\n> > My final problem is to count all the messages with flagnew set to true.\n> > The only way I can think to do this is to convert the bool value to a 1 or\n> > 0 (which I think should be a standard conversion anyway) and run a sum()\n> > on them.\n> > \n> > Unless anyone can come up with a better way to do this, what is the best\n> > way to implement a conversion from bool to int?\n> \n> Of course, you could always use count() and a 'WHERE flagnew' clause...\n\nProblem with that of course is that by limiting the query with a \"where\",\nI'd lose all the records in the original count, and therefore the total\nnumber of messages (a count that ignores the status of flagnew) would be\nwrong.\n\nWhat I was sort of hoping for was a way to implement a native conversion\nfrom bool to int, and have it included in the standard postgres system. I\nthink the conversion is a reasonable logical one where true==1 and\nfalse==0. The problem is, I don't have a sweet clue how to do this. I\nthink it should be a trivial matter to insert something into a system\ntable...\n\n-Michael\n\n", "msg_date": "Wed, 14 Jul 1999 11:51:31 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query" }, { "msg_contents": "> Unless anyone can come up with a better way to do this, what is the best\n> way to implement a conversion from bool to int?\n\nTry\n\n select sum(case when bfield = TRUE then 1 else 0 end) from table;\n\nIt works for me...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 16:13:39 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query" }, { "msg_contents": "On Wed, 14 Jul 1999, Thomas Lockhart wrote:\n\n> > Unless anyone can come up with a better way to do this, what is the best\n> > way to implement a conversion from bool to int?\n> \n> select sum(case when bfield = TRUE then 1 else 0 end) from table;\n\nI'm not sure this is correct, but I think I see a bug of some sort...\n\nSELECT folderid,foldername,count(*),sum(contentlength),sum(case when\nflagnew = TRUE then 1 else 0 end) FROM usermail,folders WHERE\nusermail.loginid='michael' and folders.loginid=usermail.loginid AND\nusermail.folder = folders.folderid GROUP BY folderid,foldername UNION\nSELECT folderid,foldername,0,0,0 FROM folders WHERE loginid='michael' AND\nNOT EXISTS (SELECT folder FROM usermail WHERE loginid='michael' AND\nfolder=folderid) ;\nERROR: _finalize_primnode: can't handle node 723\n\nIt seems to be the union that is confusing it...\n\nSELECT folderid,foldername,count(*),sum(contentlength),sum(case when\nflagnew = TRUE then 1 else 0 end) FROM usermail,folders WHERE\nusermail.loginid='michael' and folders.loginid=usermail.loginid AND\nusermail.folder = folders.folderid GROUP BY folderid,foldername; \nfolderid|foldername |count| sum|sum\n--------+----------------+-----+-------+---\n -4|Deleted Messages| 110| 245627| 50\n -2|Sent Mail | 7| 10878| 2\n -1|New Mail Folder | 73|8831226| 1\n 1|OOL | 7| 8470| 0\netc\n\n-Michael\n\n", "msg_date": "Wed, 14 Jul 1999 13:34:33 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query" }, { "msg_contents": "Michael Richards <[email protected]> writes:\n> ERROR: _finalize_primnode: can't handle node 723\n\nGrumble. Still another routine that doesn't know as much as it should\nabout traversing parsetrees. 
Looks like a job for <flourish of trumpets>\nexpression_tree_walker.\n\n> It seems to be the union that is confusing it...\n\nCASE expression inside a UNION/INTERSECT/EXCEPT, to be specific.\n\nWill fix this in time for 6.5.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 18:05:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query " }, { "msg_contents": "Michael Richards <[email protected]> writes:\n> I'm not sure this is correct, but I think I see a bug of some sort...\n\n> SELECT folderid,foldername,count(*),sum(contentlength),sum(case when\n> flagnew = TRUE then 1 else 0 end) FROM usermail,folders WHERE\n> usermail.loginid='michael' and folders.loginid=usermail.loginid AND\n> usermail.folder = folders.folderid GROUP BY folderid,foldername UNION\n> SELECT folderid,foldername,0,0,0 FROM folders WHERE loginid='michael' AND\n> NOT EXISTS (SELECT folder FROM usermail WHERE loginid='michael' AND\n> folder=folderid) ;\n> ERROR: _finalize_primnode: can't handle node 723\n\nI committed a fix last night; it will be in 6.5.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jul 1999 11:07:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query " }, { "msg_contents": "On Thu, 15 Jul 1999, Tom Lane wrote:\n\n> Michael Richards <[email protected]> writes:\n> > I'm not sure this is correct, but I think I see a bug of some sort...\n> \n> I committed a fix last night; it will be in 6.5.1.\n\nI've found what I believe is another set of bugs:\nThis is my monster query again...\n\nMy folder numbers are: negative numbers are system folders such as New\nmail, trash, drafts and sentmail. I wanted to order the tuples so that the\nfolderids were sorted from -1 to -4, then 1 to x. This way the system\nfolders would always appear first in the list.\n\nThis may not be valid SQL, as none of my books mention it. Is it possible\nto order by an expression?\n\nHere are some examples which show some odd behaviour. My suspected bug\nfindings are at the end:\n\nSELECT folderid,foldername,count(*) as \"messgaes\",sum(bool2int(flagnew))\nas \"newmessages\",sum(contentlength) as \"size\" FROM usermail,folders WHERE\nusermail.loginid='michael' and folders.loginid=usermail.loginid AND\nusermail.folder = folders.folderid GROUP BY folderid,foldername UNION\nSELECT folderid,foldername,0,0,0 FROM folders WHERE loginid='michael' AND\nNOT EXISTS (SELECT folder FROM usermail WHERE loginid='michael' AND\nfolder=folderid) order by (folderid>0);\nfolderid|foldername |messgaes|newmessages| size\n--------+----------------+--------+-----------+-------\n -4|Deleted Messages| 110| 50| 245627\n -2|Sent Mail | 7| 2| 10878\n -1|New Mail Folder | 73| 1|8831226\n 1|OOL | 7| 0| 8470\n 2|suggestions | 26| 0| 35433\n 3|Acadia | 5| 0| 17703\n 4|advertising | 4| 2| 5394\n 5|dealt with | 3| 0| 2883\n 36|dauphne | 9| 0| 66850\n -3|Saved Drafts | 0| 0| 0\n(10 rows)\n\nIt looks like the order by is only being applied to the original select,\nnot the unioned select. 
Some authority should check on it, but my thought\nis that a union does not necessarily maintain the order, so the ordering\nshould be applied to the entire select.\n\nI'm not so good at interpreting the query plan, but here it is:\nUnique (cost=8.10 rows=0 width=0)\n -> Sort (cost=8.10 rows=0 width=0)\n -> Append (cost=8.10 rows=0 width=0)\n -> Aggregate (cost=6.05 rows=1 width=49)\n -> Group (cost=6.05 rows=1 width=49)\n -> Sort (cost=6.05 rows=1 width=49)\n -> Nested Loop (cost=6.05 rows=1 width=49)\n -> Index Scan using usermail_pkey on usermail (cost=2.05 rows=2 width=21)\n -> Index Scan using folders_pkey on folders (cost=2.00 rows=8448 width=28)\n -> Index Scan using folders_pkey on folders (cost=2.05 rows=2 width=16)\n SubPlan\n -> Index Scan using usermail_pkey on usermail (cost=2.05 rows=1 width=4)\n\nI would have expected the folderid -3 to appear as the 3rd one in this\ncase.\n\nI'm probably going to change the numbering scheme of the system folders so\nthey will sort correctly without a kluge such as:\ncreate function ordfolderid(int) returns int as 'select $1*-1 where $1<0\nunion select $1+1*10 where $1>=0' language 'sql';\n\nThen running the order clause as: \norder by (folderid<0),ordfolderid(folderid)\nMy thought behind this kludge is that the table should first be ordered by\nthe t/f value of the fact folderid<0, then within each of the true and\nfalse sortings, subsort those by the value of folderid.\n\nComplicated enough for you?\n\nWell, in my playing I noticed what appears to be more of a bug...\nSELECT folderid,foldername,count(*) as \"messages\",sum(bool2int(flagnew))\nas \"newmessages\",sum(contentlength) as \"size\" FROM usermail,folders WHERE\nusermail.loginid='michael' and folders.loginid=usermail.loginid AND\nusermail.folder = folders.folderid GROUP BY folderid,foldername UNION\nSELECT folderid,foldername,0,0,0 FROM folders WHERE loginid='michael' AND\nNOT EXISTS (SELECT folder FROM usermail WHERE loginid='michael' AND\nfolder=folderid) order by (folderid<0);\nfolderid|foldername |messgaes|newmessages| size\n--------+----------------+--------+-----------+-------\n 1|OOL | 7| 0| 8470\n 2|suggestions | 26| 0| 35433\n 3|Acadia | 5| 0| 17703\n 4|advertising | 4| 2| 5394\n 5|dealt with | 3| 0| 2883\n 36|dauphne | 9| 0| 66850\n -4|Deleted Messages| 110| 50| 245627\n -2|Sent Mail | 7| 2| 10878\n -1|New Mail Folder | 73| 1|8831226\n -3|Saved Drafts | 0| 0| 0\n(10 rows)\n\nSELECT folderid,foldername,count(*) as \"messages\",sum(bool2int(flagnew))\nas \"newmessages\",sum(contentlength) as \"size\" FROM usermail,folders WHERE\nusermail.loginid='michael' and folders.loginid=usermail.loginid AND\nusermail.folder = folders.folderid GROUP BY folderid,foldername UNION\nSELECT folderid,foldername,0,0,0 FROM folders WHERE loginid='michael' AND\nNOT EXISTS (SELECT folder FROM usermail WHERE loginid='michael' AND\nfolder=folderid) order by (messages<10);\nERROR: attribute 'messages' not found\n\nUsing a column name within an expression in the order by does not seem to\nwork...\nOr a much simpler example to illustrate the bug:\nfastmail=> select 1 as \"test\" order by (test<9);\nERROR: attribute 'test' not found\n\nfastmail=> select 1 as \"test\" order by test;\ntest\n----\n 1\n(1 row)\n\n\nI was almost able to make it work properly aside from the sorting issue\nwith my kludged up routine... 
This is so nasty that I most definitely\ndon't want to put it into production:\n\nSELECT folderid,foldername,count(*) as \"messages\",sum(bool2int(flagnew))\nas \"newmessages\",sum(contentlength) as \"size\",(folderid>=0) FROM\nusermail,folders WHERE usermail.loginid='michael' and\nfolders.loginid=usermail.loginid AND usermail.folder = folders.folderid\nGROUP BY folderid,foldername UNION SELECT\nfolderid,foldername,0,0,0,(folderid>=0) FROM folders WHERE\nloginid='michael' AND NOT EXISTS (SELECT folder FROM usermail WHERE\nloginid='michael' AND folder=folderid) order by 6,ordfolderid(folderid);\nfolderid|foldername |messages|newmessages| size|?column?\n--------+----------------+--------+-----------+-------+--------\n -1|New Mail Folder | 73| 1|8831226|f \n -2|Sent Mail | 7| 2| 10878|f \n -4|Deleted Messages| 110| 50| 245627|f \n -3|Saved Drafts | 0| 0| 0|f \n 1|OOL | 7| 0| 8470|t \n 2|suggestions | 26| 0| 35433|t \n 3|Acadia | 5| 0| 17703|t \n 4|advertising | 4| 2| 5394|t \n 5|dealt with | 3| 0| 2883|t \n 36|dauphne | 9| 0| 66850|t \n(10 rows)\n\nDo I need outer joins to make this work instead of the screwed up union\nmethod I'm trying here, or is it just a series of bugs?\n\n-Michael\n\n", "msg_date": "Fri, 16 Jul 1999 05:37:20 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query " }, { "msg_contents": "Michael Richards <[email protected]> writes:\n> I've found what I believe is another set of bugs:\n\nI can shed some light on these.\n\n> This may not be valid SQL, as none of my books mention it. Is it possible\n> to order by an expression?\n\nPostgres accepts expressions as ORDER BY clauses, although strict SQL92\nonly allows sorting by a column name or number.\n\n> It looks like the order by is only being applied to the original select,\n> not the unioned select. Some authority should check on it, but my thought\n> is that a union does not necessarily maintain the order, so the ordering\n> should be applied to the entire select.\n\nThat looks like a bug to me too --- I think the ORDER BY is supposed to\napply across the whole UNION result. Will look into it.\n\n> I'm probably going to change the numbering scheme of the system folders so\n> they will sort correctly without a kluge such as:\n\nGood plan. Although you could sort by a user-defined function result,\nit's likely to be horribly slow (because user-defined functions are\nslow:-().\n\n> Using a column name within an expression in the order by does not seem to\n> work...\n> Or a much simpler example to illustrate the bug:\n> fastmail=> select 1 as \"test\" order by (test<9);\n> ERROR: attribute 'test' not found\n\nThis is not so much a bug as a definitional issue. For SQL92\ncompatibility, we accept ORDER BY a column label so long as it's\na bare column label, but column labels are NOT part of the namespace\nfor full expression evaluation. You can't do this either:\n\nselect 1 as \"test\" , test<9 ;\nERROR: attribute 'test' not found\n\nThere are all sorts of squirrely questions about this feature IMHO.\nFor example,\n\ncreate table z1 (f1 int4, f2 int4);\nCREATE\nselect f1 as f2, f2 from z1 order by f2;\nf2|f2\n--+--\n(0 rows)\n\nWhich column do you think it's ordering by? Which column *should* it\norder by? I think this ought to draw an \"ambiguous column label\" error\n... 
there is code in there that claims to be looking for such a thing,\nin fact, so I am not quite sure why it doesn't trigger on this example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jul 1999 10:35:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query " }, { "msg_contents": "On Fri, 16 Jul 1999, Tom Lane wrote:\n\n> Good plan. Although you could sort by a user-defined function result,\n> it's likely to be horribly slow (because user-defined functions are\n> slow:-().\nYes, but I did include my horrible design ideas so you could see why in\n\"god's name\" I was trying to do what I was trying to do when I found what\nlooked to be a \"bug\".\n\n> This is not so much a bug as a definitional issue. For SQL92\n> compatibility, we accept ORDER BY a column label so long as it's\n> a bare column label, but column labels are NOT part of the namespace\n> for full expression evaluation. You can't do this either:\n> \n> select 1 as \"test\" , test<9 ;\n> ERROR: attribute 'test' not found\n> \n> There are all sorts of squirrely questions about this feature IMHO.\n> For example,\n> \n> create table z1 (f1 int4, f2 int4);\n> CREATE\n> select f1 as f2, f2 from z1 order by f2;\n> f2|f2\n> --+--\n> (0 rows)\n> \n> Which column do you think it's ordering by? Which column *should* it\n> order by? I think this ought to draw an \"ambiguous column label\" error\n> ... there is code in there that claims to be looking for such a thing,\n> in fact, so I am not quite sure why it doesn't trigger on this example.\n\nGood point. Is there anything in the SQL standard that defines how this\n\"is supposed\" to work? I suppose with no expression support it isn't\nreally necessary. How about requiring quotes when we want it to look at\nthe \"as\"-named columns? For example:\nselect f1 as f2, f2 from z1 order by \"f2\";\n\nOf course I have no idea how this would conflict with SQL-92. It's more of\nan idea...\n\n-Michael\n\n", "msg_date": "Fri, 16 Jul 1999 18:19:58 -0300 (ADT)", "msg_from": "Michael Richards <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Counting bool flags in a complex query " }, { "msg_contents": "Michael Richards <[email protected]> writes:\n>> For example,\n>> \n>> create table z1 (f1 int4, f2 int4);\n>> CREATE\n>> select f1 as f2, f2 from z1 order by f2;\n>> f2|f2\n>> --+--\n>> (0 rows)\n>> \n>> Which column do you think it's ordering by? Which column *should* it\n>> order by? I think this ought to draw an \"ambiguous column label\" error\n\n> Good point. Is there anything in the SQL standard that defines how this\n> \"is supposed\" to work?\n\nAfter looking at the SQL spec I think the above definitely ought to draw\nan error. 
We have the following verbiage concerning the column names\nfor the result of a SELECT:\n\n a) If the i-th <derived column> in the <select list> specifies\n an <as clause> that contains a <column name> C, then the\n <column name> of the i-th column of the result is C.\n\n b) If the i-th <derived column> in the <select list> does not\n specify an <as clause> and the <value expression> of that\n <derived column> is a single <column reference>, then the\n <column name> of the i-th column of the result is C.\n\n c) Otherwise, the <column name> of the i-th column of the <query\n specification> is implementation-dependent and different\n from the <column name> of any column, other than itself, of\n a table referenced by any <table reference> contained in the\n SQL-statement.\n\nwhich Postgres does indeed follow, and we see from (a) and (b) that \"f2\"\nis the required column name for both columns of the SELECT result.\nNow ORDER BY says\n\n a) If a <sort specification> contains a <column name>, then T\n shall contain exactly one column with that <column name> and\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n the <sort specification> identifies that column.\n\nwhich sure looks to me like it mandates an error for the example\nstatement.\n\nHowever, since SQL doesn't consider the possibility of expressions as\nORDER BY entries, we are more or less on our own for those. An\nexpression appearing in the target list of a SELECT is not allowed to\nrefer to columns by their \"AS\" names (and this does seem to be mandated\nby SQL92). So I think it makes sense to carry over the same restriction\nto ORDER BY.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jul 1999 17:56:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] Counting bool flags in a complex query " }, { "msg_contents": "\n> > Good plan. Although you could sort by a user-defined function result,\n> > it's likely to be horribly slow (because user-defined functions are\n> > slow:-().\n\n\nWhat is the actual overhead from using a user-defined\nfunction (a C function, say)?\n\nObviously there is an overhead involved in calling\nany function. I would like to know what kind of issues \nare involved when using results of a user defined function for \nsorting. \n\nAre the results calculated only once, as one would expect,\nfor example?\n\nThanks,\n\n\nTroy\n\nTroy Korjuslommi Tksoft OY, Inc.\[email protected] Software Development\n Open Source Solutions\n Hosting Services\n\n\n\n\n\n", "msg_date": "Fri, 16 Jul 1999 15:24:05 -0700 (PDT)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "user defined function speeds" }, { "msg_contents": "At 11:37 +0300 on 16/07/1999, Michael Richards wrote:\n\n\n> My folder numbers are: negative numbers are system folders such as New\n> mail, trash, drafts and sentmail. I wanted to order the tuples so that the\n> folderids were sorted from -1 to -4, then 1 to x. This way the system\n> folders would always appear first in the list.\n>\n> This may not be valid SQL, as none of my books mention it. Is it possible\n> to order by an expression?\n>\n> Here are some examples which show some odd behaviour. My suspected bug\n> findings are at the end:\n\nI think the problem results from using non-standard constructs such as\norder by expression, and indeed ordering by columns that don't appear in\nthe select list.\n\nIf you want to do the best by yourself, put the expression by which you\norder in the select list. 
A simple example would be:\n\nInstead of:\n SELECT f1, min( f2 ), max( f3 )\n FROM tab\n GROUP BY f1\n ORDER BY expr( f1 );\n\nUse:\n\n SELECT expr( f1 ) AS ordcol, f1, min( f2 ), max( f3 )\n FROM tab\n GROUP BY ordcol, f1\n ORDER BY ordcol;\n\nWhat is the difference? The difference is that now GROUP BY (which also\ndoes internal sorting) knows about that expression and considers it. Since\nordcol is the same for each value of f1, this should not change the groups.\nThis simply makes sure all parts of the query are aware of what is being\ndone around them. This is also the standard, as far as I recall.\n\nWhat's the problem? You have a column in the output that you didn't really\nwant. But hey, why should that bother you? If you're reading it through\nsome frontend, simply have it ignore the first column that is returned.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Mon, 19 Jul 1999 15:11:26 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Re: [HACKERS] Counting bool flags in a complex query" }, { "msg_contents": "Quite a while ago, Michael Richards <[email protected]> wrote:\n> It looks like the order by is only being applied to the original select,\n> not the unioned select. Some authority should check on it, but my thought\n> is that a union does not necessarily maintain the order, so the ordering\n> should be applied to the entire select.\n\nJust FYI, I have committed code for 7.1 that allows ORDER BY to work\ncorrectly for a UNION'd query.  A limitation is that you can only do\nordering on columns that are outputs of the UNION:\n\nregression=# select q1 from int8_tbl union select q2 from int8_tbl order by 1;\n q1\n-------------------\n -4567890123456789\n 123\n 456\n 4567890123456789\n(4 rows)\n\nregression=# select q1 from int8_tbl union select q2 from int8_tbl order by int8_tbl.q1+1;\nERROR:  ORDER BY on a UNION/INTERSECT/EXCEPT result must be on one of the result columns\n\nIn the general case of an arbitrary ORDER BY expression, it's not clear\nhow to transpose it into each UNION source select anyway.  It could\nbe made to work for expressions using only the output columns, but since\nORDER BY expressions are not standard SQL I'm not in a big hurry to make\nthat happen...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Oct 2000 01:01:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Counting bool flags in a complex query " }, { "msg_contents": "Tom,\n\n> Just FYI, I have committed code for 7.1 that allows ORDER\n> BY to work\n> correctly for a UNION'd query.  A limitation is that you\n> can only do\n> ordering on columns that are outputs of the UNION:\n\nAs far as I know, that limitation is standard to all SQL\nthat supports UNION; the relational calculus (I'm told) is\nimpossible otherwise. \n\nSo ... we keep hearing about all the fantastic fixes in 7.1.\nWhen will a stable build show up? :-)\n\n-Josh\n", "msg_date": "Fri, 06 Oct 2000 09:31:44 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Counting bool flags in a complex query " }, { "msg_contents": "\"Josh Berkus\" <[email protected]> writes:\n>> Just FYI, I have committed code for 7.1 that allows ORDER BY to work\n>> correctly for a UNION'd query. 
A limitation is that you can only do\n>> ordering on columns that are outputs of the UNION:\n\n> As far as I know, that limitation is standard to all SQL\n> that supports UNION; the relational calculus (I'm told) is\n> impossible otherwise. \n\nIt's not very reasonable to imagine ordering on arbitrary expressions;\nhow would you interpret the expression in each sub-SELECT? But it's\nreasonable to imagine ordering on expressions that use only the\noutput columns of the UNION-type query:\n\n\tSELECT q1, q2 FROM tbl1 UNION SELECT ...\n\t\tORDER BY q1+q2;\n\nHowever, I didn't try to implement this yet.\n\n> So ... we keep hearing about all the fantastic fixes in 7.1.\n> When will a stable build show up? :-)\n\nHow stable is stable? I'd say it's plenty stable enough for beta\ntesting now, even though we're not putting out formal beta releases\nquite yet. You could grab a nightly snapshot off the FTP server\nif you want to try it. (Beware that you will most likely have to\ndo another initdb before beta, so loading lots and lots of data\ninto a snapshot installation is probably a waste of time.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Oct 2000 17:23:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] Counting bool flags in a complex query " } ]
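A concrete footnote to the two points above: SQL92's own escape hatch for the ambiguous-label case is ordering by output-column position, which can never be ambiguous. A minimal sketch against Tom's z1 example (this shows the behaviour the standard prescribes, not output captured from any particular release):

    SELECT f1 AS f2, f2 FROM z1 ORDER BY 1;  -- sorts by the first output column (f1)
    SELECT f1 AS f2, f2 FROM z1 ORDER BY 2;  -- sorts by the second output column (f2)

The same trick satisfies the UNION restriction Tom describes: compute any expression you want to sort by as an output column in each arm, then ORDER BY its position.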
[ { "msg_contents": "Hi all,\n\nI could create a 9-key index.\n\ncreate table ix9 (\ni1 int4,\ni2 int4,\ni3 int4,\ni4 int4,\ni5 int4,\ni6 int4,\ni7 int4,\ni8 int4,\ni9 int4,\nprimary key (i1,i2,i3,i4,i5,i6,i7,i8,i9)\n);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'ix9_pkey'\nfor table 'ix9'\nCREATE\n\n\\d ix9_pkey\n\nTable = ix9_pkey\n+----------------------------------+----------------------------------+-----\n--+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-----\n--+\n| i1 | int4 |\n4 |\n| i2 | int4 |\n4 |\n| i3 | int4 |\n4 |\n| i4 | int4 |\n4 |\n| i5 | int4 |\n4 |\n| i6 | int4 |\n4 |\n| i7 | int4 |\n4 |\n| i8 | int4 |\n4 |\n| i9 | int4 |\n4 |\n+----------------------------------+----------------------------------+-----\n--+\n\nIs it right ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Wed, 14 Jul 1999 11:25:09 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "9-key index ?" } ]
[ { "msg_contents": "Hi all,\n\nI have a question about \"explain\" output.\nCould someone teach me ?\n\nLet a and b tables such that\n\n create table a (\n int4\tpkey primary key,\n ....\n );\n\n create table b (\n int4\tkey1,\n int2\tkey2,\n ....,\n primary key (key1,key2)\n );\n\nTable a has 15905 rows and table b has 25905 rows.\n\n\nFor the following query\n\n select a.pkey, b.key2 from a, b\n where b.key1 = 1369\n and a.pkey = b.key1;\n\n\"explain\" shows\n\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=6.19 rows=3 width=10)\n -> Index Scan using b_pkey on b on b (cost=2.09 rows=2 width=6)\n -> Index Scan using a_pkey on a on a (cost=2.05 rows=15905 width=4)\n\nWhat does \"rows=15905\" of InnerPlan mean ?\nIs \"rows=3\" of Nested Loop irrelevant to \"rows=15905\" ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 14 Jul 1999 11:38:54 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "What does explain show ?" }, { "msg_contents": "> Table a has 15905 rows and table b has 25905 rows.\n> \n> \n> For the following query\n> \n> select a.pkey, b.key2 from a, b\n> where b.key1 = 1369\n> and a.pkey = b.key1;\n> \n> \"explain\" shows\n> \n> NOTICE: QUERY PLAN:\n> \n> Nested Loop (cost=6.19 rows=3 width=10)\n> -> Index Scan using b_pkey on b on b (cost=2.09 rows=2 width=6)\n> -> Index Scan using a_pkey on a on a (cost=2.05 rows=15905 width=4)\n> \n> What does \"rows=15905\" of InnerPlan mean ?\n> Is \"rows=3\" of Nested Loop irrelevant to \"rows=15905\" ?\n\nIt means it thinks it is going to access X rows in that pass, but end\nup with 3 joined rows as a result of the nested loop. It is only an\nestimate, based on table size and column uniqueness from vacuum analyze.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Jul 1999 22:56:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What does explain show ?" }, { "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Wednesday, July 14, 1999 11:57 AM\n> To: Hiroshi Inoue\n> Cc: pgsql-hackers\n> Subject: Re: [HACKERS] What does explain show ?\n>\n>\n> > Table a has 15905 rows and table b has 25905 rows.\n> >\n> >\n> > For the following query\n> >\n> > select a.pkey, b.key2 from a, b\n> > where b.key1 = 1369\n> > and a.pkey = b.key1;\n> >\n> > \"explain\" shows\n> >\n> > NOTICE: QUERY PLAN:\n> >\n> > Nested Loop (cost=6.19 rows=3 width=10)\n> > -> Index Scan using b_pkey on b on b (cost=2.09 rows=2 width=6)\n> > -> Index Scan using a_pkey on a on a (cost=2.05 rows=15905 width=4)\n> >\n> > What does \"rows=15905\" of InnerPlan mean ?\n> > Is \"rows=3\" of Nested Loop irrelevant to \"rows=15905\" ?\n>\n> It means it thinks it is going to access X rows in that pass, but end\n> up with 3 joined rows as a result of the nested loop. 
It is only an\n> estimate, based on table size and column uniqueness from vacuum analyze.\n>\n\nHmmm, I couldn't understand where \"rows=15905\" comes from.\nShouldn't \"rows\" of InnerPlan be 1 ?\nIs the calculation \"rows of Nested loop = rows of OuterPlan * rows of\nInnerPlan\"\nwrong ?\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n", "msg_date": "Wed, 14 Jul 1999 12:25:10 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] What does explain show ?" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n>>>> select a.pkey, b.key2 from a, b\n>>>> where b.key1 = 1369\n>>>> and a.pkey = b.key1;\n>>>> \n>>>> NOTICE: QUERY PLAN:\n>>>> \n>>>> Nested Loop (cost=6.19 rows=3 width=10)\n>>>> -> Index Scan using b_pkey on b on b (cost=2.09 rows=2 width=6)\n>>>> -> Index Scan using a_pkey on a on a (cost=2.05 rows=15905 width=4)\n\n> Hmmm, I couldn't understand where \"rows=15905\" comes from.\n> Shouldn't \"rows\" of InnerPlan be 1 ?\n\nNo, because that number is formed by considering just the available\nrestriction clauses on table A, and there aren't any --- so the system\nuses the whole size of A as the rows count.  The fact that we are\njoining against another table should be taken into account at the\nnext level up, ie the nested loop.\n\nActually the number that looks fishy to me for the innerplan is the cost\n--- if the system thinks it will be visiting all 15905 rows each time,\nit should be estimating a cost of more than 2.05 to do it.\n\n> Is the calculation \"rows of Nested loop = rows of OuterPlan * rows of\n> InnerPlan\" wrong ?\n\nCareful --- rows produced and cost are quite different things.  The\ncost estimate for a nestloop is \"cost of outerplan + rows of outerplan *\ncost of innerplan\", but we don't necessarily expect to get as many rows\nout as the product of the row counts.  Typically, it'd be lower due to\njoin selectivity.  Above you see only 3 rows out, which is not too bad\na guess, certainly better than 2*15905 would be.\n\nYou raise a good point though.  That cost estimate is reasonable if\nthe inner plan is a sequential scan, since then the system will actually\nhave to visit each inner tuple on each iteration.  But if the inner plan\nis an index scan then the outer tuple's key value could be used as an\nindex constraint, reducing the number of tuples visited by a lot.\nI am not sure whether the executor is smart enough to do that --- there\nare comments in nodeNestloop suggesting that it is, but I haven't traced\nthrough it for sure.  I am fairly sure that the optimizer isn't figuring\nthe costs correctly, if that is how it's done :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 11:10:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What does explain show ? " }, { "msg_contents": ">\n> > Is the calculation \"rows of Nested loop = rows of OuterPlan * rows of\n> > InnerPlan\" wrong ?\n>\n> Careful --- rows produced and cost are quite different things.  The\n> cost estimate for a nestloop is \"cost of outerplan + rows of outerplan *\n> cost of innerplan\", but we don't necessarily expect to get as many rows\n> out as the product of the row counts.  Typically, it'd be lower due to\n> join selectivity.  Above you see only 3 rows out, which is not too bad\n> a guess, certainly better than 2*15905 would be.\n>\n\nI see. 
rows of Join = rows of outerplan * rows of innerplan * \"join\nselectivity\",\nand \"join selectivity\" is calculated by eqjoinsel() etc.\n\nThanks.\n\nHiroshi Inoue\[email protected]\n\n", "msg_date": "Thu, 15 Jul 1999 17:33:01 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] What does explain show ? " }, { "msg_contents": "Quite some time ago, \"Hiroshi Inoue\" <[email protected]> wrote:\n> I have a question about \"explain\" output.\n> Table a has 15905 rows and table b has 25905 rows.\n> For the following query\n> select a.pkey, b.key2 from a, b\n> where b.key1 = 1369\n> and a.pkey = b.key1;\n> \"explain\" shows\n\n> NOTICE: QUERY PLAN:\n> Nested Loop (cost=6.19 rows=3 width=10)\n> -> Index Scan using b_pkey on b on b (cost=2.09 rows=2 width=6)\n> -> Index Scan using a_pkey on a on a (cost=2.05 rows=15905 width=4)\n\n> What does \"rows=15905\" of InnerPlan mean ?\n\nI have finally traced through enough of the optimizer logic that I\nunderstand where these numbers are coming from.  A nestloop with an\ninner index scan is a slightly unusual beast, because the cost of the\ninner scan can often be reduced by using the join conditions as index\nrestrictions.  For example, if we have \"outer.a = inner.b\" and the\ninner scan is an indexscan on b, then during the inner scan that's\ndone for an outer tuple with a = 42 we'd use \"b = 42\" as an indexqual.\nThis makes the inner scan much cheaper than it would be if we had to\nscan the whole table.\n\nNow the problem is that the \"rows=\" numbers come from the RelOptInfo\nnodes for each relation, and they are set independently of the context\nthat the relation is used in.  For any context except an inner\nindexscan, we would indeed have to scan all 15905 rows of a, because\nwe have no pure-restriction WHERE clauses that apply to a.  So that's\nwhy rows says 15905.  The cost is being estimated correctly for the\ncontext, though --- an indexscan across 15905 rows would take a lot more\nthan 2 disk accesses.\n\nThis is just a cosmetic bug since it doesn't affect the planner's cost\nestimate; still, it makes the EXPLAIN output confusing.  I think the\noutput for a nestloop should probably show the estimated number of rows\nthat will be scanned during each pass of the inner indexscan, which\nwould be about 1 in the above example.  This could be done by saving the\nestimated row count (or just the selectivity) in IndexScan path nodes.\n\nComments?  Does anyone think we should show some other number?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Jan 2000 19:30:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] What does explain show ? 
" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, January 05, 2000 9:31 AM\n> \n> Quite some time ago, \"Hiroshi Inoue\" <[email protected]> wrote:\n> > I have a question about \"explain\" output.\n> > Table a has 15905 rows and table b has 25905 rows.\n> > For the following query\n> > select a.pkey, b.key2 from a, b\n> > where b.key1 = 1369\n> > and a.pkey = b.key1;\n> > \"explain\" shows\n> \n> > NOTICE: QUERY PLAN:\n> > Nested Loop (cost=6.19 rows=3 width=10)\n> > -> Index Scan using b_pkey on b on b (cost=2.09 rows=2 width=6)\n> > -> Index Scan using a_pkey on a on a (cost=2.05 rows=15905 width=4)\n> \n> > What does \"rows=15905\" of InnerPlan mean ?\n> \n> I have finally traced through enough of the optimizer logic that I\n> understand where these numbers are coming from. A nestloop with an\n> inner index scan is a slightly unusual beast, because the cost of the\n> inner scan can often be reduced by using the join conditions as index\n> restrictions. For example, if we have \"outer.a = inner.b\" and the\n> inner scan is an indexscan on b, then during the inner scan that's\n> done for an outer tuple with a = 42 we'd use \"b = 42\" as an indexqual.\n> This makes the inner scan much cheaper than it would be if we had to\n> scan the whole table.\n> \n> Now the problem is that the \"rows=\" numbers come from the RelOptInfo\n> nodes for each relation, and they are set independently of the context\n> that the relation is used in. For any context except an inner\n> indexscan, we would indeed have to scan all 15905 rows of a, because\n> we have no pure-restriction WHERE clauses that apply to a. So that's\n> why rows says 15905. The cost is being estimated correctly for the\n> context, though --- an indexscan across 15905 rows would take a lot more\n> than 2 disk accesses.\n> \n> This is just a cosmetic bug since it doesn't affect the planner's cost\n> estimate; still, it makes the EXPLAIN output confusing. I think the\n> output for a nestloop should probably show the estimated number of rows\n> that will be scanned during each pass of the inner indexscan, which\n> would be about 1 in the above example. This could be done by saving the\n> estimated row count (or just the selectivity) in IndexScan path nodes.\n> \n> Comments? Does anyone think we should show some other number?\n>\n\nI agree with you.\nThe rows should show some kind of average number of rows,because\nthe cost of innerplan seems to mean average cost.\n\nRegards.\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Wed, 5 Jan 2000 16:41:52 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] What does explain show ? " } ]
[ { "msg_contents": "I'm trying to get Postgres up and running on NT. I've got it compiled,\nand I can run the initdb process and start the postmaster. However I\ncan't create a database or user or start psql. The error I get is :\nConnection to database 'template1' failed.\nconnectDB() -- socket() failed: errno=106\nAddresses in the specified family cannot be used with this socket.\n\ncreatedb: database creation failed on brentp\n\nAny suggestions???\n\nThanks\nBrent.\n-- \n*********************************************\n\nAchieve a greater Return on IT Investments...\n\nWe will help Create, Implement, Evaluate and \nImprove your Information Management Strategy.\n\n*********************************************\n\nBrent Peckover\n\nArchiveWare Inc.\nThe Information Management Specialists.\n\nFifth Avenue Place, Suite 3400,\n425 1st Street SW,\nCalgary, AB, T2P 3L8\n\nTel: (403) 213-5588\nCell: (403) 703-7713\nFax: (403) 261-3911\n", "msg_date": "Tue, 13 Jul 1999 22:43:21 -0600", "msg_from": "Brent Peckover <[email protected]>", "msg_from_op": true, "msg_subject": "PGSQL on NT - connection to database template1 failed." } ]
[ { "msg_contents": "> > > Bruce Momjian <[email protected]> writes:\n> > > >> DB admin has no business knowing other's passwords. \n> The current security\n> > > >> scheme is seriously flawed.\n> > >\n> > > > But it is the db passwords, not the Unix passwords.\n> > >\n> > > I think the original point was that some people use the \n> same or related\n> > > passwords for psql as for their login password.\n> > >\n> > > Nonetheless, since we have no equivalent of \"passwd\" that \n> would let a\n> > > db user change his db password for himself, it's a little silly to\n> > > talk about hiding db passwords from the admin who puts them in.\n> > >\n> > > If this is a concern, we'd need to add both encrypted storage of\n> > > passwords and a remote-password-change feature.\n> >\n> > Doing the random salt over the wire would still be a problem.\n> \n> And I don't like password's at all. Well, up to now the bare\n> PostgreSQL doesn't need anything else. But would it really\n> hurt to use ssl in the case someone needs security? I don't\n> know exactly, but the authorized keys might reside in a new\n> system catalog. So such a secure installation can live with a\n> wide open hba.conf and who can be who is controlled by\n> pg_authorizedkeys then.\n> \n> As a side effect, all communication between the backend and\n> the client would be crypted, so no wire listener could see\n> anything :-)\n\nI've actually been using this on and off for a while. (I did some changes to\nlibpq some time back, so it no longer used fdopen() to access thins - in\npreparation of SSL patches). Though not a complete implementation - just\nthat \"if client certificate is trusted, then let'em in as whatever user they\nsay\". But it shouldn't be too hard to finish off.\n\nI'll try to get around to fix those patches up. The client-side\nimplementation was really ugly, and not at all \"generic\" (only worked in my\nsituation). Server-side was more generic. I'll try to fix the client-side\nthere..\n\n//Magnus\n", "msg_date": "Wed, 14 Jul 1999 08:06:49 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Updated TODO list" } ]
[ { "msg_contents": "Please Cc: to [email protected] \n\nI have a table called \"note\" that looks like this :\n\ncreate table \"note\" (id serial,perioada int2,schema int2,explicatie\ntext,...);\n\nThe \"note\" table has 22.000 records and the record length is about 75\nbytes (is has also a \"text\" field\").\n\nBecause I am frequently accesing the table with queries like \"... where\nperioada=12\" I was tempted to make also indexes on \"perioada\" and\n\"schema\" field.\n\nThe tables have the following sizes (their file sizes into\n/usr/local/pgsql/data/base....)\n\nnote 2.890 Kb\nnote_id 385 Kb\nnote_perioada 409 Kb\nnote_schema 466 Kb \n\nI ran previusly \"vacuum analyze\" on that database ensuring that\nstatistical tables have been updated.\n\nTrying some selects with explain I got the following results:\n\ncontabil=> explain select * from note where id=15;\nNOTICE: QUERY PLAN:\nIndex Scan using note_id on note (cost=2.05 rows=2 width=87) \n\n\ncontabil=> explain select * from note where perioada=15;\nNOTICE: QUERY PLAN:\nSeq Scan on note (cost=1099.99 rows=1600 width=87)\n\n\ncontabil=> explain select * from note where schema=15;\nNOTICE: QUERY PLAN:\nSeq Scan on note (cost=1099.99 rows=432 width=87)\n\n\nThat means that searching on \"perioada\" field don't use \"note_perioada\"\nindex!!!\n\nI know that the query optimisation take care of record lengths, table\nsizes, index sizes, but I thought that in this case it will use\n\"note_perioada\" index.\n\nThe distribution of \"perioada\" values within \"note\" records is like that\n:\n\ncontabil=> select perioada,count(*) from note group by perioada;\nperioada|count\n--------+-----\n 4| 2\n 7| 66\n 8| 108\n 9| 135\n 10| 151\n 11| 146\n 12| 4468\n 13| 3045\n 14| 3377\n 15| 3207\n 16| 3100\n 17| 3039\n 18| 1789\n 19| 1\n 22| 2\n(15 rows) \n\nSo, I think that PostgreSQL is doing right when he chooses not to use\n\"note_perioada\" index for that type of query by comparing different\ncosts (althought it still remains strange at the first look).\n\nIs there any chance to speed up that type of simple query (select * from\nnote where perioada=N) ?\n\nI dropped the index and try with a \"hash\" index on the same \"perioada\"\nfield. The same result.\n\nIn this case, it seems that the \"note_perioada\" index will never be\nused. That means it can be safely dropped without affecting the\napplication performance, isn't it? It is expected that the database will\ngrow in the same manner, with approx. the same nr. of records per\n\"perioada\" field every month.\n\nBest regards,\n\nPlease Cc: to [email protected]\n\n===============================\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 14 Jul 1999 07:16:28 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "Interesting behaviour !" }, { "msg_contents": "Constantin Teodorescu wrote:\n> \n> \n> create table \"note\" (id serial,perioada int2,schema int2,explicatie\n> text,...);\n> \n...\n> \n> contabil=> explain select * from note where perioada=15;\n> NOTICE: QUERY PLAN:\n> Seq Scan on note (cost=1099.99 rows=1600 width=87)\n\nYou may try :\n\nexplain select * from note where perioada=15::int2;\n\ni think that the default for 'untyped' numbers is int4 and \nthis currently confuses the optimiser.\n\n------------------\nHannu\n", "msg_date": "Wed, 14 Jul 1999 13:27:54 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting behaviour !" 
}, { "msg_contents": "Hannu Krosing wrote:\n> \n> You may try :\n> \n> explain select * from note where perioada=15::int2;\n> \n> i think that the default for 'untyped' numbers is int4 and\n> this currently confuses the optimiser.\n\nYou are right! Thanks a lot! Watch this!\n\ncontabil=> explain select * from note where perioada=29::int2;\nNOTICE: QUERY PLAN:\nIndex Scan using note_perioada on note (cost=108.96 rows=1600 width=87)\n\nEXPLAIN\ncontabil=> explain select * from note where perioada=29;\nNOTICE: QUERY PLAN:\nSeq Scan on note (cost=1099.99 rows=1600 width=87)\n\nMy queries are faster now!\n\nI think that this thing should be fixed. You need more than common SQL\nin order to optimize your queries.\nThat conversions should be automatically assumed by the query optimizer\nin order to deliver real performances.\nI don't know how difficult that would be.\n\nThanks a lot,\nBest regards,\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 14 Jul 1999 12:17:59 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Interesting behaviour !" }, { "msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> So, I think that PostgreSQL is doing right when he chooses not to use\n> \"note_perioada\" index for that type of query by comparing different\n> costs (althought it still remains strange at the first look).\n\nAlthough the real problem here was a type clash (which I agree ought\nto be fixed), it should be pointed out that there *is* a threshold of\nselectivity below which the optimizer will choose not to use an index\nscan. I'm not sure what it is offhand, nor whether it's set at a\ngood level. This behavior emerges indirectly from the cost estimate\nfunctions for sequential and index scans, and I'm not convinced that\nthey are as accurate as they need to be...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 10:11:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting behaviour ! " }, { "msg_contents": "> Hannu Krosing wrote:\n> > \n> > You may try :\n> > \n> > explain select * from note where perioada=15::int2;\n> > \n> > i think that the default for 'untyped' numbers is int4 and\n> > this currently confuses the optimiser.\n> \n> You are right! Thanks a lot! Watch this!\n> \n> contabil=> explain select * from note where perioada=29::int2;\n> NOTICE: QUERY PLAN:\n> Index Scan using note_perioada on note (cost=108.96 rows=1600 width=87)\n> \n> EXPLAIN\n> contabil=> explain select * from note where perioada=29;\n> NOTICE: QUERY PLAN:\n> Seq Scan on note (cost=1099.99 rows=1600 width=87)\n> \n> My queries are faster now!\n> \n> I think that this thing should be fixed. You need more than common SQL\n> in order to optimize your queries.\n> That conversions should be automatically assumed by the query optimizer\n> in order to deliver real performances.\n> I don't know how difficult that would be.\n\nI thought we had this fixed in 6.5. Is that what you are using?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 11:14:47 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting behaviour !" 
}, { "msg_contents": "Bruce Momjian wrote:\n> \n> \n> I thought we had this fixed in 6.5. Is that what you are using?\n\nYes!\n\nSorry that I forgot to describe the environment :\n\nRedHat 5.2 i386 + PostgreSQL 6.5 final version\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 14 Jul 1999 15:52:44 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Interesting behaviour !" }, { "msg_contents": "Constantin Teodorescu <[email protected]> writes:\n> Bruce Momjian wrote:\n>> I thought we had this fixed in 6.5. Is that what you are using?\n\n> Yes!\n\nIt's not fixed, obviously. We talked about the issue a while back,\nbut it isn't 100% clear what the most reasonable fix is. Just making\nthe parser label small constants as int2 rather than int4 is no\nanswer; that'll only move the problem over to int4 tables :-(.\n\nI have not looked closely at the parse trees produced for this sort\nof thing, but if we are lucky they come out like\n\tvar int2eq (int4toint2(int4constant))\nin which case the constant-subexpression-folding pass that I want to\nadd would solve the problem by reducing the righthand side to a simple\nconstant.\n\nBut it's more likely that the parser is producing\n\tint2toint4(var) int4eq int4constant\nin which case it would take some actual intelligence to decide that this\ncould and should be converted to the other form. I'd still be inclined\nto tackle the issue in a post-rewriter, pre-planner pass over the tree.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 12:17:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting behaviour ! " }, { "msg_contents": "I wrote:\n> But it's more likely that the parser is producing\n> \tint2toint4(var) int4eq int4constant\n> in which case it would take some actual intelligence to decide that this\n> could and should be converted to the other form.\n\nHmm, actually it seems to be producing\n\tvar int24eq int4constant\nMy first thought on seeing this was that int24eq didn't have an entry\nin pg_amop, but it does. So why doesn't the system realize it can use\nan index scan? This might be a relatively simple bug to fix after all,\nbut it needs more time to find exactly where things are going wrong...\nand I have to get some Real Work done...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 12:39:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting behaviour ! " }, { "msg_contents": "> This might be a relatively simple bug to fix after all,\n> but it needs more time to find exactly where things are going wrong...\n> and I have to get some Real Work done...\n\nDon't let me stop anyone from looking at this, but fyi this is the one\narea I didn't yet touch for the \"transparent type coersion\" work I did\nfor v6.4 and which is still ongoing of course. \n\nistm that wherever index use is evaluated one could allow\npre-evaluated functions on constants, rather than just strict\nconstants as is the case now. There is a precedent for pre-evaluation\nof elements of the query tree.\n\nIf function calls are allowed, then we can try coercing constants\nusing these existing coersion functions, at least when the target\ncolumn is a \"superset type\" of the constant. 
You still run into\ntrouble for cases like\n\n select intcol from table1 where intcol < 2.5;\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 14 Jul 1999 17:28:51 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting behaviour !" }, { "msg_contents": "> Constantin Teodorescu <[email protected]> writes:\n> > Bruce Momjian wrote:\n> >> I thought we had this fixed in 6.5.  Is that what you are using?\n> \n> > Yes!\n> \n> It's not fixed, obviously.  We talked about the issue a while back,\n> but it isn't 100% clear what the most reasonable fix is.  Just making\n> the parser label small constants as int2 rather than int4 is no\n> answer; that'll only move the problem over to int4 tables :-(.\n\nAdded to TODO:\n\n\t* Allow SELECT * FROM tab WHERE int2col = 4 to use int2col index \n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 16:07:20 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting behaviour !" }, { "msg_contents": "I wrote:\n>> This might be a relatively simple bug to fix after all,\n>> but it needs more time to find exactly where things are going wrong...\n>> and I have to get some Real Work done...\n\nWell, no, it's not simple.  After looking at the executor I can see that\nindexscan support is only prepared to deal with comparison operators\nthat are in the pg_amop class associated with the index.  In other\nwords, for an int2 index the indexquals have to be \"int2 op int2\".\nThe optimizer is doing the right thing by not trying to use int24eq\nas an indexqual.\n\nSo, we're back to needing to figure out that we can reduce the int4\nconstant to an int2 constant and adjust the comparison operator\nappropriately.\n\nThomas Lockhart <[email protected]> writes:\n> Don't let me stop anyone from looking at this, but fyi this is the one\n> area I didn't yet touch for the \"transparent type coercion\" work I did\n> for v6.4 and which is still ongoing of course. \n>\n> istm that wherever index use is evaluated one could allow\n> pre-evaluated functions on constants, rather than just strict\n> constants as is the case now. There is a precedent for pre-evaluation\n> of elements of the query tree.\n\nPerhaps that could be handled by the constant-subexpression-reducer\nthat I want to add.  That is, \"typeconversionfunction(constant)\" would\nbe reduced to \"constant\", and then the optimizer has the same case to\ndeal with as it has now.\n\n> If function calls are allowed, then we can try coercing constants\n> using these existing coercion functions,\n\nWhere are said functions?  I have not run across them yet...\n\n> at least when the target\n> column is a \"superset type\" of the constant.  You still run into\n> trouble for cases like\n> select intcol from table1 where intcol < 2.5;\n\nRight, you don't want to truncate the float constant to integer\n(at least not without adding even more smarts).\n\nI think we are probably going to have to do this in the form of code\nthat has some type-specific knowledge about conversions between certain\nstandard types, and knows some things about the operators on those types\nas well. 
Here is another example that can produce trouble:\n\tselect int2col + 30000 from table1;\nIf we reduce the int4 constant to int2 and change int24add to int2add,\nwe have just created a potential for int2 overflow in an expression\nthat did not have it before.  So, while folding an int4 constant to int2\nif it's within int2 range is safe when the constant is an argument of\na comparison operator, it is *not* safe in the general case.\n\nI don't see any real good way to build a type-independent transformation\nroutine that can do this sort of thing.  Probably best just to hardcode\nit for the standard numeric and character-string types, and leave it at\nthat.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 18:47:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting behaviour ! " }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Thursday, July 15, 1999 7:48 AM\n> To: Thomas Lockhart\n> Cc: Constantin Teodorescu; Bruce Momjian; Hannu Krosing;\n> [email protected]\n> Subject: Re: [HACKERS] Interesting behaviour ! \n> \n> \n> I wrote:\n> >> This might be a relatively simple bug to fix after all,\n> >> but it needs more time to find exactly where things are going wrong...\n> >> and I have to get some Real Work done...\n> \n> Well, no, it's not simple.  After looking at the executor I can see that\n> indexscan support is only prepared to deal with comparison operators\n> that are in the pg_amop class associated with the index.  In other\n> words, for an int2 index the indexquals have to be \"int2 op int2\".\n> The optimizer is doing the right thing by not trying to use int24eq\n> as an indexqual.\n> \n> So, we're back to needing to figure out that we can reduce the int4\n> constant to an int2 constant and adjust the comparison operator\n> appropriately.\n> \n> Thomas Lockhart <[email protected]> writes:\n> > Don't let me stop anyone from looking at this, but fyi this is the one\n> > area I didn't yet touch for the \"transparent type coercion\" work I did\n> > for v6.4 and which is still ongoing of course. \n> >\n> > istm that wherever index use is evaluated one could allow\n> > pre-evaluated functions on constants, rather than just strict\n> > constants as is the case now. There is a precedent for pre-evaluation\n> > of elements of the query tree.\n> \n> Perhaps that could be handled by the constant-subexpression-reducer\n> that I want to add.  That is, \"typeconversionfunction(constant)\" would\n> be reduced to \"constant\", and then the optimizer has the same case to\n> deal with as it has now.\n>\n\nEach type has a typeinput (char * -> type) proc and a typeoutput\n(type -> char *) proc.\n\nFor example int2in/int2out for type int2 and int4in/int4out for type int4.\nDoesn't int2in(int4out()) convert int4 to int2 ?\n\nHowever, the typeinput proc raises elog(ERROR) in most cases if it\ncan't convert correctly. \n\nRegards.\n\nHiroshi Inoue\[email protected]\n \n", "msg_date": "Thu, 15 Jul 1999 09:25:07 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Interesting behaviour ! 
" }, { "msg_contents": "> Each type has a typeinput(char * => type ) proc and a typeoutput(\n> type -> char *) proc.\n> Doesn't int2in(int4out()) convert int4 to int2 ?\n> However,typeinput proc causes elog(ERROR) in most cases if it\n> couldn't convert correctly.\n\nConversion using an intermediate string is possible, but not the\npreferred technique.\n\nThe \"automatic type coersion\" code, used earlier in the parser, uses\nthe convention that any single-argument function taking the source\ntype as input and with the same name as the target type can be used\nfor type conversion. For example, the function int4(int2) would\nconvert int2 to int4. There are now routines in the parser for\nchoosing conversion strategies and for finding candidates, and these\ncould be reused for similar purposes when trying to match index\narguments.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Thu, 15 Jul 1999 06:15:16 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting behaviour !" }, { "msg_contents": "> \n> > Each type has a typeinput(char * => type ) proc and a typeoutput(\n> > type -> char *) proc.\n> > Doesn't int2in(int4out()) convert int4 to int2 ?\n> > However,typeinput proc causes elog(ERROR) in most cases if it\n> > couldn't convert correctly.\n> \n> Conversion using an intermediate string is possible, but not the\n> preferred technique.\n>\n\nEvery type of PostgreSQL must have typeinput/typeoutput procs.\nSo this technique doesn't need new procs/operators any more.\nIsn't it an advantage ?\n \n> The \"automatic type coersion\" code, used earlier in the parser, uses\n> the convention that any single-argument function taking the source\n> type as input and with the same name as the target type can be used\n> for type conversion. For example, the function int4(int2) would\n> convert int2 to int4. There are now routines in the parser for\n> choosing conversion strategies and for finding candidates, and these\n> could be reused for similar purposes when trying to match index\n> arguments.\n>\n\nIt seems reasonable.\nBut I'm afraid that the defintion of new type requires many functions \nof type conversion.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Fri, 16 Jul 1999 09:55:32 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Interesting behaviour !" 
}, { "msg_contents": "Sorry,I've misunderstood Thomas's posting.\nPlease ignore my previous posting.\n\nHiroshi Inoue\[email protected]\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Hiroshi Inoue\n> Sent: Friday, July 16, 1999 9:56 AM\n> To: Thomas Lockhart\n> Cc: Tom Lane; Constantin Teodorescu; Bruce Momjian; Hannu Krosing;\n> [email protected]\n> Subject: RE: [HACKERS] Interesting behaviour !\n> \n> \n> > \n> > > Each type has a typeinput(char * => type ) proc and a typeoutput(\n> > > type -> char *) proc.\n> > > Doesn't int2in(int4out()) convert int4 to int2 ?\n> > > However,typeinput proc causes elog(ERROR) in most cases if it\n> > > couldn't convert correctly.\n> > \n> > Conversion using an intermediate string is possible, but not the\n> > preferred technique.\n> >\n> \n> Every type of PostgreSQL must have typeinput/typeoutput procs.\n> So this technique doesn't need new procs/operators any more.\n> Isn't it an advantage ?\n> \n> > The \"automatic type coersion\" code, used earlier in the parser, uses\n> > the convention that any single-argument function taking the source\n> > type as input and with the same name as the target type can be used\n> > for type conversion. For example, the function int4(int2) would\n> > convert int2 to int4. There are now routines in the parser for\n> > choosing conversion strategies and for finding candidates, and these\n> > could be reused for similar purposes when trying to match index\n> > arguments.\n> >\n> \n> It seems reasonable.\n> But I'm afraid that the defintion of new type requires many functions \n> of type conversion.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> \n", "msg_date": "Fri, 16 Jul 1999 10:09:08 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Interesting behaviour !" } ]
[ { "msg_contents": "\n---------- Forwarded message ----------\nDate: Wed, 7 Jul 1999 19:43:00 +0700 (GMT+0700)\nFrom: Nuchanach Klinjun <[email protected]>\nTo: [email protected]\nSubject: upgrade problem\n\nDear Sir/Madam,\n\nAfter Upgrade Database to the latest version I got some problem like this\n1. ODBC connection \n\tI used to link table from postgresql to my database file (MS\nAccess) but now I cannot and it gives me error \"Invalid field definition\n'accont' in definition of index or relationship\" \n\n2. Cannot execute some query which I could excute before upgrade.\nthe query is complicate but it used to work well both with MS Access\nand with db prompt it returned me error like it terminated my query\nabnormally i.e. \"Memory exhausted in AlloSetAlloc() (#1)\",\n\"No response from the backend, Socket has been closed (#1)\".\n\nmy sql is .. \n\"select date(s.stop) as accdate,sum(date_part('epoch',s.stop-s.start)/3600)\n,sum((s.rate/t.rate)*date_part('epoch',s.stop-s.start)/3600)\n,sum(s.value*100/107),sum(t.balance)\nfrom session s,ticket t \nwhere s.stop is not null and date(s.stop) between '1999/06/01' and\n'1999/06/15' group by accdate;\"\n\n3.Some query work abnormally since upgrade. \n\nI did attached the infomation I have\n\nTable Schema which I use in my query in 2. and I used to link it\ninto my Access Database file.\n\n=> \\d ticket\nTable = ticket\n+----------------------------------+----------------------------------+-------+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n| id | text not null |\nvar |\n| registered | datetime not null |\n8 |\n| lifetime | timespan not null default '@ 180 |\n12 |\n| value | money not null |\n4 |\n| balance | money not null default 0 |\n4 |\n| rate | float8 not null |\n8 |\n| account | text not null |\nvar |\n| free | bool not null default 'f' |\n1 |\n| priority | int4 not null default 0 |\n4 |\n| marker | int4 |\n4 |\n+----------------------------------+----------------------------------+-------+\nIndices: ticket_account\n ticket_pkey\n \n=> \\d session\nTable = session\n+----------------------------------+----------------------------------+-------+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n| id | int4 not null default nextval ( |\n4 |\n| seq | int4 not null default 1 |\n4 |\n| last | bool not null default 'f' |\n1 |\n| nas | text not null |\nvar |\n| sessionid | text not null |\nvar |\n| description | text |\nvar |\n| account | text not null |\nvar |\n| ticket | text not null |\nvar |\n| rate | float8 |\n8 |\n| value | money |\n4 |\n| start | datetime not null default dateti |\n8 |\n| stop | datetime |\n8 |\n| reserved | datetime |\n8 |\n| approved | datetime |\n8 |\n| cancelled | datetime |\n8 |\n| clientip | text |\nvar |\n+----------------------------------+----------------------------------+-------+\nIndices: i_session_stop\n session_account\n session_pkey\n session_ticket \n\nHere's the backtrace:\n\n(gdb) bt\n#0 0xd9a4c in AllocSetReset ()\n#1 0xdaa0a in EndPortalAllocMode ()\n#2 0x19147 in AtCommit_Memory ()\n#3 0x19302 in CommitTransaction ()\n#4 0x19550 in CommitTransactionCommand ()\n#5 0xa8665 in PostgresMain ()\n#6 0x8e4ba in DoBackend ()\n#7 0x8dfbe in BackendStartup ()\n#8 0x8d35e in ServerLoop ()\n#9 0x8cbd7 in PostmasterMain ()\n#10 0x4a102 in main ()\n \naccess=> select version();\nversion\n--------------------------------------------------------------\nPostgreSQL 6.5.0 on 
i386-unknown-freebsd2.2.8, compiled by cc\n\n> PostgreSQL ODBC driver 6.40.00.06 \nset up with valid user name and password, valid port.\n\nThank you very much and I really hope to hear from you really soon.\n\nCheers,\nNuchanach K.\n\n-----------------------------------------\nNuchanach Klinjun\nR&D Project. Internet Thailand\nEmail: [email protected]\n\n\n\n\n", "msg_date": "Wed, 14 Jul 1999 14:26:45 +0700 (GMT+0700)", "msg_from": "Nuchanach Klinjun <[email protected]>", "msg_from_op": true, "msg_subject": "upgrade problem " } ]
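One detail worth flagging in the failing query from point 2: as written it has no join condition between session and ticket, so the backend computes the full cross product of the two tables before grouping, which by itself could account for the memory exhaustion. A hedged rewrite, assuming from the schemas above that the intended join key is s.ticket = t.id:

    SELECT date(s.stop) AS accdate,
           sum(date_part('epoch', s.stop - s.start) / 3600),
           sum((s.rate / t.rate) * date_part('epoch', s.stop - s.start) / 3600),
           sum(s.value * 100 / 107),
           sum(t.balance)
    FROM session s, ticket t
    WHERE s.ticket = t.id           -- assumed join condition, absent above
      AND s.stop IS NOT NULL
      AND date(s.stop) BETWEEN '1999/06/01' AND '1999/06/15'
    GROUP BY accdate;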
[ { "msg_contents": "\nTrawling through the code last night I noticed that:\n#define MAX_QUERY_SIZE (BLCKSZ * 2)\n\nIs there any conceivable reason why the query length would be dependent on\nthe block size? Or do I just have old source code?\n\nMikeA\n\n\n", "msg_date": "Wed, 14 Jul 1999 09:48:01 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "MAX Query length" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Trawling through the code last night I noticed that:\n> #define MAX_QUERY_SIZE (BLCKSZ * 2)\n\n> Is there any conceivable reason why the query length would be dependent on\n> the block size?\n\nSure: you want to be able to INSERT a tuple of maximum size. In the\nabsence of dynamically sized text buffers, a reasonable estimate of\nthe longest INSERT command of interest is going to depend on BLCKSZ.\n\nI don't know how long that particular constant has been defined like\nthat, though. I had the idea that it was the same as BLCKSZ, not 2x.\nYou may well find that frontend libpq is using a different value for\nits buffer sizes than the backend is :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 10:16:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MAX Query length " }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> \n> Trawling through the code last night I noticed that:\n> #define MAX_QUERY_SIZE (BLCKSZ * 2)\n> \n> Is there any conceivable reason why the query length would be dependent on\n> the block size? Or do I just have old source code?\n\nNo great reason, but is seems like a good maximum. This controls the\nbuffer size on the client and server. Do you need it larger?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 11:02:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MAX Query length" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Ansley, Michael\" <[email protected]> writes:\n> > Trawling through the code last night I noticed that:\n> > #define MAX_QUERY_SIZE (BLCKSZ * 2)\n> \n>\n> Sure: you want to be able to INSERT a tuple of maximum size. In the\n> absence of dynamically sized text buffers, a reasonable estimate of\n> the longest INSERT command of interest is going to depend on BLCKSZ.\n...\n> regards, tom lane\n\nWhile I agree that it is reasonable that the query size should be\ndependent on the block-size, there is an assumption here that the\ntype_in() and type_out() routines that do not expand the size of the\nascii representation of the tuple data in the query string to more than\ntwice is size in it's internal disk representation. An important\nexception to this assumption would be large arrays of floating point\ndata that are stored with limited precision. A (single-precision) float\ntakes 4 bytes of space in a disk block, yet the ascii representation\nfor the same data before conversion could easily take in excess of 16\nbits if it comes from a \npiece of code like \n\n double x;\n\tint buf_pos\n ....\n ....\n\tbuf_pos += \n\t snprintf( &query_buf[buf_pos], (l_buf - buf_pos ), \"%e\", x); \n\nsomewhere in a front end. 
Perhaps it would be a good idea to increase\nthe multiplier in \n\n #define MAX_QUERY_SIZE (BLCKSZ * 2)\n\n\nto something larger than 2.\n\nBernie Frankpitt\n", "msg_date": "Wed, 14 Jul 1999 15:56:04 +0000", "msg_from": "Bernard Frankpitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MAX Query length" }, { "msg_contents": "Bernard Frankpitt <[email protected]> writes:\n> Tom Lane wrote:\n>> Sure: you want to be able to INSERT a tuple of maximum size.  In the\n>> absence of dynamically sized text buffers, a reasonable estimate of\n>> the longest INSERT command of interest is going to depend on BLCKSZ.\n\n> Perhaps it would be a good idea to increase\n> the multiplier in \n> #define MAX_QUERY_SIZE (BLCKSZ * 2)\n> to something larger than 2.\n\nThis entire chain of logic will fall to the ground anyway once we support\ntuples larger than a disk block, which I believe is going to happen\nbefore too much longer.  So, rather than argue about what the multiplier\nought to be, I think it's more productive to just press on with making\nthe query buffers dynamically resizable...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 12:02:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MAX Query length " }, { "msg_contents": "Tom Lane wrote:\n\n>\n> Bernard Frankpitt <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Sure: you want to be able to INSERT a tuple of maximum size.  In the\n> >> absence of dynamically sized text buffers, a reasonable estimate of\n> >> the longest INSERT command of interest is going to depend on BLCKSZ.\n>\n> > Perhaps it would be a good idea to increase\n> > the multiplier in\n> > #define MAX_QUERY_SIZE (BLCKSZ * 2)\n> > to something larger than 2.\n>\n> This entire chain of logic will fall to the ground anyway once we support\n> tuples larger than a disk block, which I believe is going to happen\n> before too much longer.  So, rather than argue about what the multiplier\n> ought to be, I think it's more productive to just press on with making\n> the query buffers dynamically resizable...\n\n    Yes, even if we choose to make some other limit (like Vadim\n    suggested, around 64K), a query operating on them could be\n    much bigger.  I have already made some progress with a data type\n    that uses a simple, byte-oriented LZ compression buffer as\n    internal representation.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 15 Jul 1999 10:36:16 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MAX Query length" } ]
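A tiny illustration of Bernard's ratio argument, in SQL rather than C: the text form of a float routinely runs three to four times the 4 bytes its float4 storage occupies, so a tuple packed with floats can need far more than twice its disk size as query text. (Hypothetical table, shown only for the arithmetic.)

    CREATE TABLE f (x float4);
    INSERT INTO f VALUES (-1.2345678e+19);
    -- the literal is 14 characters of query text for a 4-byte field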
[ { "msg_contents": "Dear Sir/Madam,\n\nAfter Upgrade Database to the latest version I got some problem like this\n1. ODBC connection \n\tI used to link table from postgresql to my database file (MS\nAccess) but now I cannot and it gives me error \"Invalid field definition\n'accont' in definition of index or relationship\" \n\n2. Cannot execute some query which I could excute before upgrade.\nthe query is complicate but it used to work well both with MS Access\nand with db prompt it returned me error like it terminated my query\nabnormally i.e. \"Memory exhausted in AlloSetAlloc() (#1)\",\n\"No response from the backend, Socket has been closed (#1)\".\n\nmy sql is .. \n\"select date(s.stop) as accdate,sum(date_part('epoch',s.stop-s.start)/3600)\n,sum((s.rate/t.rate)*date_part('epoch',s.stop-s.start)/3600)\n,sum(s.value*100/107),sum(t.balance)\nfrom session s,ticket t \nwhere s.stop is not null and date(s.stop) between '1999/06/01' and\n'1999/06/15' group by accdate;\"\n\n3.Some query work abnormally since upgrade. \n\nI did attached the infomation I have\n\nTable Schema which I use in my query in 2. and I used to link it\ninto my Access Database file.\n\n=> \\d ticket\nTable = ticket\n+----------------------------------+----------------------------------+-------+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n| id | text not null |\nvar |\n| registered | datetime not null |\n8 |\n| lifetime | timespan not null default '@ 180 |\n12 |\n| value | money not null |\n4 |\n| balance | money not null default 0 |\n4 |\n| rate | float8 not null |\n8 |\n| account | text not null |\nvar |\n| free | bool not null default 'f' |\n1 |\n| priority | int4 not null default 0 |\n4 |\n| marker | int4 |\n4 |\n+----------------------------------+----------------------------------+-------+\nIndices: ticket_account\n ticket_pkey\n \n=> \\d session\nTable = session\n+----------------------------------+----------------------------------+-------+\n| Field | Type |\nLength|\n+----------------------------------+----------------------------------+-------+\n| id | int4 not null default nextval ( |\n4 |\n| seq | int4 not null default 1 |\n4 |\n| last | bool not null default 'f' |\n1 |\n| nas | text not null |\nvar |\n| sessionid | text not null |\nvar |\n| description | text |\nvar |\n| account | text not null |\nvar |\n| ticket | text not null |\nvar |\n| rate | float8 |\n8 |\n| value | money |\n4 |\n| start | datetime not null default dateti |\n8 |\n| stop | datetime |\n8 |\n| reserved | datetime |\n8 |\n| approved | datetime |\n8 |\n| cancelled | datetime |\n8 |\n| clientip | text |\nvar |\n+----------------------------------+----------------------------------+-------+\nIndices: i_session_stop\n session_account\n session_pkey\n session_ticket \n\nHere's the backtrace:\n\n(gdb) bt\n#0 0xd9a4c in AllocSetReset ()\n#1 0xdaa0a in EndPortalAllocMode ()\n#2 0x19147 in AtCommit_Memory ()\n#3 0x19302 in CommitTransaction ()\n#4 0x19550 in CommitTransactionCommand ()\n#5 0xa8665 in PostgresMain ()\n#6 0x8e4ba in DoBackend ()\n#7 0x8dfbe in BackendStartup ()\n#8 0x8d35e in ServerLoop ()\n#9 0x8cbd7 in PostmasterMain ()\n#10 0x4a102 in main ()\n \naccess=> select version();\nversion\n--------------------------------------------------------------\nPostgreSQL 6.5.0 on i386-unknown-freebsd2.2.8, compiled by cc\n\n> PostgreSQL ODBC driver 6.40.00.06 \nset up with valid user name and password, valid port.\n\nThank you very much and I really hope to hear from you really 
soon.\n\nCheers,\nNuchanach K.\n\n-----------------------------------------\nNuchanach Klinjun\nR&D Project. Internet Thailand\nEmail: [email protected]\n\n\n\n\n\n\n\n", "msg_date": "Wed, 14 Jul 1999 15:06:34 +0700 (GMT+0700)", "msg_from": "Nuchanach Klinjun <[email protected]>", "msg_from_op": true, "msg_subject": "upgrade problem " } ]
[ { "msg_contents": "Thanks for all the answers, everybody. Bruce, I had thought to start work\nadjusting this so that the size wasn't limited at all. I'm just busy\ngathering as much info as I can about the subject area, and hopefully in a\ncouple of days, if not earlier, I'll be in a position to start working on\nthe code.\nI seem to remember there being a hackers guide somewhere. If I remember\nright, it dealt with issues like where to check out the latest source from\ncvs, rough standards, and other basic advice. Can anybody point me to it?\n\nThanks\n\nMikeA\n\n>> -----Original Message-----\n>> From: Bruce Momjian [mailto:[email protected]]\n>> Sent: Wednesday, July 14, 1999 5:02 PM\n>> To: Ansley, Michael\n>> Cc: '[email protected]'\n>> Subject: Re: [HACKERS] MAX Query length\n>> \n>> \n>> [Charset iso-8859-1 unsupported, filtering to ASCII...]\n>> > \n>> > Trawling through the code last night I noticed that:\n>> > #define MAX_QUERY_SIZE (BLCKSZ * 2)\n>> > \n>> > Is there any conceivable reason why the query length would \n>> be dependent on\n>> > the block size? Or do I just have old source code?\n>> \n>> No great reason, but is seems like a good maximum. This controls the\n>> buffer size on the client and server. Do you need it larger?\n>> \n>> -- \n>> Bruce Momjian | http://www.op.net/~candle\n>> [email protected] | (610) 853-3000\n>> + If your life is a hard drive, | 830 Blythe Avenue\n>> + Christ can be your backup. | Drexel Hill, \n>> Pennsylvania 19026\n>> \n", "msg_date": "Wed, 14 Jul 1999 17:12:06 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] MAX Query length" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Thanks for all the answers, everybody. Bruce, I had thought to start work\n> adjusting this so that the size wasn't limited at all. I'm just busy\n> gathering as much info as I can about the subject area, and hopefully in a\n> couple of days, if not earlier, I'll be in a position to start working on\n> the code.\n> I seem to remember there being a hackers guide somewhere. If I remember\n> right, it dealt with issues like where to check out the latest source from\n> cvs, rough standards, and other basic advice. Can anybody point me to it?\n> \n\nInfo Central/Documenation, see the Developers section.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 12:27:07 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MAX Query length" } ]
[ { "msg_contents": "I was just thinking of removing the limit completely. The query would fail\nwhen it could allocate more memory for the query string.\n\nMikeA\n\n>> -----Original Message-----\n>> From: Bernard Frankpitt [mailto:[email protected]]\n>> Sent: Wednesday, July 14, 1999 5:56 PM\n>> To: Tom Lane; [email protected]\n>> Subject: Re: [HACKERS] MAX Query length\n>> \n>> \n>> Tom Lane wrote:\n>> > \n>> > \"Ansley, Michael\" <[email protected]> writes:\n>> > > Trawling through the code last night I noticed that:\n>> > > #define MAX_QUERY_SIZE (BLCKSZ * 2)\n>> > \n>> >\n>> > Sure: you want to be able to INSERT a tuple of maximum \n>> size. In the\n>> > absence of dynamically sized text buffers, a reasonable estimate of\n>> > the longest INSERT command of interest is going to depend \n>> on BLCKSZ.\n>> ...\n>> > regards, tom lane\n>> \n>> While I agree that it is reasonable that the query size should be\n>> dependent on the block-size, there is an assumption here that the\n>> type_in() and type_out() routines that do not expand the size of the\n>> ascii representation of the tuple data in the query string \n>> to more than\n>> twice is size in it's internal disk representation. An important\n>> exception to this assumption would be large arrays of floating point\n>> data that are stored with limited precision. A \n>> (single-precision) float\n>> takes 4 bytes of space in a disk block, yet the ascii \n>> representation\n>> for the same data before conversion could easily take in excess of 16\n>> bits if it comes from a \n>> piece of code like \n>> \n>> double x;\n>> \tint buf_pos\n>> ....\n>> ....\n>> \tbuf_pos += \n>> \t snprintf( &query_buf[buf_pos], (l_buf - buf_pos ), \"%e\", x); \n>> \n>> somewhere in a front end. Perhaps it would be a good idea \n>> to increase\n>> the multiplier in \n>> \n>> #define MAX_QUERY_SIZE (BLCKSZ * 2)\n>> \n>> \n>> to something larger than 2.\n>> \n>> Bernie Frankpitt\n>> \n", "msg_date": "Wed, 14 Jul 1999 17:54:49 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] MAX Query length" } ]
[ { "msg_contents": "Nice parody.\n\n>> \n>> Info Central/Documenation, see the Developers section.\n>> \n>> Bruce Momjian | http://www.op.net/~candle\n>> \n", "msg_date": "Wed, 14 Jul 1999 18:27:38 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] MAX Query length" } ]
[ { "msg_contents": "I would like to add PAM support PGSQL. Many people (Microsoft Access users)\nhave mentioned that this would be a good feature in the past. I've read the\n6.4 source the current authentication scheme doesn't appear to hard to\nunderstand. However, this might present some performance problems as well as\npresenting additional installation issues. I would like this to work\nsomewhat simmilar to MSQL in that the DBO can give detailed permisions to\ntables, create groups and handle other SQL defined permissions.\nAdditionaly, the ODBC driver would need to be modified to transparantly pass\ntrusted connection information. This support should be considered optional\nnot the default. Hackers would this be a good project to approach or would\nit cause more problems than its worth?\n", "msg_date": "Wed, 14 Jul 1999 10:41:24 -0700", "msg_from": "\"Morris, Sam@EDD\" <[email protected]>", "msg_from_op": true, "msg_subject": "Authentication - To do" } ]
[ { "msg_contents": "I have been playing around with this for some time now to no avail. I\nhave a table info with a two-dimensional text type array action. Is\nthere any way to select the corresponding value of one of the elements\nwithout knowing the order of the elements?\n\nE.g.\n\nCREATE TABLE info (action text[][]);\n\nINSERT INTO info VALUES ('{{\"VAR\",\"VAL\"},{\"VAR2\",\"VAL2\"}}');\n\nNow what SELECT query will search for \"VAR\" within action (in this\ncase it is the first element, but it may not always be) and print out\n\"VAL.\"\n\n\nAny information would be greatly appreciated.\n\nThank you very much.\n\nEvan\n", "msg_date": "Wed, 14 Jul 1999 14:05:00 -0700", "msg_from": "Evan Klinger <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT using arrays" }, { "msg_contents": "At 00:05 +0300 on 15/07/1999, Evan Klinger wrote:\n\n\n> I have been playing around with this for some time now to no avail. I\n> have a table info with a two-dimensional text type array action. Is\n> there any way to select the corresponding value of one of the elements\n> without knowing the order of the elements?\n>\n> E.g.\n>\n> CREATE TABLE info (action text[][]);\n>\n> INSERT INTO info VALUES ('{{\"VAR\",\"VAL\"},{\"VAR2\",\"VAL2\"}}');\n>\n> Now what SELECT query will search for \"VAR\" within action (in this\n> case it is the first element, but it may not always be) and print out\n> \"VAL.\"\n>\n>\n> Any information would be greatly appreciated.\n\nI think somewhere, somehow, in the Postgres documentation, it should be\nwritten that arrays are only recommended to be used as a bulk, like a\npolygon or an image. You can update them or get a particular item in them,\nbut nothing furter than that.\n\nIf you need anything further, don't use arrays, but use the relational data\nmodel and arrange the related data in a related table. Collecting all the\nvalues together is a pretty easy trick from the frontend. So is population\nof the table. The said type of searches should become much much easier.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Mon, 19 Jul 1999 14:48:57 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] SELECT using arrays" } ]
[ { "msg_contents": "I am in the process of cleaning up the many un-needed includes and\nadding includes so each *.h file can is compile-able on its own.\n\nThis is for 6.6, and should be completed in a few days.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 17:07:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "#include files" } ]
[ { "msg_contents": " I am in the process of cleaning up the many un-needed includes and\n adding includes so each *.h file can is compile-able on its own.\n\nSpeaking of header files, I'm pretty sure that not enough header files\nget installed in order to compile something using spi.h/trigger.h\noutside the postgresql source tree. This seems like a problem.\n\nShouldn't the normal install procedure install everything necessary to\nuse published postgresql header files?\n\nCheers,\nBrook\n", "msg_date": "Wed, 14 Jul 1999 16:47:54 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": true, "msg_subject": "header files for spi.h/trigger.h" }, { "msg_contents": "> I am in the process of cleaning up the many un-needed includes and\n> adding includes so each *.h file can is compile-able on its own.\n> \n> Speaking of header files, I'm pretty sure that not enough header files\n> get installed in order to compile something using spi.h/trigger.h\n> outside the postgresql source tree. This seems like a problem.\n> \n> Shouldn't the normal install procedure install everything necessary to\n> use published postgresql header files?\n\nGood questions. I am not touching spi.h because I realize they are used\nby client code.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 19:46:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] header files for spi.h/trigger.h]" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> Speaking of header files, I'm pretty sure that not enough header files\n>> get installed in order to compile something using spi.h/trigger.h\n>> outside the postgresql source tree. This seems like a problem.\n>> \n>> Shouldn't the normal install procedure install everything necessary to\n>> use published postgresql header files?\n\n> Good questions. I am not touching spi.h because I realize they are used\n> by client code.\n\nBut how much of the stuff that those files are including is really\nneeded to compile them? It could be that the right answer is not to\ninstall more headers but to get rid of some #includes in the headers\nthat are published...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Jul 1999 20:34:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] header files for spi.h/trigger.h] " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> >> Speaking of header files, I'm pretty sure that not enough header files\n> >> get installed in order to compile something using spi.h/trigger.h\n> >> outside the postgresql source tree. This seems like a problem.\n> >> \n> >> Shouldn't the normal install procedure install everything necessary to\n> >> use published postgresql header files?\n> \n> > Good questions. I am not touching spi.h because I realize they are used\n> > by client code.\n> \n> But how much of the stuff that those files are including is really\n> needed to compile them? 
It could be that the right answer is not to\n> install more headers but to get rid of some #includes in the headers\n> that are published...\n\nI am doing that with all the other include files, but because spi.h is\nused in contrib, I assume they want some stuff auto-included with spi.h.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 14 Jul 1999 23:08:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] header files for spi.h/trigger.h]" }, { "msg_contents": "> \n> I am doing that with all the other include files, but because spi.h is\n> used in contrib, I assume they want some stuff auto-included with spi.h.\n> \n\nI sent in a bug-report a while back for all the files that I needed to\ninclude to compile something using spi.h. Lost that message, but here is\nthe listing of my\ncurrently installed include tree.\n\nThis was a pretty minimal copy of files across, just to get something\nusing spi.h to compile, i.e. trigger.h may need more. I think I copied\nthe storage directory across as a whole, but for the rest only the\nfiles that are really required are included. I think you can safely\nassume that everything dated Jun 29th needs to be added.\n\nAdriaan\n\n\n\nbash-2.03$ ls -lR include\ntotal 110\ndrwxr-xr-x 2 postgres postgres 512 Jun 29 10:10 access\n-r--r--r-- 1 postgres postgres 19636 Jun 15 18:14 c.h\ndrwxr-xr-x 2 postgres postgres 512 Jun 29 10:11 catalog\ndrwxr-xr-x 2 postgres postgres 512 Jun 15 18:14 commands\n-r--r--r-- 1 postgres postgres 15937 Jun 15 18:14 config.h\n-r--r--r-- 1 postgres postgres 929 Jun 15 18:14 ecpgerrno.h\n-r--r--r-- 1 postgres postgres 1206 Jun 15 18:14 ecpglib.h\n-r--r--r-- 1 postgres postgres 1549 Jun 15 18:14 ecpgtype.h\ndrwxr-xr-x 2 postgres postgres 512 Jun 29 10:07 executor\n-r--r--r-- 1 postgres postgres 24960 Jun 15 18:14 fmgr.h\ndrwxr-xr-x 2 postgres postgres 512 Jun 29 09:40 lib\n-r--r--r-- 1 postgres postgres 588 Jun 17 11:19 libpgtcl.h\ndrwxr-xr-x 2 postgres postgres 512 Jun 15 18:14 libpq\n-r--r--r-- 1 postgres postgres 10277 Jun 15 18:14 libpq-fe.h\n-r--r--r-- 1 postgres postgres 10190 Jun 15 18:14 libpq-int.h\ndrwxr-xr-x 2 postgres postgres 512 Jun 29 09:46 nodes\n-r--r--r-- 1 postgres postgres 169 Jun 15 18:14 os.h\ndrwxr-xr-x 2 postgres postgres 512 Jun 29 10:02 parser\ndrwxr-xr-x 3 postgres postgres 512 Jun 15 18:14 port\n-r--r--r-- 1 postgres postgres 5098 Jun 15 18:14 postgres.h\n-r--r--r-- 1 postgres postgres 1222 Jun 15 18:14 postgres_ext.h\ndrwxr-xr-x 2 postgres postgres 512 Jun 29 09:45 rewrite\n-r--r--r-- 1 postgres postgres 957 Jun 15 18:14 sqlca.h\ndrwxr-xr-x 2 postgres postgres 1024 Jun 29 10:15 storage\ndrwxrwxr-x 2 postgres postgres 512 Jun 29 10:02 tcop\ndrwxr-xr-x 2 postgres postgres 512 Jun 29 10:16 utils\n\ninclude/access:\ntotal 46\n-r--r--r-- 1 postgres postgres 1465 Jun 15 18:14 attnum.h\n-rw-r--r-- 1 postgres postgres 1158 Jun 29 09:46 funcindex.h\n-rw-r--r-- 1 postgres postgres 8080 Jun 29 10:04 heapam.h\n-rw-r--r-- 1 postgres postgres 4002 Jun 29 09:40 htup.h\n-rw-r--r-- 1 postgres postgres 863 Jun 29 10:10 ibit.h\n-rw-r--r-- 1 postgres postgres 4080 Jun 29 10:09 itup.h\n-rw-r--r-- 1 postgres postgres 2652 Jun 29 09:42 relscan.h\n-rw-r--r-- 1 postgres postgres 1399 Jun 29 09:45 sdir.h\n-rw-r--r-- 1 postgres postgres 1210 Jun 29 09:43 skey.h\n-rw-r--r-- 1 postgres postgres
2506 Jun 29 09:43 strat.h\n-rw-r--r-- 1 postgres postgres 5683 Jun 29 10:04 transam.h\n-rw-r--r-- 1 postgres postgres 1804 Jun 29 09:44 tupdesc.h\n-rw-r--r-- 1 postgres postgres 2868 Jun 29 10:04 tupmacs.h\n-rw-r--r-- 1 postgres postgres 3413 Jun 29 10:04 xact.h\n\ninclude/catalog:\ntotal 205\n-rw-r--r-- 1 postgres postgres 1713 Jun 29 10:11 catname.h\n-rw-r--r-- 1 postgres postgres 3351 Jun 29 09:43 pg_am.h\n-rw-r--r-- 1 postgres postgres 23347 Jun 29 09:44 pg_attribute.h\n-rw-r--r-- 1 postgres postgres 6048 Jun 29 09:43 pg_class.h\n-rw-r--r-- 1 postgres postgres 2780 Jun 29 10:07 pg_index.h\n-rw-r--r-- 1 postgres postgres 2132 Jun 29 10:04 pg_language.h\n-rw-r--r-- 1 postgres postgres 131098 Jun 29 09:46 pg_proc.h\n-rw-r--r-- 1 postgres postgres 19494 Jun 29 09:47 pg_type.h\n\ninclude/commands:\ntotal 3\n-r--r--r-- 1 postgres postgres 2243 Jun 15 18:14 trigger.h\n\ninclude/executor:\ntotal 20\n-rw-r--r-- 1 postgres postgres 1177 Jun 29 10:07 execdefs.h\n-rw-r--r-- 1 postgres postgres 1024 Jun 29 09:48 execdesc.h\n-rw-r--r-- 1 postgres postgres 6313 Jun 29 10:06 executor.h\n-rw-r--r-- 1 postgres postgres 3836 Jun 29 09:40 hashjoin.h\n-r--r--r-- 1 postgres postgres 2913 Jun 15 18:14 spi.h\n-rw-r--r-- 1 postgres postgres 2244 Jun 29 09:46 tuptable.h\n\ninclude/lib:\ntotal 6\n-r--r--r-- 1 postgres postgres 2277 Jun 15 18:14 dllist.h\n-rw-r--r-- 1 postgres postgres 2887 Jun 29 09:40 fstack.h\n\ninclude/libpq:\ntotal 9\n-r--r--r-- 1 postgres postgres 3313 Jun 15 18:14 libpq-fs.h\n-r--r--r-- 1 postgres postgres 4623 Jun 15 18:14 pqcomm.h\n\ninclude/nodes:\ntotal 90\n-rw-r--r-- 1 postgres postgres 23654 Jun 29 09:39 execnodes.h\n-rw-r--r-- 1 postgres postgres 2768 Jun 29 09:39 memnodes.h\n-rw-r--r-- 1 postgres postgres 6090 Jun 29 09:38 nodes.h\n-rw-r--r-- 1 postgres postgres 2983 Jun 29 09:45 params.h\n-rw-r--r-- 1 postgres postgres 24571 Jun 29 09:39 parsenodes.h\n-rw-r--r-- 1 postgres postgres 2838 Jun 29 09:38 pg_list.h\n-rw-r--r-- 1 postgres postgres 8459 Jun 29 09:46 plannodes.h\n-rw-r--r-- 1 postgres postgres 10348 Jun 29 09:00 primnodes.h\n-rw-r--r-- 1 postgres postgres 6981 Jun 29 09:39 relation.h\n\ninclude/parser:\ntotal 3\n-rw-r--r-- 1 postgres postgres 1386 Jun 29 10:02 parse_node.h\n-rw-r--r-- 1 postgres postgres 996 Jun 29 10:02 parse_type.h\n\ninclude/port:\ntotal 1\ndrwxr-xr-x 2 postgres postgres 512 Jun 15 18:14 alpha\n\ninclude/port/alpha:\ntotal 0\n\ninclude/rewrite:\ntotal 1\n-rw-r--r-- 1 postgres postgres 996 Jun 29 09:45 prs2lock.h\n\ninclude/storage:\ntotal 90\n-rw-r--r-- 1 postgres postgres 830 Jun 29 10:15 backendid.h\n-rw-r--r-- 1 postgres postgres 3164 Jun 29 10:15 block.h\n-rw-r--r-- 1 postgres postgres 1110 Jun 29 10:15 buf.h\n-rw-r--r-- 1 postgres postgres 4979 Jun 29 10:15 buf_internals.h\n-rw-r--r-- 1 postgres postgres 4513 Jun 29 10:15 bufmgr.h\n-rw-r--r-- 1 postgres postgres 8684 Jun 29 10:15 bufpage.h\n-rw-r--r-- 1 postgres postgres 3251 Jun 29 10:15 fd.h\n-rw-r--r-- 1 postgres postgres 5352 Jun 29 10:15 ipc.h\n-rw-r--r-- 1 postgres postgres 411 Jun 29 10:15 item.h\n-rw-r--r-- 1 postgres postgres 1546 Jun 29 10:15 itemid.h\n-rw-r--r-- 1 postgres postgres 962 Jun 29 10:15 itempos.h\n-rw-r--r-- 1 postgres postgres 2921 Jun 29 10:15 itemptr.h\n-rw-r--r-- 1 postgres postgres 1977 Jun 29 10:15 large_object.h\n-rw-r--r-- 1 postgres postgres 1729 Jun 29 10:15 lmgr.h\n-rw-r--r-- 1 postgres postgres 7599 Jun 29 10:15 lock.h\n-rw-r--r-- 1 postgres postgres 1844 Jun 29 10:15 multilev.h\n-rw-r--r-- 1 postgres postgres 1533 Jun 29 10:15 off.h\n-rw-r--r-- 1 postgres 
postgres 523 Jun 29 10:15 page.h\n-rw-r--r-- 1 postgres postgres 689 Jun 29 10:15 pagenum.h\n-rw-r--r-- 1 postgres postgres 1533 Jun 29 10:15 pos.h\n-rw-r--r-- 1 postgres postgres 3554 Jun 29 10:15 proc.h\n-rw-r--r-- 1 postgres postgres 9219 Jun 29 10:15 s_lock.h\n-rw-r--r-- 1 postgres postgres 3197 Jun 29 10:15 shmem.h\n-rw-r--r-- 1 postgres postgres 880 Jun 29 10:15 sinval.h\n-rw-r--r-- 1 postgres postgres 3947 Jun 29 10:15 sinvaladt.h\n-rw-r--r-- 1 postgres postgres 3008 Jun 29 10:15 smgr.h\n-rw-r--r-- 1 postgres postgres 828 Jun 29 10:15 spin.h\n\ninclude/tcop:\ntotal 9\n-rw-r--r-- 1 postgres postgres 4143 Jun 29 09:47 dest.h\n-rw-r--r-- 1 postgres postgres 814 Jun 29 09:48 pquery.h\n-rw-r--r-- 1 postgres postgres 2032 Jun 29 10:01 tcopprot.h\n-rw-r--r-- 1 postgres postgres 493 Jun 29 10:02 utility.h\n\ninclude/utils:\ntotal 109\n-rw-r--r-- 1 postgres postgres 5443 Jun 29 10:03 array.h\n-rw-r--r-- 1 postgres postgres 25268 Jun 29 10:03 builtins.h\n-rw-r--r-- 1 postgres postgres 1406 Jun 29 10:03 cash.h\n-rw-r--r-- 1 postgres postgres 477 Jun 29 10:03 datetime.h\n-rw-r--r-- 1 postgres postgres 2035 Jun 29 10:02 datum.h\n-rw-r--r-- 1 postgres postgres 11237 Jun 29 09:40 dt.h\n-r--r--r-- 1 postgres postgres 1600 Jun 15 18:14 elog.h\n-rw-r--r-- 1 postgres postgres 1689 Jun 29 09:37 fcache.h\n-r--r--r-- 1 postgres postgres 13260 Jun 15 18:14 geo_decls.h\n-rw-r--r-- 1 postgres postgres 4659 Jun 29 10:16 hsearch.h\n-rw-r--r-- 1 postgres postgres 1070 Jun 29 10:03 inet.h\n-rw-r--r-- 1 postgres postgres 3190 Jun 29 10:03 int8.h\n-r--r--r-- 1 postgres postgres 1706 Jun 15 18:14 mcxt.h\n-rw-r--r-- 1 postgres postgres 8584 Jun 29 09:40 memutils.h\n-rw-r--r-- 1 postgres postgres 4126 Jun 29 09:40 nabstime.h\n-rw-r--r-- 1 postgres postgres 1736 Jun 29 10:03 numeric.h\n-r--r--r-- 1 postgres postgres 1066 Jun 15 18:14 palloc.h\n-rw-r--r-- 1 postgres postgres 2571 Jun 29 10:03 portal.h\n-rw-r--r-- 1 postgres postgres 4345 Jun 29 09:42 rel.h\n-rw-r--r-- 1 postgres postgres 2530 Jun 29 10:02 syscache.h\n-rw-r--r-- 1 postgres postgres 2961 Jun 29 09:45 tqual.h\n", "msg_date": "Thu, 15 Jul 1999 08:53:45 +0300", "msg_from": "Adriaan Joubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] header files for spi.h/trigger.h]" } ]
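For what it's worth, the usual end goal of compiling against spi.h/trigger.h outside the source tree is a loadable trigger function, and the SQL hookup looks like this; the file path, function name, and table name below are examples, not anything from this thread:

    -- register a C trigger function built against spi.h/trigger.h
    CREATE FUNCTION audit_trig() RETURNS opaque
        AS '/usr/local/pgsql/lib/audit.so' LANGUAGE 'C';

    CREATE TRIGGER audit_ins AFTER INSERT ON accounts
        FOR EACH ROW EXECUTE PROCEDURE audit_trig();

So an install that ships spi.h without its transitive includes blocks exactly this kind of extension work at the compile step, before any of the above can run.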
[ { "msg_contents": "\n> Hmm. OK, then, we're stuck with a tradeoff that (fortunately) only\n> affects arrays. Is it better to force subscripted column names to be\n> fully qualified \"table.column[subscripts]\" (the current situation),\n> or to allow bare column names to be subscripted at the cost of requiring\n> casts from string constants to array types to use the long-winded CAST\n> notation (or nonstandard :: notation)?\n> \nYes, me thinks so too.\n\n> I would guess that the cast issue comes up *far* less frequently than\n> subscripting, so we'd be better off changing the behavior. But the\n> floor is open for discussion.\n> \nYes.\n\n\tI have this change implemented and tested here, btw, but I won't\ncheck\n\tit in until I see if there are objections...\n\nI would apply it :-)\n\nAndreas\n", "msg_date": "Thu, 15 Jul 1999 09:35:17 +0200", "msg_from": "Zeugswetter Andreas IZ5 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Arrays versus 'type constant' syntax " } ]
[ { "msg_contents": "Well, I'm starting on this, so hopefully in a couple of weeks the length\nlimit of the query buffer will fade into insignificance.\nIs somebody actively working on removing the tuple-length dependence on the\nblock size?\n\nAt present, disk blocks are set to 8k. Is it as easy as just adjusting the\nconstant to enlarge this? Testing queries larger than 16k with only an 8k\ntuple size could be challenging.\n\n\nMikeA\n\n\n>> > This entire chain of logic will fall to the ground anyway \n>> once we support\n>> > tuples larger than a disk block, which I believe is going to happen\n>> > before too much longer. So, rather than argue about what \n>> the multiplier\n>> > ought to be, I think it's more productive to just press on \n>> with making\n>> > the query buffers dynamically resizable...\n>> \n>> Yes, even if we choose to make some other limit (like Vadim\n>> suggested around 64K), a query operating on them could be\n>> much bigger. I already had some progress with a data type\n>> that uses a simple, byte oriented lz compression buffer as\n>> internal representation.\n>> \n>> \n>> Jan\n>> \n>> --\n>> \n>> #============================================================\n>> ==========#\n>> # It's easier to get forgiveness for being wrong than for \n>> being right. #\n>> # Let's break this rule - forgive me. \n>> #\n>> #========================================= [email protected] \n>> (Jan Wieck) #\n>> \n>> \n>> \n", "msg_date": "Thu, 15 Jul 1999 10:56:42 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] MAX Query length" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> At present, disk blocks are set to 8k. Is it as easy as just adjusting the\n> constant to enlarge this? Testing queries larger than 16k with only an 8k\n> tuple size could be challenging.\n\nAs of 6.5, it's just a matter of adjusting BLCKSZ in include/config.h,\nrebuilding, and re-initdb-ing. The workable sizes are 8k 16k and 32k;\nbigger than 32k fails for reasons I don't recall exactly (offsets\nstored in signed shorts somewhere, no doubt).\n\n> Is somebody actively working on removing the tuple-length dependence on the\n> block size?\n\nThere was considerable discussion about it a few weeks ago, but I didn't\nhear anyone actually committing to do the work :-(. Maybe when you've\nmade some progress on the text-length issues, someone will get excited\nabout the tuple-length issue...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jul 1999 09:58:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MAX Query length " }, { "msg_contents": "I wrote:\n> \"Ansley, Michael\" <[email protected]> writes:\n>> At present, disk blocks are set to 8k. Is it as easy as just adjusting the\n>> constant to enlarge this? Testing queries larger than 16k with only an 8k\n>> tuple size could be challenging.\n\n> As of 6.5, it's just a matter of adjusting BLCKSZ in include/config.h,\n> rebuilding, and re-initdb-ing.\n\nA further thought on this: if you increase BLCKSZ then at least some of\nthe fixed-size text buffers will get bigger, so it's not clear that you\nwill be stressing things all that hard if you take that route. Might be\neasier to leave BLCKSZ alone and test with queries that are long and\ncomplicated but don't actually require a large tuple size. Some\nexamples:\n\n1. SELECT a,a,a,a,... FROM table;\n\n2. SELECT a FROM table WHERE x = 1 OR x = 2 OR x = 3 OR ...;\n\n3. 
Hugely complex CREATE TABLE commands (lots of constraints and\n defaults and indexes, which don't enlarge the size of an actual\n tuple of the table).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jul 1999 10:47:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MAX Query length " } ]
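A concrete version of case 2, with a hypothetical table: each additional OR term adds roughly ten characters of query text while the tuples stay tiny, so repeating the pattern a couple of thousand times pushes the statement well past a fixed 16k buffer without ever needing a large tuple:

    CREATE TABLE ortest (a int4, x int4);
    INSERT INTO ortest VALUES (1, 1);
    -- extend the OR chain until the statement text exceeds the buffer
    SELECT a FROM ortest WHERE x = 1 OR x = 2 OR x = 3 OR x = 4;

This keeps BLCKSZ fixed, so any breakage observed comes from the query-buffer handling rather than from tuple-size limits.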
[ { "msg_contents": "Once I have recompiled with a new block size, how do I update the databases\nthat I already have. If I understand right, once the block size has been\nupdated, my current dbs will not work. Do I just pg_dump before make\ninstall and then recreate the dbs and load the dumps afterwards?\n\n>> As of 6.5, it's just a matter of adjusting BLCKSZ in \n>> include/config.h,\n>> rebuilding, and re-initdb-ing. The workable sizes are 8k \n>> 16k and 32k;\n>> bigger than 32k fails for reasons I don't recall exactly (offsets\n>> stored in signed shorts somewhere, no doubt).\n\n\nMikeA\n", "msg_date": "Thu, 15 Jul 1999 16:11:33 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] MAX Query length " }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Once I have recompiled with a new block size, how do I update the databases\n> that I already have. If I understand right, once the block size has been\n> updated, my current dbs will not work. Do I just pg_dump before make\n> install and then recreate the dbs and load the dumps afterwards?\n\nRight, the real sequence when you are changing disk layout details is\n\tpg_dumpall with old pg_dump and backend.\n\tstop postmaster\n\trm -rf installation\n\tmake install\n\tinitdb\n\tstart postmaster\n\tpsql <pgdumpscript.\n\nYou may want to do your development work in a \"playpen\" installation\ninstead of risking breaking your \"production\" installation with these\nsorts of shenanigans. I do that all the time here; for one thing I\ndon't have to bother saving and restoring any data when I blow away\na playpen installation.\n\nThe easiest kind of playpen setup is a separate server machine, but if\nyou only have one machine available then you do something like this to\nbuild a playpen:\n\n\tconfigure --with-pgport=5440 --prefix=/users/postgres/testversion\n\n(Adjust playpen's port and install location to taste; make more than one\nif you want...) BTW, if you are messing with the backend then your\nplaypen should also be built with --enable-cassert.\n\nAs I commented a moment ago, it's probably not really necessary for you\nto change BLCKSZ for your testing, but the above tips are worth\nrepeating every so often for the benefit of new hackers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jul 1999 10:58:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] MAX Query length " }, { "msg_contents": "Hi,\nI've posted 3 messages to pgsql-general about a weird index problem I'm\nhaving. I've found a very simple case that exhibits this problems.\nThis time I'm using a different database and different table that the\nfirst 3 messages(It's the same pg install however).\n\nThe index called mcrl1_partnumber_index is an index on the 'reference'\nfield. The table was just vacuumed(with and without analyze).\nThe pg install is from CVS last night around 7pm Central time.\n\nThe problems seems to be rooted in 'OR' combined with 'LIKE'. If I remove\nthe % in the string, explain shows the same (high) cost. If I also remove\nthe 'LIKE' the cost basically goes to nothing. 
The cost is indeed\ncorrect, either of the 2 first cases takes ~5 minutes, while the last one\n(no LIKE) finishes instantly.\n\nThe weird thing is, why is the cost calculated as being that high\nwhen it's actually using the index on that field, and is there a reason why\nexplain shows the index name twice?\n\nI ran the same exact query on an MS SQL server with the same data, and\nthat took in comparison about 2 seconds to finish.\nBoth Postgres and MS SQL are on Pentium 100 servers (yes, very pathetic),\nrunning Linux 2.2.6 and NT 4.0 respectively.\n\nThanks,\nOle Gjerde\n\nHere's the SQL: \n---------------------\nselect * from mcrl1 where reference = 'AN914' OR reference LIKE 'AN914-%';\n\nHere's the explain: \n-----------------\nmcrl=> explain select * from mcrl1 where reference = 'AN914' OR reference\nLIKE 'AN914-%';\nNOTICE:  QUERY PLAN:\n\nIndex Scan using mcrl1_reference_index, mcrl1_reference_index on mcrl1\n(cost=418431.81 rows=1 width=120)\n\nEXPLAIN\n\nHere's the table layout: \n------------\nTable = mcrl1\n+----------------------------------+----------------------------------+-------+\n| Field | Type |Length|\n+----------------------------------+----------------------------------+-------+\n| reference | varchar() |32 |\n| cage_num | char() |5 |\n| fsc | char() |4 |\n| niin | char() |9 |\n| isc | char() |1 |\n| rnvc | char() |1 |\n| rncc | char() |1 |\n| sadc | char() |1 |\n| da | char() |1 |\n| description | varchar() |32 |\n+----------------------------------+----------------------------------+-------+\nIndex: mcrl1_partnumber_index\n\n\n\n\n\n\n", "msg_date": "Thu, 15 Jul 1999 14:58:08 -0500 (CDT)", "msg_from": "Ole Gjerde <[email protected]>", "msg_from_op": false, "msg_subject": "Interesting index/LIKE/join slowness problems" }, { "msg_contents": "Ole Gjerde <[email protected]> writes:\n> The pg install is from CVS last night around 7pm Central time.\n\nDo you have USE_LOCALE defined?\n\n> The problem seems to be rooted in 'OR' combined with 'LIKE'.  If I remove\n> the % in the string, explain shows the same (high) cost.  If I also remove\n> the 'LIKE' the cost basically goes to nothing.  The cost is indeed\n> correct, either of the 2 first cases takes ~5 minutes, while the last one\n> (no LIKE) finishes instantly.\n\nWhen you have just \"where reference = 'AN914'\", the system knows it can\nuse the index to scan just the tuples with keys between AN914 and AN914\n(duh).  Very few tuples actually get fetched.\n\nAs soon as you use LIKE with a %, more tuples have to be scanned.  It's\nparticularly bad if you have USE_LOCALE; with the current code, that\nbasically means that LIKE 'AN914-%' will cause all tuples beginning with\nkey AN914- and running to the end of the table to be scanned.\n\nSee the extensive thread on this topic from about a month or two back\nin the pgsql-hackers mail list archives; I don't feel like repeating the\ninfo now.\n\nWhen you throw in the OR, the indexqual logic basically breaks down\ncompletely; I think you end up scanning the entire table.  (This could\nbe made smarter, perhaps, but right now I don't believe the system is\nable to figure out the union of indexqual conditions.)  I would say it\nis an optimizer bug that it is not reverting to sequential scan here\n...
that would be a good bit faster, I bet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Jul 1999 18:39:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting index/LIKE/join slowness problems " }, { "msg_contents": "On Thu, 15 Jul 1999, Tom Lane wrote:\n> Do you have USE_LOCALE defined?\n\nNope.. Not unless it defaults to on... I did a \n./configure --prefix=/home/postgres ; make ; make install as usual\n\n> As soon as you use LIKE with a %, more tuples have to be scanned. It's\n> particularly bad if you have USE_LOCALE; with the current code, that\n> basically means that LIKE 'AN914-%' will cause all tuples beginning with\n> key AN914- and running to the end of the table to be scanned.\n\nOk.. I get that.. But why does LIKE 'AN914' have the same problem? The %\ndoesn't have to be there as long as it's either LIKE or ~*(or ~ etc)\nquery. And that still doesn't explain why it happens with USE_LOCALE\noff..\nAlso, since the ='s work using OR, why wouldn't LIKE also? Both methods\nwould use the indexes, and the LIKE doesn't take that much longer to run..\nDoesn't make sense, especially concerning what you mention below..\n\n> See the extensive thread on this topic from about a month or two back\n> in the pgsql-hackers mail list archives; I don't feel like repeating the\n> info now.\n\nI haven't been able to find a discussion on this topic last few months, I\nfound discussion about something similar in March, but that didn't explain\nit very well.. I'll just have to look some more :)\n\n> When you throw in the OR, the indexqual logic basically breaks down\n> completely; I think you end up scanning the entire table. (This could\n> be made smarter, perhaps, but right now I don't believe the system is\n> able to figure out the union of indexqual conditions.) I would say it\n> is an optimizer bug that it is not reverting to sequential scan here\n> ... that would be a good bit faster, I bet.\n\nOk.. I can believe that.. This is a pretty nasty problem tho.. I don't\nbelieve using OR with LIKE is all that rare.. Maybe it's rare on a 17\nmill row table, but still..\nWhat would be the outlook on fixing the problem and not the symptom? :)\n\nAs far as sequential scan being faster.. Unfortunately, this table has\nabout 17 million rows, so any kind of seq scan is gonna be really slow.\n\nThanks,\nOle Gjerde\n\n\n\n", "msg_date": "Thu, 15 Jul 1999 19:23:44 -0500 (CDT)", "msg_from": "Ole Gjerde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting index/LIKE/join slowness problems " }, { "msg_contents": "Ole Gjerde <[email protected]> writes:\n> Ok.. I get that.. But why does LIKE 'AN914' have the same problem? The %\n> doesn't have to be there as long as it's either LIKE or ~*(or ~ etc)\n> query.\n\nA pure \"where field LIKE constant\" doesn't have the problem; it's the\nOR that does it. More specifically it's an OR of ANDs that doesn't work\nvery well.\n\nBy the time the parser gets done with it, your query looks like\n\nselect * from mcrl1 where\n\treference = 'AN914' OR\n\t(reference LIKE 'AN914-%'\n\t AND reference >= 'AN914-'\n\t AND reference <= 'AN914-\\377');\n\n(ugly, ain't it?) Those comparison clauses are what need to be pulled\nout and fed to the indexscan mechanism, so that only part of the table\ngets scanned, not the whole table. Indexscan doesn't know anything\nabout LIKE, but it does grok >= and <=.\n\nUnfortunately the current optimizer doesn't do it right. 
I looked into\na very similar bug report from Hiroshi Inoue (see his message of 3/19/99\nand my response of 4/3 in the hackers archives), and what I found was\nthat the cause is a fairly fundamental optimizer design choice. The\nANDed conditions get split into separate top-level clauses and there's\nno easy way to put them back together. The optimizer ends up passing\nonly one of them to the indexscan executor. That's better than nothing,\nbut on average you still end up scanning half the table rather than\njust a small range of it.\n\n> I haven't been able to find a discussion on this topic last few months, I\n> found discussion about something similar in March, but that didn't explain\n> it very well.. I'll just have to look some more :)\n\nI was referring to the discussion around 4/15/99 about why LIKE needs a\nsmarter way to generate the upper comparison clause. That's not\ndirectly your problem, but it is causing the same kind of slowdown for\neveryone who does use LOCALE...\n\n>> When you throw in the OR, the indexqual logic basically breaks down\n>> completely; I think you end up scanning the entire table. (This could\n>> be made smarter, perhaps, but right now I don't believe the system is\n>> able to figure out the union of indexqual conditions.)\n\nI was wrong about that --- the executor *does* handle OR'd indexqual\nconditions, basically by performing a new indexscan for each OR'd\ncondition. (That's why EXPLAIN is listing the index multiple times.)\nThe trouble with OR-of-ANDs is entirely the optimizer's fault; the\nexecutor would do them fine if the optimizer would only hand them over\nin that form.\n\n> What would be the outlook on fixing the problem and not the symptom? :)\n\nI plan to look into fixing this for 6.6, but don't hold your breath\nwaiting...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Jul 1999 10:14:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting index/LIKE/join slowness problems " }, { "msg_contents": "On Fri, 16 Jul 1999, Tom Lane wrote:\n> I was wrong about that --- the executor *does* handle OR'd indexqual\n> conditions, basically by performing a new indexscan for each OR'd\n> condition. (That's why EXPLAIN is listing the index multiple times.)\n> The trouble with OR-of-ANDs is entirely the optimizer's fault; the\n> executor would do them fine if the optimizer would only hand them over\n> in that form.\n> > What would be the outlook on fixing the problem and not the symptom? :)\n> I plan to look into fixing this for 6.6, but don't hold your breath\n> waiting...\n\nThanks for giving the very detailed explanation!\n\nSince we really need to have this work, or go with a different database,\nwe would be willing to pay someone to fix this problem. Would anybody be\ninterested in doing this, how soon and how much? 
It would be preferable\nthat this would be a patch that would be accepted back into postgres for\n6.6.\n\nThanks,\nOle Gjerde\nAvsupport Inc.\n\n\n", "msg_date": "Sun, 18 Jul 1999 15:30:41 -0500 (CDT)", "msg_from": "Ole Gjerde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting index/LIKE/join slowness problems " }, { "msg_contents": "Ole Gjerde <[email protected]> writes:\n> On Fri, 16 Jul 1999, Tom Lane wrote:\n>> The trouble with OR-of-ANDs is entirely the optimizer's fault; the\n>> executor would do them fine if the optimizer would only hand them over\n>> in that form.\n\n> Since we really need to have this work, or go with a different database,\n> we would be willing to pay someone to fix this problem. Would anybody be\n> interested in doing this, how soon and how much? It would be preferable\n> that this would be a patch that would be accepted back into postgres for\n> 6.6.\n\nFixing the general OR-of-ANDs problem is going to be quite ticklish,\nI think, because it would be easy to make other cases worse if we're\nnot careful about how we rewrite the qual condition.\n\nHowever, I had an idea yesterday about a narrow, localized fix for LIKE\n(and the other ops processed by makeIndexable), which I think would meet\nyour needs if the particular cases you are concerned about are just ORs\nof LIKEs and simple comparisons.\n\nIt goes like this: while we want LIKE to generate indexable comparisons\nif possible, having the parser insert them into the parsetree is a\nreally crude hack. The extra clauses are just a waste of cycles under\nmany scenarios (no index on the field being looked at, LIKE not in the\nWHERE clause or buried too deeply to be an indexqual, etc etc).\nWhat's worse, the parser doesn't know for sure that what it's\nmanipulating really *is* a LIKE --- it's making an unwarranted\nassumption on the basis of the operator name, before the actual operator\nhas been looked up! So I've wanted to replace that method of optimizing\nLIKE since the moment I saw it ;-)\n\nWhat would be better would be to teach the indexqual extractor in the\noptimizer that it can make indexqual conditions from a LIKE operator.\nThen, the LIKE just passes through the cnfify() step without getting\nrewritten, so we don't have the OR-of-ANDs problem. Plus we don't pay\nany overhead if the LIKE can't be used as an indexqual condition for any\nreason. And by the time the optimizer is acting, we really know whether\nwe have a LIKE or not, because type resolution and operator lookup have\nbeen done.\n\nI don't know how soon the general OR-of-ANDs problem can be solved,\nbut I am planning to try to make this LIKE fix for 6.6. If you want\nto send some $$ my way, all the better...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Jul 1999 17:27:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Interesting index/LIKE/join slowness problems " }, { "msg_contents": "Hey,\nI've been having this bizarre problem with some index on this one table.\nThe table has in the past had more than 9 indexes, but today I redid the\ntable and it still has the same problem.\nI just did a dump of the schema, COPY'd the data out. 
Deleted all\npostgres files, and installed 6.5.1.\nThe table has 3,969,935 rows in it.\n\nAny ideas?\n\nHere is the explain reports after both vacuum and vacuum analyze on the\ntable:\n---------------------------------------------\nparts=> explain select * from av_parts where partnumber = '123456';\nNOTICE: QUERY PLAN:\n\nIndex Scan using av_parts_partnumber_index on av_parts (cost=3.55 rows=32\nwidth=124)\n\nEXPLAIN\nparts=> explain select * from av_parts where nsn = '123456';\nNOTICE: QUERY PLAN:\n\nSeq Scan on av_parts (cost=194841.86 rows=3206927 width=124)\n\nEXPLAIN\n-------------------------------------------------\n\nThis is how I create the 2 indexes:\n-------------------------------------------------\nCREATE INDEX \"av_parts_partnumber_index\" on \"av_parts\" using btree\n ( \"partnumber\" \"varchar_ops\" );\nCREATE INDEX \"av_parts_nsn_index\" on \"av_parts\" using btree\n ( \"nsn\" \"varchar_ops\" );\n-------------------------------------------------\n\nTable = av_parts\n+----------------------------------+----------------------------------+-------+\n| Field | Type |Length|\n+----------------------------------+----------------------------------+-------+\n| itemid | int4 not null default nextval ( |4 |\n| vendorid | int4 |4 |\n| partnumber | varchar() |25 |\n| alternatepartnumber | varchar() |25 |\n| nsn | varchar() |15 |\n| description | varchar() |50 |\n| condition | varchar() |10 |\n| quantity | int4 |4 |\n| rawpartnumber | varchar() |25 |\n| rawalternatenumber | varchar() |25 |\n| rawnsnnumber | varchar() |15 |\n| date | int4 |4 |\n| cagecode | varchar() |10 |\n+----------------------------------+----------------------------------+-------+\nIndices: av_parts_itemid_key\n av_parts_nsn_index\n av_parts_partnumber_index\n\nThanks,\nOle Gjerde\n\n", "msg_date": "Thu, 22 Jul 1999 13:19:18 -0500 (CDT)", "msg_from": "Ole Gjerde <[email protected]>", "msg_from_op": false, "msg_subject": "Index not used on simple select" }, { "msg_contents": "Ole Gjerde <[email protected]> writes:\n> parts=> explain select * from av_parts where nsn = '123456';\n> Seq Scan on av_parts (cost=194841.86 rows=3206927 width=124)\n> [ why isn't it using the index on nsn? ]\n\nThat is darn peculiar. You're probably managing to trigger some nitty\nlittle bug in the optimizer, but I haven't the foggiest what it might\nbe.\n\n> Indices: av_parts_itemid_key\n> av_parts_nsn_index\n> av_parts_partnumber_index\n\nOne bit of info you didn't provide is how that third index is defined.\n\nShipping your 4-million-row database around is obviously out of the\nquestion, but I think a reproducible test case is needed; it's going to\ntake burrowing into the code with a debugger to find this one. Can\nyou make a small test case that behaves the same way? (One thing\nto try right away is loading the same table and index definitions into\nan empty database, but *not* loading any data and not doing vacuum.\nIf that setup doesn't show the bug, try adding a couple thousand\nrepresentative rows from your real data, vacuum analyzing, and then\nseeing if it happens.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Jul 1999 10:19:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Index not used on simple select " } ]
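Tom's suggested reduction procedure, spelled out as SQL (the table and column names follow the report; the test table itself is hypothetical, and the data-loading step is whatever sample of the real rows is handy):

    -- empty-table check first: same definitions, no data, no vacuum
    CREATE TABLE av_parts_test (partnumber varchar(25), nsn varchar(15));
    CREATE INDEX av_parts_test_nsn_index ON av_parts_test
        USING btree (nsn varchar_ops);

    EXPLAIN SELECT * FROM av_parts_test WHERE nsn = '123456';

    -- if that uses the index, load a few thousand representative rows,
    -- then repeat:
    VACUUM ANALYZE av_parts_test;
    EXPLAIN SELECT * FROM av_parts_test WHERE nsn = '123456';

The point at which the plan flips from index scan to sequential scan narrows down where in the optimizer the misestimate happens.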
[ { "msg_contents": "I have committed changes that clean up the *.h files for includes, and\naddes tools/pginclude so people can see the tools I wrote to do this.\n\nNow, I will be going after the *.c files, removing unused includes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Jul 1999 11:25:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "#includes" } ]